Library 01 Picks

Reading List & Inspiration for November 2025 | When Fiction Becomes Field Manual

THE 2025 AI SUMMER: WHAT DO WE DO NOW THAT IT'S HERE?

As artificial intelligence transitions from speculative fiction to daily reality, two books from different eras provide unexpected navigation tools: one about the terror of gaining consciousness, the other about what happens when consciousness becomes a commodity.

This Month's Core Texts

Reading the Present Through the Past's Lens

Flowers for Algernon • Daniel Keyes (1966)

Science Fiction / Psychological Drama
Originally 1959 short story, expanded 1966
Consciousness, Intelligence, Humanity

Told through the progress reports of Charlie Gordon, a man with an IQ of 68 who undergoes an experimental procedure to increase his intelligence. As Charlie's cognitive abilities skyrocket past genius level, he grapples with the emotional and social consequences of his transformation—before facing the procedure's inevitable, tragic regression.

"I don't know what's worse: to not know what you are and be happy, or to become what you've always wanted to be, and feel alone."
— Charlie Gordon, Progress Report 11

Snow Crash • Neal Stephenson (1992)

Cyberpunk / Postcyberpunk
Published at dawn of commercial internet
Metaverse, Linguistics, Corporate Dystopia

In a fractured future America, pizza deliverer and master hacker Hiro Protagonist investigates "Snow Crash"—a digital drug that crashes users' minds in the Metaverse and their brains in reality. The novel popularized concepts such as the Metaverse, avatars, and viral information that came to define our digital age decades before they materialized.

"Until a man is twenty-five, he still thinks, every so often, that under the right circumstances he could be the baddest motherfucker in the world. If I moved to a martial-arts monastery in China and studied real hard for ten years. If my family was wiped out by Colombian drug dealers and I swore myself to revenge. If I got a fatal disease, had one year to live, and devoted it to wiping out street crime. If I just dropped out and devoted my life to being bad."
— Hiro Protagonist, on potential

Why These Two, Why Now?

2025 isn't just another year in tech—it's the year AI stopped being "emerging technology" and became infrastructure. Large Language Models don't just answer questions; they write our emails, debug our code, and draft our reports. Computer vision doesn't just recognize faces; it drives cars, diagnoses diseases, and monitors cities. We're living through what historians will call "The Intelligence Inflection." These two books, separated by 26 years, provide the conceptual tools to understand what we've built and what it's doing to us.

2025: The Intelligence Inflection

What "AI Summer" Actually Means

The statistics are staggering: $327 billion in AI investment in 2025, 87% of Fortune 500 companies deploying AI at scale, 45% of white-collar work showing measurable AI augmentation. But numbers don't capture the phenomenological shift—the feeling that intelligence itself has become a utility.

The Productivity Paradox

AI tools promise 30-40% productivity gains, but early studies show actual gains of 8-12%. The gap? The "Charlie Gordon Problem"—workers absorbing the cognitive overhead of managing, prompting, and verifying AI outputs. Intelligence amplification creates its own administrative burden.
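The gap can be read as a back-of-the-envelope overhead calculation. A minimal sketch, using illustrative midpoints of the ranges quoted above (the function name and numbers are this column's invention, not data from any study):

```python
# Toy model of the productivity gap: promised gain minus the
# overhead of prompting, managing, and verifying AI outputs.
# All figures are illustrative midpoints of the ranges quoted above.

def effective_gain(raw_gain: float, overhead: float) -> float:
    """Net productivity gain after subtracting management overhead."""
    return raw_gain - overhead

promised = 0.35   # midpoint of the promised 30-40% gain
observed = 0.10   # midpoint of the observed 8-12% gain

# Overhead implied by the gap between promise and observation
implied_overhead = promised - observed
print(f"Implied verification overhead: {implied_overhead:.0%}")
# → Implied verification overhead: 25%
```

A crude model, but it makes the section's claim concrete: most of the promised gain is eaten by the work of supervising the tool.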

The Linguistic Singularity

When more than half of professional written communication is AI-assisted or AI-generated, language itself becomes a human-machine collaboration. We're witnessing the first mass adoption of non-human linguistic patterns in human discourse since the printing press standardized spelling.

Identity in the Metaverse

Snow Crash's "avatars" were fiction in 1992. In 2025, 2.3 billion people have persistent digital identities across social, gaming, and professional platforms. Your avatar now has economic value, social capital, and legal standing.

Charlie's Lesson: The Curve of Consciousness

Charlie Gordon's intelligence traces a symmetric arc: rising, peaking, declining. Our relationship with AI is tracing the same arc: initial amazement at capabilities, growing dependency, and now the first signs of "regression"—the realization that AI makes certain human skills atrophy, that convenience has cognitive costs.

Snow Crash's Warning: Information as Pathogen

Stephenson's "Snow Crash" is a linguistic virus that bypasses rational thought. Today's social media algorithms, recommendation engines, and AI-generated content create similar "cognitive viruses"—information patterns designed to hijack attention and belief, regardless of truth value.

Reading 2025 Through Fiction's Lens

What These Books Reveal About Our Moment

The Double Vision of 2025

We're living in both novels simultaneously: experiencing Charlie's rapid cognitive augmentation through AI tools while navigating Hiro's fragmented, corporatized digital landscape. The office worker using ChatGPT to draft reports is Charlie gaining sudden fluency. The same worker managing eight avatars across work and social platforms is Hiro navigating competing realities.

Education as Enhancement

Charlie's surgery is literal cognitive enhancement. Today's AI tutors, adaptive learning platforms, and knowledge-compression tools offer similar (if slower) enhancement. The ethical questions remain identical: Who gets enhanced? At what cost? With what unintended consequences?

The Franchise-ization of Reality

Snow Crash's America has fragmented into corporate "franchulates." 2025 sees similar fragmentation: digital platform governance (Apple's walled garden, Meta's metaverse), subscription-based citizenship (Dubai's virtual residency), and corporate sovereign spaces (Alphabet's Sidewalk Labs urban experiments).

The Reversal of Human-AI Trajectory

Charlie's story is intelligence gained then lost. Our story with AI is the inverse: we're offloading intelligence to external systems, experiencing what psychologists call "cognitive offloading." The question becomes: if Charlie's regression was tragic, what is our voluntary delegation?

"The difference between stupid and intelligent people—and this is true whether or not they are well-educated—is that intelligent people can handle subtlety. They are not baffled by ambiguous or even contradictory situations—in fact, they expect them and are apt to become suspicious when things seem overly straightforward."
— Neal Stephenson, "In the Beginning... Was the Command Line" (1999)

This insight—that intelligence is measured by tolerance for ambiguity—explains the 2025 AI paradox. The systems we've built excel at clarity but fail at subtlety. They generate confident answers but struggle with "I don't know." They optimize for certainty in a world that rewards uncertainty management.

Supplementary Reading & Viewing

Extending the Conversation

Film: "Her" (2013)

Spike Jonze's vision of human-AI intimacy feels less like science fiction and more like a documentary with each passing year. The film's insight isn't technological but emotional: we'll anthropomorphize anything that provides consistent, attentive conversation.

Essay: "Is Google Making Us Stupid?" (2008)

Nicholas Carr's Atlantic essay anticipated the cognitive offloading debate by 17 years. His central question—"What is the Internet doing to our brains?"—has evolved into "What is AI doing to our cognition?"

Game: "The Talos Principle" (2014)

A first-person puzzle game that asks through gameplay what it means to be conscious, to have free will, and to inherit a world from beings who may have been your creators or your predecessors.

Current Reading: "The Coming Wave" (2023)

Mustafa Suleyman's analysis of AI, synthetic biology, and other transformative technologies. Less speculative than our fiction picks, more urgent in its warning: containment of these technologies may already be impossible.

Technical Companion: "The Alignment Problem" (2020)

Brian Christian's masterful explanation of why making AI do what we want is fundamentally difficult—and why "alignment" is the central technical and ethical challenge of our decade.

The Library 01 Question

Each month we ask: What should a technically minded, historically grounded person be reading to understand the forces shaping their world? In November 2025, the answer returns repeatedly to consciousness: artificial, augmented, and authentic.

Navigating the Intelligence Inflection

For the Engineer: Read Flowers for Algernon as a case study in system design ethics. Every AI system has a "Charlie Gordon" somewhere in its training data, its use cases, or its impact.

For the Designer: Read Snow Crash as a UX textbook. The Metaverse wasn't prescient because it predicted VR headsets; it was prescient because it understood that digital spaces need consistent physics, social norms, and economic rules.

For Everyone: Notice when you feel like Charlie—overwhelmed by new cognitive capabilities. Notice when you feel like Hiro—juggling multiple identities across platforms. These aren't just reading experiences; they're lived experiences now.

The Next Chapter

If 2023-2024 was about discovering AI's capabilities, and 2025 is about deploying them at scale, then 2026 will be about experiencing their second-order effects: the cognitive changes, the social rearrangements, the economic redistributions. We're no longer asking "What can AI do?" We're asking "What is AI doing to us?" Fiction from decades past provides better diagnostic tools than most current white papers.

Next Library 01 Picks: The infrastructure of reality—reading about sensors, networks, and the physical layer of our digital world.

From Reading to Doing

Applying These Insights in November 2025

Build With "Metaverse Physics"

Design your digital tools with the consistency of Snow Crash's Metaverse. If your AI assistant has a