Beyond "Adapt Faster"
How to be a better reader in the Age of AI
A post about AI went viral recently, racking up over 55 million views. (Spoiler: It’s something BIG.) The author works in the industry and describes watching AI evolve from a helpful tool into a system that does his entire job better than he can. He walks away from his computer for four hours and comes back to find complex work finished. His advice: learn these tools now, get financially stable, adapt faster than everyone around you, or fall behind.
He’s right that AI is improving faster than most people realize. But the piece never acknowledges that his urgent warnings might be shaped by his stake in rapid adoption. It also (like most AI hype content) never addresses what principles should constrain how these systems are built and deployed, or what accountability exists for social impacts. When a handful of companies control technology this powerful, the frameworks guiding development and deployment matter enormously. Right now those frameworks are almost entirely technical and commercial.
Over the last few months, I’ve been writing about what look like separate problems: Silicon Valley’s inability to tell coherent stories about the future, the collapse of meaningful political positions into aesthetic choices, the way digital spaces destroy the communities they promise to build, the return of “purpose” in business after a brief hiatus.
These aren’t separate problems. They’re symptoms of the same crisis the viral post reveals: a handful of people are building technology that will reshape society, and the principles guiding those decisions are almost entirely self-serving.
Is every story compromised?
We have multiple competing narratives about AI, and each one is either financially compromised, philosophically incoherent, or both. It’s what makes this moment so disorienting.
The hucksters tell us AI will revolutionize everything and we need to adapt now or get left behind. But they work for AI companies, invest in AI startups, or profit from AI adoption. The viral post’s author works in AI. When one critic points out that “the people with the largest megaphones are the people with the largest financial stakes,” this is what she means. Is it analysis or marketing? Genuine concern or manufactured urgency? The audience can’t tell.
The doomers warn about mass unemployment and a permanent underclass. But they often work at the same companies building these systems, or at adjacent organizations. They describe catastrophic scenarios while continuing to develop the technology they claim is dangerous. The cognitive dissonance is severe.
The skeptics push back on timelines and technical claims. Jeremy Kahn makes valid points: quality metrics don’t exist for most professional work, enterprises can’t tolerate high error rates, governance systems aren’t ready. But notice what he’s debating: whether the transformation arrives in two years or ten. Both skeptics and boosters accept the same assumption: the trajectory is inevitable and the only questions are timing and practical implementation.
The tech elite building these systems can’t articulate coherent principles for what they’re doing. As I’ve written before, ideas are treated as interchangeable tools, adopted, tested, discarded depending on what the situation demands. There’s no stable protagonist, only shifting justifications. Even sympathetic audiences sense the inconsistency.
The accelerationists say build it all, move fast, and figure it out later. The frontier is open, expansion is good, and optimization is progress. But when prediction markets turn mass unemployment into a tradable asset, when the tools being built could eliminate half of entry-level jobs within years, “figure it out later” is an inadequate framework.
None of these narratives can be trusted. Each is shot through with conflicts of interest and internal contradictions. And there’s no neutral arbiter, no shared framework for evaluation, and no trusted institution that can adjudicate between them.
This is why the moment feels so destabilizing. We’re not simply arguing about which story is right (those days are long gone). We’re navigating a landscape where every story is compromised and we have no way to tell truth from sales pitch.
What “adapt faster” leaves out
The viral post captures urgency but reveals a gap: the dominant narrative focuses entirely on means, not ends. It’s all speed, power, and competitive advantage. Learn these tools or get left behind. But left behind on a path to where?
The post describes AI that can complete days of expert work autonomously, systems that will eliminate 50% of entry-level jobs within years. But if half of entry-level jobs disappear, we’re not talking about a personal development challenge you can hack your way out of. We’re talking about a social rupture of catastrophic proportions. The post acknowledges this scale (“the single most serious national security threat we’ve faced in a century”) but responds with individual survival tactics.
What’s missing is any account of what we’re adapting toward. What kind of society do we want on the other side of this? Who decides, and based on what principles?
The frontier logic embedded in “adapt faster” assumes these questions will resolve themselves through movement and competition. But that’s exactly the framework that’s exhausted.
Why stories won’t stick
You might expect Silicon Valley to construct a better narrative. Its attempts have failed because the intellectual culture that made tech successful makes sustained storytelling impossible. Good stories require consistency: protagonists whose principles constrain their actions. But when ideas are treated as interchangeable tools, adopted and discarded depending on what the situation demands, there is no stable protagonist, only shifting justifications.
This is why technology’s growing power creates resistance rather than trust. People resist outcomes, even beneficial ones, when they appear arbitrary or unmoored from principle. Power without narrative authority generates backlash that can’t be solved with better messaging. People sense when principles are contingent rather than real.
What this moment requires
The viral post is right that this is urgent. Right that AI is improving faster than expected. Right that people should engage with these tools seriously.
But “learn AI or fall behind” can’t be the extent of our response. That’s survival advice for individuals, and we need a framework for collective navigation.
I’ve written before about how we’re living through a crisis of meaning. The institutions that once gave us identity and purpose have frayed, leaving people unmoored. We make sense of the chaos by naming enemies and organizing around shared outrage instead of shared purpose.
The AI discourse shows the same pattern. Competing narratives, each one compromised. Framework-switching that prevents coherent storytelling. Individual survival tactics where collective frameworks used to be.
This is what it looks like when narrative authority collapses. We have too many stories, all competing with one another, most suspect, and all underpinned by incentives we can’t see or untangle. And in the age of engagement, the person with the biggest megaphone-of-the-moment wins by default.
Which means: we have to become better readers. Most stories about AI today are commercially or technically driven. When someone tells you the future is urgent, ask who profits from your urgency. When someone says transformation is inevitable, ask what they’re building and who benefits.
The Something Big viral post ends by saying “the future is already here, it just hasn’t knocked on your door yet.”
That might be true. But what’s arriving is a moment that requires your discernment, your ability to read these narratives for what they are and navigate accordingly.