AI Is Going to Disrupt Jobs. Eventually.
Maybe it's more turtle than hare.
I’m writing this on a four-hour Amtrak ride back home, watching the Midwest roll past the window. My mind finally has time to wander, and I’ve realized something has been weighing on me more than I’ve wanted to admit.
Every time another viral piece of content about AI destroying the workforce makes the rounds, I feel a small knot in my stomach.
I have a daughter who will enter the workforce in the 2030s. And I would like to retire sometime around 2040.
Those timelines make the current wave of AI predictions feel a little more personal.
Some of the essays circulating right now paint a bleak picture of the next few years. Entire professions disappearing. Corporations run by autonomous agents. Office work collapsing within a decade.
One dystopian scenario, “2028,” imagines a wave of AI-driven unemployment sweeping across the global economy. The piece clearly struck a nerve, collecting nearly 8,000 likes on Substack. Andrew Yang has made a similar argument, suggesting that advances in artificial intelligence could ultimately bring about the collapse of traditional office work.
These are serious arguments, and hard to dismiss because they seem so damn plausible.
But the more I look at the research from organizational science and knowledge management, and reflect on my own hard-earned experience in the corporate trenches, the more I think these predictions share a hidden assumption that doesn’t hold up in the real world.
They assume knowledge work is mostly explicit and codified. Decades of organizational research show that much of the knowledge inside companies is tacit, experiential, and socially learned. That makes established organizations far harder to “agentify” than current forecasts assume. Not permanently harder. But harder in ways that stretch the disruption timeline well beyond what anyone is currently selling.
My argument is not that the job apocalypse isn’t coming. It’s that it’s on backorder.
Agents Can’t Read Minds
One of the foundational ideas in knowledge management comes from philosopher and scientist Michael Polanyi.
His most famous line is just seven words:
“We know more than we can tell.” (Polanyi)
Polanyi was describing what researchers now call tacit knowledge: knowledge that is non-verbalized, intuitive, and difficult to codify.
Examples are everywhere once you start noticing them.
An engineer recognizing that a design will create operational problems later, even though the code technically matches the spec today.
A leader sensing the political dynamics inside a meeting.
A customer support person recognizing that a frustrated caller does not actually need more troubleshooting steps, but reassurance, empathy, and someone willing to go off script.
A fair objection here is that modern AI systems don’t learn only from written text. Reinforcement learning, agent-based simulation, and feedback loops allow AI to acquire something closer to experiential knowledge. But even granting that, it addresses only part of the problem. The deeper barrier isn’t whether AI can develop judgment in isolation. It’s whether that judgment can operate inside organizations that are fundamentally social and political structures, built on trust, reputation, and unwritten rules accumulated over many years.
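The capability half of that objection is easy to illustrate in miniature: a toy reinforcement learner acquires its behavior entirely from feedback it experiences, never from a written description of the task. Here is a generic tabular Q-learning sketch; the states, actions, and reward function are all invented for the example and don’t describe any particular system:

```python
# Toy illustration: behavior learned from experienced rewards, not from text.
# Generic tabular Q-learning; every name here is invented for the example.
import random

STATES = ["routine_ticket", "angry_customer"]
ACTIONS = ["resolve", "escalate"]

def reward(state: str, action: str) -> float:
    # The "rules" of this toy world. The learner never reads them;
    # it only experiences the rewards they produce.
    if state == "angry_customer":
        return 1.0 if action == "escalate" else -1.0
    return 1.0 if action == "resolve" else -0.5

q = {(s, a): 0.0 for s in STATES for a in ACTIONS}  # value estimates
alpha = 0.1  # learning rate

for _ in range(2000):
    s = random.choice(STATES)
    # Epsilon-greedy: mostly exploit the current estimates, sometimes explore.
    if random.random() < 0.2:
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda x: q[(s, x)])
    q[(s, a)] += alpha * (reward(s, a) - q[(s, a)])

for s in STATES:
    best = max(ACTIONS, key=lambda x: q[(s, x)])
    print(f"{s}: learned to {best}")
```

The loop reliably learns to escalate the angry customer. What it cannot learn from this setup is whether escalating that particular customer will embarrass a colleague, burn political capital, or break an unwritten rule, which is exactly the part of the objection that remains.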
Organizations Are Messy Human Systems
Political scientist Herbert Simon observed that organizations operate under bounded rationality: people decide with incomplete information and limited attention, satisficing rather than optimizing, and their decisions emerge through negotiation and institutional constraints. That framing raises uncomfortable questions for anyone betting on a near-term agentic AI revolution.
Consider IBM’s rollout of Watson Health. IBM invested heavily in the premise that AI could transform clinical decision-making, a domain rich in explicit data. What they encountered instead was a system that struggled to navigate the informal knowledge structures of actual hospital environments: physician hierarchies, institutional politics, and the kind of contextual judgment that experienced clinicians build over years. By 2021, IBM was selling off the unit at a significant loss. The technology wasn’t the limiting factor. The organizational environment was.
That story isn’t an anomaly. It’s a preview of the challenges that await any aggressive attempt to automate knowledge work inside established organizations.
The unwritten rules. The quiet etiquette that everyone knows but no one says. When to escalate. When to let something slide. These are not technical problems. They are social ones.
We’ve Seen This Movie Before
Technological revolutions often go through a phase where investment runs well ahead of real adoption.
During the late 1990s dot-com boom, telecommunications companies spent more than $500 billion (roughly $1 trillion in today’s dollars) building fiber-optic infrastructure based largely on projections about future internet traffic. Much of it sat unused. The industry had a name for it: dark fiber.
The internet did eventually grow into that infrastructure and restructure enormous parts of the economy. But it took nearly two decades. And most workers who were mid-career when Netscape launched in 1994 retired with their careers largely intact.
The current AI boom shows similar dynamics. Technology companies are investing massive sums into GPUs, data centers, and energy infrastructure, through partnerships involving Microsoft, OpenAI, NVIDIA, Meta, and others, justified largely by projections about future demand rather than current productivity gains. That’s partly a bet on the future. But it’s also worth noting who is placing the bet. If you sell chips, cloud infrastructure, or AI tools, the story that AI will soon transform the entire economy is an extraordinarily convenient one.
Big technological narratives and financial self-interest seem to travel together.
What makes AI particularly good at fueling these narratives is that it’s impressive enough to spark the imagination, but ambiguous enough that its limits are hard to challenge. Dark fiber, at least, was just fiber. You could see it sitting unused. AI’s unrealized potential is much easier to keep selling.
Some Organizations Will Agentify Soon
Where AI may reshape organizations most dramatically is in companies that don’t exist yet, or are in very early stages. This is worth taking seriously because it’s where the more credible version of the disruption argument lives.
Startups carry no years of accumulated tacit knowledge to untangle, no legacy systems, no entrenched cultural dynamics, no undocumented workarounds baked into muscle memory. They can design everything around agents from the beginning. AI-native organizations will likely be very flat: a handful of humans providing oversight and judgment while agents handle much of the execution underneath.
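To make that shape concrete, here is a minimal sketch of the flat pattern: agents execute routine work, and anything they flag as low-confidence escalates to a thin human oversight layer. Every name in it (Task, run_agent, the confidence threshold) is a hypothetical illustration, not a real API:

```python
# A minimal sketch of a flat, AI-native workflow: agents execute,
# humans review only what the agent itself flags as uncertain.
# All names and the threshold are invented for illustration.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.8  # below this, a human makes the call

@dataclass
class Task:
    description: str

@dataclass
class AgentResult:
    output: str
    confidence: float  # agent's self-reported confidence, 0.0 to 1.0

def run_agent(task: Task) -> AgentResult:
    # Stand-in for a real agent call (an LLM with tools, for example).
    # Here we fake it: routine work comes back confident, novel work doesn't.
    routine = "invoice" in task.description.lower()
    return AgentResult(
        output=f"Draft handling for: {task.description}",
        confidence=0.95 if routine else 0.4,
    )

def human_review(task: Task, result: AgentResult) -> str:
    # The oversight layer: in a real organization this is a person
    # exercising judgment; here we just mark the output as reviewed.
    return f"[human-reviewed] {result.output}"

def process(task: Task) -> str:
    result = run_agent(task)
    if result.confidence >= CONFIDENCE_THRESHOLD:
        return result.output  # agent handles execution end to end
    return human_review(task, result)  # uncertainty escalates to a human

if __name__ == "__main__":
    for desc in ["Reconcile March invoices", "Respond to an angry enterprise client"]:
        print(process(Task(desc)))
```

The interesting design property is how shallow the hierarchy is: the entire org chart is effectively one if statement deep, which is roughly what “a handful of humans providing oversight” looks like in code.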
The concern this raises is real. If AI-native startups outcompete established organizations, jobs don’t disappear from within companies so much as the companies themselves lose ground. That is still disruptive.
But that process unfolds over decades of market share erosion, investment cycles, and organizational turnover. It is not a year- or two-out displacement event. It looks more like the slow replacement of department stores by e-commerce than a wave crashing over the economy all at once.
Established companies will adapt too, but incrementally, and with far more humans-in-the-loop than the more dramatic forecasts suggest.
In the End
The continued importance of humans in organizations is not wishful thinking. It is grounded in decades of research (and reality) on how organizations actually work, how knowledge actually moves, and how slowly even well-resourced organizations change.
The gap between what AI can do in a demo and what it can do inside a messy, politically complicated, legacy-laden organization is real and large. That gap will close. Gradually. Unevenly. With false starts, “dark fiber” moments, and bold predictions that age poorly.
By the time my daughter enters the workforce in the 2030s, AI agents will likely be part of her daily working environment. They may handle significant portions of tasks that humans perform today. But the organizations she joins will still be built around human judgment, human politics, and human relationships, because that is what organizations have always required, and the research gives us little reason to think that changes quickly.
The disruption is coming. Maybe it’s more turtle than hare.


Postscript

Living with uncertainty is hard. We see it in the product-model re-orgs we run: we’re asking people to go from “here’s my plan, which I control with my skill set” to “here are my bets based on discovery, but ultimately I’m operating in the domain of uncertainty, and I need a new bag of tricks.”

The same mechanism, writ large, drives the AI VUCA (volatility, uncertainty, complexity, ambiguity) and FUD (fear, uncertainty, doubt) that shows up in nearly every conversation in 2026, sometimes annoyingly so.

There will be changes. Those who figure that out won’t see it as a disruption, or they’ll seize the day, depending on where they are in the timeline. New companies and new generations have an advantage here, the way students who grew up taking open-book calc tests with graphing calculators never experienced the calculator as a threat.

As for mid-career folks like me, relying on first principles and a liberal arts education has always served me well. I’m a critical thinker capable of synthesizing diverse information into a decision. These are the tools I’m sharpening. Combine that with staying on top of things, and I think we’ll make it out OK.