The Headline
Source: Fast Company
Translation: AI is not a strategic upgrade. It is a diagnostic instrument, and most organizations are about to discover that what they thought was strategy was actually inertia dressed in PowerPoint.
What’s Actually Happening
The argument is structurally simple and organizationally devastating: AI amplifies existing organizational logic. It does not replace it. Companies with clear strategy, integrated data, and aligned incentives will use AI to accelerate their advantages. Companies without those things will use AI to scale their confusion faster, cheaper, and with greater fluency than ever before.
The mechanism is not mysterious. Large language models generate statistically plausible outputs. They optimize for the signals you give them. If your data is fragmented, AI surfaces the fragmentation at scale. If your incentives are misaligned, AI optimizes the wrong outcomes. If your strategy is vague, AI produces beautifully written vagueness. The tool does not import intelligence into the organization. It interacts with whatever intelligence (or absence of it) already exists there.
The historical parallel holds: the internet punished companies that treated it as a brochure; mobile punished those that clung to desktop assumptions; cloud punished firms that confused owning hardware with building capability. AI goes further because it operates at the level of cognition itself, in every domain where organizations make consequential decisions.
The Distortion
The dominant distortion in corporate AI discourse is the framing of AI adoption as a strategic act. Announcing an AI-first agenda, reorganizing divisions around new tools, publishing an AI roadmap: none of these constitutes strategy. They merely constitute activity. And AI, faithfully, will scale that activity while leaving the underlying absence of strategy entirely intact.
The subtler distortion is the efficiency narrative. Cost-cutting and headcount reduction are the most legible AI stories available to executives: measurable, quarterly, translatable into earnings language. They are also, the author argues, the least strategically significant application of a general-purpose technology. The productivity paradox of the IT era (Solow’s observation that computers were visible everywhere except in productivity statistics) resolved not through efficiency gains but through organizational redesign, skill development, and new business models that took years to materialize and were poorly captured in early data. AI will follow the same trajectory. The organizations optimizing for short-term optical clarity are trading structural advantage for a number that looks good this quarter.
The deepest distortion is the assumption that intelligence can be imported. Organizations are not empty vessels. They are complex systems of incentives, legacy processes, tacit assumptions, fragmented data, and political equilibria. AI enters that system and interacts with it. Fluency is not coherence. Activity is not strategy. Shared tools do not produce shared judgment.
The Incentive
For technology vendors, the incentive is to frame AI as an additive capability (something that improves whatever it touches) because the alternative framing, that AI will expose organizational dysfunction, is not a product pitch. No one buys a diagnostic that tells them their strategy is PowerPoint-deep. They buy a productivity engine. The marketing precedes the reckoning.
For executives, the incentive is narrative legibility. AI-first announcements signal modernity. Efficiency programs produce measurable results. Strategic introspection produces discomfort and does not translate into next quarter's earnings call. The organizations that default to cost-cutting narratives are not being irrational; they are responding rationally to a set of incentives that rewards short-term optics over long-term structural investment.
For boards, the incentive is accountability avoidance. Demanding an AI roadmap is a way of demonstrating governance without requiring the harder conversation about whether the organization has a coherent theory of how it creates value. The roadmap substitutes for the strategy it was supposed to serve.
For the handful of organizations genuinely doing this well, the incentive is compounding advantage. If AI amplifies existing organizational logic, then organizations that have done the hard work of building clear strategy, integrated data, and aligned incentives will accelerate away from those that haven't. The gap between them will not be visible in early adoption metrics. It will be visible in outcomes, three to five years from now.
The Consequence
The near-term consequence is a proliferation of AI initiatives that produce local ROI metrics, cost savings, and press releases while leaving the strategic architecture of the organization unchanged. Confusion gets automated. Misalignment gets optimized. Vagueness gets scaled. The organization moves faster in the wrong direction, with better prose.
The structural consequence is a new form of competitive stratification. Infrastructure is commoditizing rapidly — foundation models are widely accessible, cloud is shared, open-source ecosystems evolve at speed. As infrastructure becomes common, differentiation moves to the organizational layer: who learns fastest, who updates beliefs systematically, who treats AI outputs as hypotheses rather than answers. That competition is invisible in early adoption metrics but decisive in long-term outcomes.
The consequence for executives who miss this is not that their AI initiatives fail. It is that their AI initiatives succeed…at automating the assumptions that were already limiting them. They will have moved faster, more efficiently, and with greater scale toward outcomes their strategy was never designed to produce. The J-curve will eventually make the divergence legible. By then, the structural gap will be difficult to close.
The consequence for organizations that get it right is the inverse: AI becomes not a cost center or a productivity program but an institutional learning system, compressing feedback cycles, surfacing anomalies, testing counterfactuals, and systematically updating the beliefs on which decisions are made. That is a different kind of organization from most that currently exist.
The Calibration
The author's most useful reframe is also the simplest: the right first question is not "How can AI improve this process?" It is "What assumptions are embedded in this process, and what happens if they no longer hold?"
That question is uncomfortable precisely because it forces organizations to confront contradictions they have long managed to ignore. Fragmented data architectures reflect years of underinvestment in integration. Contradictory KPIs signal governance failure. Inconsistent AI outputs expose cultural fragmentation. AI did not create these problems. It illuminated them. The discomfort is the point.
The calibration for executives is to resist the seduction of the efficiency narrative long enough to ask what they actually believe about how they win, and whether they are prepared for AI to challenge that belief. That is not a technology question. It is a strategic one. And it is the question most AI roadmaps are carefully designed to avoid.
The broader calibration is historical: general-purpose technologies do not deliver their true value through simple efficiency programs. They deliver it through organizational redesign that is intangible, slow, and poorly captured in early metrics. The companies that will accelerate in the AI era are not those who automate fastest. They are those who learn fastest and who have built the organizational conditions in which learning is possible.
AI will not replace strategy. But it will make the absence of one impossible to hide. That is not a technology risk. It is a leadership one.
Next calibration: 1 pm (GMT). Stay sharp.