The Headline

Source: The Guardian

Translation: Amazon isn’t deploying AI to make work better. It’s deploying AI to make the workforce smaller, and using the productivity narrative to manage the optics of that transition while it happens.

What’s Actually Happening

More than half a dozen current and former Amazon corporate employees across engineering, design, research, and supply chain told the Guardian that Amazon is mandating AI use across all aspects of work: tracking adoption through management dashboards, tying it to promotion criteria, and pressuring employees to demonstrate AI fluency regardless of whether the tools are fit for the task. Many say the tools are half-baked, frequently hallucinate, generate flawed code, and add work rather than removing it.

One engineer described her job as “trying to AI my way out of a problem that AI caused.” Several were laid off shortly after speaking out.

The pressure is structural. Amazon has cut 30,000 corporate workers in four months (nearly 10% of its corporate workforce). CEO Andy Jassy has publicly predicted AI-driven productivity gains will reduce headcount further, while simultaneously denying that current layoffs are AI-driven. The company is spending $200 billion on AI infrastructure this year and has committed $50 billion to OpenAI. The internal logic, as one former product manager put it, is not subtle: “If you say you automated away two hours of someone’s job, you need to convert that into savings on that job title. That’s the unspoken math.”

The Distortion

The primary distortion is the productivity narrative itself.

Amazon’s official position is that AI tools help employees work more efficiently and automate time-consuming tasks. The employee accounts tell a different story: tools deployed from hackathons, code full of errors requiring senior engineer review, 13-hour outages linked to AI-generated changes, and workers spending more time vetting AI output than they would have spent doing the work themselves. Efficiency is the frame. The reality, for many of these workers, is the opposite.

The secondary distortion is the voluntarism language. Amazon’s spokesperson says the company does not mandate AI tool use and does not instruct managers to consider AI utilization in evaluations. Simultaneously, managers have dashboards tracking team AI adoption with targets of 80% weekly usage, promotion documents now include questions about AI leverage, and employees report being told daily by their principal engineers to increase their numbers. The gap between official policy and operational reality is not ambiguous. It is the policy.

The deepest distortion is the innovation framing. Quarterly hackathons repositioned as generative AI hackathons, internal tools deployed as experiments, "learn-as-you-work" adopted as the official training philosophy: all of it frames what is functionally a large-scale workforce restructuring as a culture of curiosity and experimentation. The workers living inside it describe something closer to surveillance, coercion, and the systematic transfer of their tacit knowledge to systems designed to replace them. One engineer put it plainly:

“Part of my new job role, it feels like, is being asked to train the AI to essentially replace you.”

The Incentive

For Amazon’s leadership, the incentive is the unspoken math made legible. At $200 billion in AI infrastructure spending this year, the investment requires a return. The most immediate return available is headcount reduction. Every hour of work automated is a cost that can be removed from the payroll. The productivity narrative manages the optics of that transition; it frames replacement as augmentation, surveillance as support, and coercion as opportunity.

For managers, the incentive is self-preservation. In an environment where AI adoption is tracked, reported upward, and implicitly tied to team performance evaluations, pushing AI use is rational career behavior regardless of whether it produces genuine productivity gains. The dashboard metric is what gets measured. What gets measured is what gets managed. Whether the code is good is a slower, harder signal.

For the remaining workforce, the incentive structure has been deliberately narrowed. Embrace AI enthusiastically, demonstrate adoption, and survive the next round of cuts. Question the tools, document their failures, or organize around concerns, and risk being filtered out. One employee’s description of the promotion template says it clearly: the company wants to keep the people who support the investment and filter out those who don’t. That is not a productivity program. It is a loyalty test administered through tooling metrics.

For the broader labor market, Amazon's incentive matters beyond its own walls. The second-largest employer in the US has historically exported its management practices (warehouse surveillance, performance optimization, algorithmic management) to industries far beyond its own. What Amazon normalizes for white-collar workers will not stay at Amazon.

The Consequence

The immediate consequence is a workforce producing lower-quality output under heavier surveillance, in a state of sustained anxiety about replacement. Code reviews full of errors, AI-generated outages, engineers spending more time correcting AI mistakes than building: these are the predictable results of mandating tool adoption faster than the tools are ready and faster than workers can develop genuine competency in using them.

The structural consequence is a knowledge transfer problem that compounds over time. Early-career engineers who offload their learning to AI are not developing the judgment required to identify when AI output is wrong. Sarah’s concern (that using AI is stunting her learning curve) is not sentimental. It is a systems observation. The human capacity to audit AI output depends on having developed the underlying skill independently first. Organizations that skip that development phase are not building AI-augmented workforces. They are building workforces dependent on AI they cannot reliably evaluate.

The surveillance consequence is the one Nick Srnicek identifies as structural rather than incidental: deploying AI tools at scale requires detailed knowledge of personal workflows, and that knowledge flows upward to management. The result is more than tracking of AI usage; it is granular visibility into how every worker thinks, works, and spends their time. That level of oversight was previously impossible for white-collar roles, and the shift fundamentally changes the nature of the employment relationship.

The longer-term consequence is the one Jack identifies in Jassy’s all-hands comments: when endless growth is no longer available as a profit mechanism, the remaining lever is squeezing more from fewer people. AI is the instrument of that squeeze. The workers who remain after the current round of cuts will be expected to produce more, faster, with less support, in an environment of continuous monitoring. The productivity gains AI was supposed to deliver will instead be absorbed as margin — not returned to workers as reduced workload or to customers as lower prices, but captured as the financial justification for the headcount reductions that preceded them.

The Calibration

The honest read of what Amazon is doing is not a technology story. It is a labor story with a technology instrument.

The productivity narrative exists to manage a transition that would otherwise require a harder public conversation: a company with $200 billion in AI commitments and a structural incentive to reduce its largest cost, labor, is using adoption mandates, surveillance infrastructure, and career pressure to accelerate that reduction while maintaining the appearance of innovation investment.

The calibration for workers, inside and outside Amazon, is to distinguish between AI adoption that genuinely extends their capability and AI adoption that primarily serves as a documentation system for their eventual replacement. Those are not the same thing. The difference is visible in whether the tools are chosen by workers for specific tasks or mandated by management for all tasks; whether adoption is measured by outcomes or by usage metrics; and whether the knowledge generated by human-AI interaction flows back to workers or upward to management dashboards.

The calibration for the broader economy is the one Amazon's size makes unavoidable. When the second-largest employer in the US normalizes AI adoption tracking, promotion criteria tied to tool usage, and the systematic transfer of worker knowledge to replacement systems, all framed as "innovation culture," those practices do not stay inside Amazon. They become the benchmark against which other large employers measure their own AI programs.

The supply chain engineer who said you don't look at a problem and ask "how do I use this hammer?" but whether the problem needs a hammer at all was not resisting technology. She was describing what competent professional judgment looks like. The fact that this observation cost several of her colleagues their jobs is the most precise measurement available of what Amazon is actually optimizing for.

It is not productivity. It is compliance.

Next calibration: 1 pm (GMT). Stay sharp.