The Headline

Source: The Guardian

This is not a story about a principled company drawing an ethical line. It is a story about where the line has moved and how far every company in the industry has already traveled to get here.

What’s Actually Happening

Anthropic has sued the Department of Defense, claiming that being blacklisted from government contracts after refusing to permit “any lawful use” of its AI violates its First Amendment rights. The two specific uses Anthropic objects to are domestic mass surveillance and fully autonomous lethal weapons. Everything else (target selection, threat analysis, classified military operations, handling of sensitive intelligence) Anthropic has already agreed to. Claude Gov, the military version of Anthropic’s model, is explicitly less restrictive than the civilian version. The government has reportedly used it for target selection in bombing campaigns against Iran.

The fight, in other words, is not about whether Anthropic’s AI is a weapon. It is about which two uses of the weapon cross a line that the overwhelming majority of uses do not.

The industry backdrop is the real story. Less than a decade ago, over 3,000 Google employees signed an open letter stating that Google should not be in the business of war, and the company dropped its Project Maven drone surveillance contract in response. That contract was taken over by Palantir. Project Maven is now the name of the classified system through which military personnel access Anthropic’s Claude. The circle has closed entirely, and the companies that drew the original lines are now operating on the far side of them.

The Distortion

The primary distortion is the framing of Anthropic’s lawsuit as a principled stand. It is a principled stand, but the principle being defended is a narrow residual boundary within an already extensive military partnership. Anthropic has modified its model for military use, agreed to classified operations, accepted targeting and threat analysis applications, and sued not to exit the relationship but to preserve the specific terms under which it continues. Describing this as a confrontation between safety-focused AI development and military overreach obscures that the confrontation is taking place well inside territory that would have been unthinkable to the Google employees who launched the Project Maven protest in 2018.

The secondary distortion is Dario Amodei’s framing. His January essay and subsequent public statements position Anthropic as a reluctant but responsible actor in the military AI space: worried about autonomous systems, mass surveillance, and the concentration of lethal power, but committed to arming democratic governments against autocratic adversaries. The logic is internally coherent. It is also a justification for the same trajectory every other major AI company has followed, articulated with more philosophical texture. The distinction between Anthropic and OpenAI, or Anthropic and Google, is increasingly about aesthetics and residual red lines, not about the fundamental question of whether powerful AI models belong inside military targeting systems.

The deepest distortion is the White House’s characterization of Anthropic as “a radical left, woke company.” This framing is politically useful and factually absurd. A company that has agreed to classified military use, target selection applications, and a modified model designed explicitly to be less restrictive in military contexts is not a pacifist organization. It is a defense contractor with two conditions. Calling it radical is a negotiating tactic, not a description.

The Incentive

For Anthropic, the incentives are layered and in partial tension. The company needs government revenue (the DoD contract market for AI is enormous and growing, and being blacklisted from it is an existential competitive risk). It also needs to maintain the safety-first brand identity that differentiates it from OpenAI and Google in the eyes of researchers, regulators, and the public. The lawsuit is simultaneously a legal defense of its contract position and a public relations assertion of its founding principles. Both things are true, and both are being served by the same action.

For the Pentagon, the incentive is precedent. Accepting Anthropic’s conditions (no mass surveillance, no fully autonomous lethal weapons) creates a contractual constraint on military AI use that no other vendor has imposed. If Anthropic wins this fight, it becomes the template other companies might point to. If the DoD wins, it establishes that government contracts can override vendor safety conditions, which reshapes every AI company’s calculation about what conditions they can sustainably impose. The blacklisting is not primarily about Anthropic. It is about what the precedent costs.

For Google and OpenAI, the incentive is competitive positioning through Anthropic’s discomfort. While Anthropic is in litigation, Google announced this week that it would provide Gemini to the military for AI agent development. OpenAI secured a classified systems deal on the same day Anthropic was declared a supply chain risk. The companies that dropped their restrictions earliest are now best positioned to capture the contracts that Anthropic’s lawsuit puts in jeopardy.

For the broader tech industry, the incentive structure has shifted so completely that it is worth naming: the financial, political, and competitive rewards for military AI partnerships now substantially outweigh the reputational and internal costs. The 3,000 Google employees who protested in 2018 were operating in a different incentive environment. Employee activism has been suppressed, union organizing in tech remains limited, and the companies that moved earliest into defense contracting (e.g., Palantir, Anduril) have been rewarded with growth, relevance, and political influence. The direction of travel is not ambiguous.

The Consequence

The immediate consequence of Anthropic’s lawsuit is a legal clarification of whether government contracts can compel vendors to remove safety conditions from AI systems. That question has no settled answer, and the outcome will shape every AI defense contract negotiation that follows.

The structural consequence is the normalization of AI in military targeting systems before the governance frameworks that would define acceptable use exist at any meaningful level. The Harvard and Stanford researchers whose work we covered last week identified urgent questions about accountability, legal responsibility, and controllability in agentic AI systems. Those questions become considerably more urgent when the systems in question are being used to select bombing targets. The accountability vacuum in enterprise identity architecture is one problem. The accountability vacuum in military targeting architecture is another category of problem entirely.

The consequence for the residual employee resistance that defined the earlier era is already visible. Google fired over 50 employees for protesting its military contracts in 2024. Sundar Pichai’s memo stated that Google is a business and not a place to fight over disruptive issues. The organizational infrastructure for internal dissent has been systematically dismantled at the same time the contracts that would generate dissent have expanded. What remains of the ethical resistance is now located almost entirely at the CEO level (Amodei drawing lines, rather than employees doing so), which means it is subject to negotiation, litigation, and competitive pressure in ways that mass employee action is not.

The longer-term consequence is the one Margaret Mitchell at Hugging Face names most honestly: there are no good guys in this frame if the criterion is not supporting war. Anthropic is fighting over two conditions within an extensive military partnership. The fight is real and the conditions matter. But the territory that has already been conceded — by Anthropic, by Google, by OpenAI, by the industry as a whole — is larger than the territory being contested. The line has moved. The lawsuit is about where to draw it now, not about whether to draw it at all.

The Calibration

The honest read of this moment requires holding two things simultaneously: Anthropic’s two red lines are worth defending, and the ground between where those lines sit and where the Google employees drew them in 2018 represents an enormous distance that has been traveled with very little public deliberation.

Domestic mass surveillance and fully autonomous lethal weapons are meaningful boundaries. They are also boundaries that leave a vast operational space (think target selection, threat analysis, classified intelligence processing, and military decision support) already occupied by AI systems operating without clear accountability frameworks, legal precedent, or public understanding of what they are actually doing.

The calibration for observers is to resist the binary that the lawsuit invites: Anthropic as principled defender versus DoD as overreaching aggressor. The more accurate frame is two parties negotiating the specific terms of a relationship that both want to continue, in a dispute that is simultaneously about contractual conditions, legal precedent, and competitive market position.

The calibration for the industry is the one Amodei’s own framing inadvertently provides: when a company founded on AI safety has modified its model to be less restrictive for military use, agreed to targeting and threat analysis applications, and is fighting not to exit the relationship but to preserve two specific conditions within it — the question is not whether big tech has reversed course on AI and war. The question is how to honestly describe where the course now runs.

The answer, as of March 2026, is: through the classified military systems of every major AI company in America, with two exceptions that are currently in litigation.

Next calibration: 1 pm (GMT). Stay sharp.