The Non-Proliferation Trap: How AI Safety Narratives Armed the Pentagon Against Anthropic

Anthropic and the Department of War are in a standoff over the terms under which the military can use Anthropic’s AI models. The company’s CEO, Dario Amodei, insists on two red lines: no mass domestic surveillance and no fully autonomous weapons. In response, Secretary of War Pete Hegseth has designated Anthropic a supply-chain risk to national security, a label previously reserved for US adversaries and never before applied to an American company.

The irony is hard to miss. Anthropic helped build the case that frontier AI is so powerful that it requires non-proliferation controls. Now the US government is using that very logic to argue that such a powerful technology cannot be left to a private company’s discretion.

Non-Proliferation Is A Double-Edged Sword

Anthropic and other frontier AI companies spent years comparing AI to nuclear weapons and calling for non-proliferation measures. They are now caught in a trap of their own making. If something is truly as powerful as a nuclear weapon, a democratically elected government (even with all its limitations) has a stronger claim to control it than a corporate entity.

Ben Thompson, in his newsletter Stratechery, highlights the perils of this framing. He argues that the massive incentives to build competing models make the idea of a few responsible stewards untenable in practice. In a world of AI proliferation, he contends, the best defence against AI is more AI, which makes open-source development ultimately safer than controlled access.

The non-proliferation analogy was always an attempt at regulatory capture as much as it was a safety argument. It positioned a handful of frontier labs as the responsible stewards of a dangerous technology, justifying barriers to entry that conveniently doubled as competitive moats. What the labs did not anticipate was that a government might accept the premise that AI is too dangerous to leave unchecked, while rejecting the conclusion that they should be the stewards.

On Military Uses Of Generative AI

There are claims that Anthropic’s models were used during the recent operations in Venezuela and Iran. When generative AI is described as a “country of geniuses in a data centre” that may surpass human intelligence within a year or two, it is easy to imagine it making lethal decisions in military operations. The more plausible use cases today, however, involve augmenting text-, code-, analysis-, and simulation-heavy work, not serving as an autonomous entity making kill decisions.

The Department of War publicly lists the following use cases for rapid prototyping and adoption:

  • Warfighting: Command and Control (C2) and decision support, operational planning, logistics, weapons development and testing, uncrewed and autonomous systems, intelligence activities, information operations, and cyber operations
  • Enterprise management: financial systems, human resources, enterprise logistics and supply chain, health care information management, legal analysis and compliance, procurement processes, and software development and cyber security

The Department of Defence’s own Task Force Lima, which was constituted to fast-track the adoption of generative AI, acknowledges the limitations of deploying “a technology that will never be perfect” for military applications. It flags hallucinations, lack of explainability, security vulnerabilities, and limited evaluation techniques as key constraints, and urges leaders to deploy generative AI only in situations for which it is well suited.

Both Anthropic’s red lines and the Pentagon’s heavy-handedness seem somewhat performative relative to what the technology is likely to be used for today. The real fight is not about present use cases but about who gets to set the terms for the future.

On Being Labelled A Supply-Chain Risk

The US Secretary of War, Pete Hegseth, stated on X that “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.” However legally untenable that sounds, being cut off from major cloud platforms such as AWS, Microsoft Azure, and Google Cloud would pose an existential risk to Anthropic.

Anthropic has long portrayed itself as a responsible AI lab, and standing up to the Department of War may have been an effort to stay true to that brand. But the same narratives that made the case that this technology is too dangerous to fall into the wrong hands have been turned against the hands that built it.

The designation looks like a pressure tactic to get the frontier AI labs to fall in line. While Anthropic could challenge it in court, a negotiated settlement that allows the Department of War to use Anthropic’s models for “all lawful purposes” seems more likely. But the episode has set a precedent: the US government is willing to turn tools designed for foreign adversaries against domestic companies.

Disclaimer: Ironically, the author used Anthropic’s AI models for background research and copy-editing.