Sooner or Later, Anthropic is Likely to Fall in Line


Long gone are the predictable days of the Cold War, when the US government and military machinery would reliably produce innovation and then spin it out to the private sector. The internet and commercial space activities are prominent examples.

Back to the present, the centre of gravity for technological innovation is shifting towards the private sector. Frontier AI models are emblematic of this shift. Hence, it is the leading private AI labs that are trying to set the terms for the military’s use of technology.

That is one way of looking at the currently unfolding Anthropic-OpenAI-Pentagon saga. And I will circle back to it towards the end of this blog.

But first, let us examine another angle: competition between frontier AI labs to serve classified US systems, a lucrative, dependable multi-year market.

The story has unfolded something like this: Anthropic, whose product Claude is being used in US classified systems, pushed for two restrictions on the use of its AI model in negotiations with the Pentagon: first, no domestic mass surveillance; second, no fully autonomous weapons. At some point in February 2026, the negotiations soured and there was a public spat between Anthropic’s Amodei and the Pentagon’s Hegseth (Trump also weighed in).

In the middle of this, and incidentally on the day of the US-Israel strike on Iran, OpenAI announced a deal with the Pentagon on the use of its technology in classified systems.

They claimed that the Department of War agreed to one more guardrail in addition to what Anthropic was pushing for: ‘No use of OpenAI technology for high-stakes automated decisions (e.g. systems such as “social credit”).’

OpenAI made the case that its agreement is even better.

Anthropic, on the other hand, has been labelled a supply chain risk by the Pentagon, making it difficult for the company to do business with Pentagon contractors.

Per the New York Times, ‘strong personalities, mutual dislike and a rival company unraveled a deal.’ But irrespective of what led to the bitter falling-out, Claude is being used in the Iran projectile war and is expected to remain in use for months until it is phased out. This is despite Anthropic being labelled a supply chain risk (Anthropic has challenged the label in US courts).

My view on this unfolding saga is that sooner or later, Anthropic is likely to fall in line and seek peace with the Department of War (pun unintended). The first reason is the lucrative market that the US government represents for any technology player in Silicon Valley, from Google to AI labs. The massive valuations of AI labs are already being called a bubble by some experts; losing a stable market would add to Anthropic’s economic woes at a time when OpenAI and even Elon Musk’s xAI are working with the Pentagon. The second has to do with the monopoly over violence that a nation-state enjoys, as explained in detail by Noah Smith in a recent blog. The position of dictating terms to the security establishment would be difficult to sustain for any private US company.