Anthropic Just Defied the US Military


Description:

Matt Wolfe provides a detailed account of the conflict between Anthropic and the Pentagon over the conditions governing Claude’s use in military operations. The dispute traces back to July 2025, when Anthropic signed a $200 million Department of Defense contract—making Claude, integrated via Palantir, the only AI model cleared for classified government networks.

The situation escalated sharply after Claude was reportedly used during the January 2026 US military operation that captured Venezuelan dictator Nicolás Maduro—an action involving more than 150 aircraft and resulting in at least 83 casualties, according to Venezuelan defense officials. When an Anthropic employee allegedly asked Pentagon contacts whether Claude had been used in the operation, Defense Secretary Pete Hegseth threatened to designate Anthropic a supply chain risk—a classification normally reserved for foreign adversaries such as Chinese and Russian companies—which would force any business seeking DoD contracts to cut ties with Anthropic entirely.

At the core of the standoff are two usage restrictions Anthropic refuses to waive: a prohibition on mass AI-enabled surveillance of US citizens, and a ban on fully autonomous weapons systems with no human in the loop. Wolfe walks through Anthropic CEO Dario Amodei’s essay “The Adolescence of Technology,” in which Amodei outlines four specific ways AI could enable authoritarian control and argues that democracies must draw hard lines against those uses. The video is essential viewing for anyone tracking how AI safety commitments are colliding directly with the realities of defense contracting.


📺 Source: Matt Wolfe · Published February 18, 2026
🏷️ Format: News Analysis
