Description:
Matthew Berman breaks down an escalating confrontation between the U.S. Department of Defense and Anthropic over the conditions under which Claude models can be used in military operations — a dispute that has significant implications for every AI lab with government contracts.
The core conflict: the Pentagon is demanding that Anthropic remove all guardrails from its models for use in classified military workflows, specifically the restrictions on fully autonomous weapons and mass domestic surveillance. Anthropic has refused. The company signed a $200 million DoD contract in July 2024 and was among the first AI labs to deploy models on classified networks via a Palantir integration, including use in the Venezuela Maduro operation. In retaliation, Defense Secretary Pete Hegseth threatened to cancel the contract, designate Anthropic a supply chain risk (a classification previously reserved exclusively for foreign adversaries, which would bar other DoD vendors from using Anthropic products), and potentially invoke the Defense Production Act of 1950 to compel access to unrestricted models.
Berman analyzes Dario Amodei's public letter defending Anthropic's stance on two grounds: ethical principle and technical reality. Frontier models still hallucinate, and a single error in an autonomous weapons context could cost lives. The video also unpacks the contradiction in the DoD's position: it simultaneously claims it would never misuse the capabilities it is demanding while treating Anthropic as both a security risk and an essential national security asset.
📺 Source: Matthew Berman · Published February 28, 2026
🏷️ Format: News Analysis
