The Department of War is contemplating severing its relationship with AI company Anthropic and potentially classifying it as a supply chain risk, a designation that would force any entity doing business with the U.S. military to cut its own ties with the AI startup.
Axios reports that a senior Pentagon official revealed that War Secretary Pete Hegseth is approaching a decision to terminate business connections with Anthropic, the company behind the Claude AI system. The designation as a supply chain risk represents an extraordinary measure typically reserved for foreign adversaries rather than domestic technology partners.
The senior official stated, “It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this.”
Chief Pentagon spokesman Sean Parnell confirmed the ongoing review, telling reporters that “The Department of War’s relationship with Anthropic is being reviewed. Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people.”
The situation carries significant implications because Anthropic’s Claude currently holds a unique position as the only AI model operating within the military’s classified systems. The technology has already been deeply integrated into military operations, including its use during the January raid targeting Nicolas Maduro in Venezuela. Pentagon officials have consistently praised Claude’s capabilities, making the potential separation particularly complex.
The conflict stems from months of difficult negotiations between Anthropic and the Pentagon regarding the terms governing military use of Claude. At the center of the dispute lies a fundamental disagreement about acceptable applications of the AI technology. While Anthropic CEO Dario Amodei has shown willingness to relax certain restrictions and approaches these issues pragmatically, the company maintains specific boundaries for its technology’s deployment.
Anthropic seeks assurances that its tools will not facilitate mass surveillance of American citizens or contribute to the development of autonomous weapons systems that operate without human oversight. The Pentagon, however, contends these restrictions create unworkable limitations filled with ambiguous scenarios. Military officials are pushing for Anthropic and three other major AI companies — OpenAI, Google, and Elon Musk’s xAI — to permit military use of their technologies for “all lawful purposes.”
Sources familiar with the situation indicate that senior defense officials have harbored frustrations with Anthropic for an extended period and welcomed the chance to engage in public confrontation over these issues.
Privacy advocates raise concerns about the intersection of existing surveillance laws and artificial intelligence capabilities. Current mass surveillance legislation predates AI technology, and while the Pentagon already possesses authority to collect extensive information ranging from social media content to concealed carry permit data, critics worry that AI could dramatically amplify the government’s capacity to target civilians.
An Anthropic spokesperson addressed the ongoing discussions, stating, “We are having productive conversations, in good faith, with DoW on how to continue that work and get these new and complex issues right.” The spokesperson emphasized the company’s dedication to supporting national security applications of frontier AI technology and noted Claude’s pioneering status as the first AI system deployed on classified networks.
The potential designation as a supply chain risk would create widespread disruption across the defense contracting ecosystem. Numerous companies maintaining business relationships with the Pentagon would need to verify they do not utilize Claude in their operations. This requirement poses substantial challenges given Anthropic’s extensive market penetration, with the company reporting that eight of America’s ten largest corporations employ Claude in their workflows.
The forthcoming book on AI by Breitbart News Social Media Director Wynton Hall, Code Red: The Left, the Right, China, and the Race to Control AI, includes a deep discussion of how conservatives can build a position on AI that preserves our constitutional rights while also empowering the government to prevent fraud, fight the wars of the future, and take on China.
Sen. Marsha Blackburn (R-TN) has endorsed Code Red, writing: “Few understand our conservative fight against Big Tech as Hall does, and he is uniquely qualified to examine how we can best utilize AI’s enormous potential, while ensuring it does not exploit kids, creators, and conservatives. Code Red is a must-read about the importance of getting this right.”
Read more at Axios here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.