Anthropic AI partnership poses unacceptable risk, says Pentagon
The U.S. Department of Defense calls Anthropic AI collaboration a national security risk due to restrictions on military use, sparking legal challenges and industry support.
The U.S. Department of Defense has stated that collaboration with Anthropic poses an "unacceptable risk" to national security. This position was outlined in legal documents responding to a lawsuit from the company itself, which is challenging its designation as a "supply chain risk."
The conflict stems from Anthropic's refusal to allow its artificial intelligence models to be used for mass surveillance or autonomous weapons development. The Pentagon noted that AI technology contracts typically permit use for any lawful purpose, but the company's stance has raised doubts about its reliability as a partner for sensitive military projects.
The department also emphasized that AI systems are vulnerable to manipulation, and that a developer could theoretically alter a model's behavior or restrict its functionality at a critical moment. In the Defense Department's view, such dependence on a private company's decisions is unacceptable in military operations, and this reasoning formed the basis for the restrictions imposed.
According to the case filings, the U.S. presidential administration had previously ordered federal agencies to stop using Anthropic's technologies. The company is seeking a temporary suspension of that order through the courts, citing potential losses in the billions of dollars. Meanwhile, several major tech companies, including Microsoft, Google, and OpenAI, have expressed support for Anthropic.