AI ethics dispute: Pentagon and Anthropic clash over Claude model rules

The Pentagon and AI developer Anthropic are locked in a dispute over the terms under which the Claude model may be used. The disagreement is not about the technology itself but about the rules the company imposes on military applications of its models.

The Pentagon insists that any artificial intelligence system it works with be usable for all lawful military purposes, including national security scenarios that may fall outside standard corporate restrictions. If Anthropic does not agree, cooperation on key defense projects could be suspended.

Anthropic, meanwhile, maintains strict ethical standards: the company prohibits using Claude in autonomous weapons systems that operate without human involvement, as well as in mass surveillance of citizens. It stresses that adherence to these rules is critical in AI development, especially in situations that affect human lives.

At stake is a contract with the U.S. Department of Defense worth up to $200 million, signed last summer. It called for integrating Claude into several defense systems but now risks collapse over the parties' fundamental disagreements. The Pentagon has hinted that unless the terms are revised, the partnership could be terminated entirely.

The Anthropic case highlights the widening gap between technology companies' ethical ambitions and the practical demands of the defense sector. It also raises the question of who should set the boundaries for powerful AI applications: developers, the government, or legislators.