Court blocks US from listing Anthropic as supply chain risk
A U.S. federal court has temporarily blocked the government from adding Anthropic to its supply chain risk list, suspending restrictions on AI technology use.
A U.S. federal court has temporarily blocked the government from adding Anthropic to its "supply chain risk" list, marking a significant turn in the high-profile conflict over artificial intelligence use. Judge Rita Lin granted the company's request for a preliminary injunction, effectively suspending restrictions on federal agencies using its technology.
The dispute erupted after Anthropic refused to alter contract terms that would have allowed its AI solutions to be used for mass surveillance and autonomous weapons development. In response, the administration ordered federal departments to stop using the company's products, including the Claude model, and the Pentagon officially designated Anthropic as a supply chain threat—a label typically reserved for foreign competitors.
The court found these actions questionable. Its ruling notes that the government's measures appear to be an attempt to punish the company for public criticism. The judge emphasized that classifying Anthropic as a potential threat contradicts the law, appears arbitrary, and may violate First Amendment rights. The ruling separately noted that the company's disagreement with the government's stance cannot justify effectively branding it as a dangerous entity.
Anthropic stated that it appreciates the court's prompt decision and intends to continue constructive engagement with the government while developing safe AI technologies. The legal proceedings are ongoing and a final ruling has not yet been issued, but the court has already indicated a strong likelihood that the company's position will be upheld.