Key Takeaways:
- The D.C. Circuit denied Anthropic’s emergency stay on April 8, 2026, allowing the Pentagon’s blacklist of Claude AI to remain in force.
- Pentagon supply chain risk designation affects major DoD contractors, including Amazon, Microsoft, and Palantir.
- Expedited oral arguments are set for May 19, 2026, with a ruling that could reshape U.S. government AI procurement policy.
Appeals Court Rules DoD Can Keep Claude AI Blacklist During Litigation
The U.S. Court of Appeals for the D.C. Circuit, in a four-page order, denied Anthropic’s emergency motion to pause a “supply chain risk” designation issued by Defense Secretary Pete Hegseth. The ruling allows the Department of Defense to continue barring contractors from using the San Francisco-based company’s Claude models while litigation proceeds. Oral arguments were expedited to May 19, 2026.
The panel acknowledged Anthropic would “likely suffer some degree of irreparable harm,” citing both financial and reputational damage. Even so, Judges Gregory Katsas and Neomi Rao, both Trump appointees, concluded the balance of equities favored the government, cautioning against judicial second-guessing of how the Pentagon secures AI technology “during an active military conflict.”
The designation itself traces to a breakdown in negotiations between Anthropic and Pentagon officials in late February 2026. At issue were two restrictions in Anthropic’s terms of service: a ban on fully autonomous weapons systems, including armed drone swarms operating without human oversight, and a prohibition on mass surveillance of U.S. citizens.
Emil Michael, Undersecretary for Research and Engineering and the Pentagon’s chief technology officer, called those restrictions “irrational obstacles” to military competitiveness, particularly against China. Officials cited programs such as the Golden Dome missile defense initiative and the need for rapid response capabilities against hypersonic threats.
Anthropic offered limited, case-by-case exceptions but refused to eliminate the core safety guardrails, citing reliability concerns with current AI for high-stakes autonomous decisions. Talks collapsed. President Trump then directed all federal agencies to stop using Anthropic’s technology, with a six-month phase-out for existing deployments.
Hegseth’s supply chain risk designation followed, a measure typically applied to foreign entities such as Huawei. The label required contractors, including Amazon, Microsoft, and Palantir, to cease using Claude in any DoD-tied work. Anthropic called the move an “unlawful campaign of retaliation” for its refusal to let the government override its AI safety policies.
Anthropic filed parallel lawsuits in March 2026. One was filed in the U.S. District Court for the Northern District of California; the other, filed in the D.C. Circuit, challenged the specific procurement statute governing supply chain risk designations.
On March 26, U.S. District Judge Rita F. Lin granted Anthropic a preliminary injunction in the California case. She ruled that the administration’s actions appeared more punitive than protective, lacked sufficient statutory justification, and overstepped authority. That order temporarily lifted enforcement of the designation, allowing government and contractor use of Claude to continue pending full litigation. The Trump administration appealed to the Ninth Circuit.
The April 8 D.C. Circuit decision runs counter to Lin’s ruling, creating a legal tension over whether the designation is currently enforceable. The two courts are reviewing different statutory frameworks, which explains the procedural split.
Anthropic said in a statement that it remains confident in its position. “We’re grateful the court recognized these issues need to be resolved quickly and remain confident the courts will ultimately agree that these supply chain designations were unlawful,” the company said.
Industry observers flagged the case as a warning sign for U.S. AI development. Matt Schruers, CEO of the Computer and Communications Industry Association, said the Pentagon’s actions and the D.C. Circuit ruling “create substantial business uncertainty at a time when U.S. companies are competing with global counterparts to lead in AI.”
The case now moves toward the expedited May 19 oral argument in the D.C. Circuit, with the Ninth Circuit appeal still pending. The outcome will likely define the limits of federal power to designate domestic AI firms as national security risks and determine how far the government can go in pressuring private companies to alter their AI safety policies.
news.bitcoin.com