This is not just a legal story
A U.S. appeals court has refused to block the Pentagon’s decision to blacklist Anthropic, at least for now. On the surface, it looks like a procedural legal update. But underneath, this is one of the clearest signals yet that the AI race is no longer just about technology. It is about power, control, and who gets to decide how AI is used in the real world.

The court’s decision does not settle the case. It simply means the Pentagon’s “supply chain risk” label stays in place while the legal fight continues. But even this temporary ruling carries weight, because it reinforces how much influence governments now have over the future direction of AI companies.
What actually happened
Anthropic has been locked in a growing dispute with the U.S. Department of Defense after refusing to loosen certain safeguards on its AI systems. The company drew a line around two key areas: mass domestic surveillance and fully autonomous weapons.
The Pentagon responded by labeling Anthropic a national security “supply chain risk,” a designation that effectively blocks it from defense contracts and could ripple across other government work.
Anthropic pushed back hard. The company argued the move was retaliatory and violated its constitutional rights, including free speech and due process. It warned the blacklisting could cause major financial and reputational damage, potentially costing billions in future revenue.
The appeals court acknowledged potential harm but declined to intervene immediately, choosing instead to defer to national security concerns while the broader case plays out.
Conflicting rulings and growing uncertainty
What makes the situation even more complex is that another U.S. court previously ruled in Anthropic’s favor. A federal judge in California temporarily blocked a broader federal ban, allowing some government use of the company’s technology to continue.
Now, with two courts moving in different directions, Anthropic is stuck in legal limbo.
That uncertainty is not just a legal issue; it creates real business risk. Companies, governments, and partners now have to decide whether to build on technology that could be restricted, banned, or cleared depending on how the courts ultimately rule.
The real issue: who controls AI use
At the heart of this story is a deeper conflict that is starting to define the AI era.
Anthropic is trying to set boundaries on how its technology can be used, particularly in military contexts. The Pentagon is pushing back, arguing that it should be able to use AI tools within the limits of existing law, without being constrained by company-imposed rules.