The Pentagon declared Anthropic a supply chain risk and ordered defense contractors to remove its software from military workflows. The legal fight over that designation is ongoing. And the White House is simultaneously in active negotiations to give federal agencies access to Mythos Preview — Anthropic's most powerful model, which the company has described as having capabilities dangerous enough to warrant restricted release. The two positions are not in tension by accident. They are the architecture of a government that cannot agree on what it wants from AI, and is trying to have it both ways.
According to Axios, the Office of Management and Budget sent agencies an email — first reported by Bloomberg — stating that it was looking into whether they could use Mythos. Two sources familiar with the discussions told Axios that agencies may get access within weeks. Anthropic is not rolling Mythos out to the general public. It is offering access to a select group of organizations so they can assess the model's cyber capabilities and strengthen their defenses. Some federal agencies want in. The White House is negotiating the terms.
The Pentagon declared Anthropic a "supply chain risk" and barred companies working on military contracts from using its software. Anthropic is currently suing over that designation. The designation applies only to Pentagon contracts — civilian agencies like Energy, Treasury, and the intelligence community can still work with Anthropic. The White House negotiations concern those civilian agencies, not the military. But the split creates a situation where the same AI is simultaneously considered a security threat and a security asset, depending on which federal building you're standing in.
Here is the thesis the Axios report does not quite name: this is not a policy contradiction. It is a procurement competition dressed up as a security debate. The administration officials who oppose Anthropic are not primarily worried about its AI's danger to the United States — they are worried about losing control over which AI companies get federal contracts worth billions of dollars. The officials who want Mythos access are not ignoring the danger — they are calculating that the threat from adversaries outweighs the risk from the vendor. What looks like incoherence is actually two bureaucratic factions fighting over the same checkbook.
One administration official quoted by Axios made the logic explicit: "All the intel agencies use Anthropic. Every agency except War wants to. That's because Anthropic doesn't want to kill people and War's position is 'don't tell us what the f*** to do.' But if you're the Department of Energy, you don't give a f*** about that. You're worried about the Chinese attacking the energy grid. So you want Anthropic." That is not a security assessment. That is a jurisdictional complaint. The Department of Defense wants AI it can deploy for lethal autonomous systems without the vendor's ethical restrictions. Anthropic will not agree to that. So the Pentagon labeled the company a risk — and the rest of the government, which does not need to bomb anyone, kept buying.
Anthropic's official position is that its models cannot be used for mass surveillance or to develop fully autonomous weapons. The Pentagon's counterargument, per Axios, is that those definitions are too vague and that it needs assurances it can use AI for "all lawful purposes." This framing is worth pausing on. "All lawful purposes" in a military context includes targeting systems, surveillance infrastructure, and decision-support tools that sit one policy change away from autonomous lethal action. Anthropic's restrictions are not philosophical — they are contractual limits on what the model can be used for. The Pentagon's objection is that it does not want those limits. The supply chain risk designation followed from that refusal, not from any documented security flaw in Anthropic's technology.
A second administration official, also quoted by Axios, accused Anthropic of using "fear tactics" by warning about Mythos's hacking capabilities. "They're using this Mythos cyber weapon to find friendly ears in the government," the official said. "They're succeeding." This is a remarkable admission. The official is confirming that Anthropic's strategy — circulate warnings about what your AI can do, create urgency, generate demand among agencies that want to get there before adversaries do — is working. The company is not hiding the danger. It is marketing it. And the federal government is buying.
This is not the first time the federal government has approved technology it simultaneously flagged as dangerous. As we noted in our investigation into Microsoft's cloud approval despite documented security failures, federal procurement decisions routinely run ahead of the security reviews that are supposed to govern them. The pattern is consistent: a technology is deemed essential before it is deemed safe, and the safety review becomes a formality rather than a gate. Mythos is following that same track, except the danger is not hypothetical. Anthropic's own warnings about the model's cyber capabilities — the ones an administration official dismissed as "fear tactics" — are based on the company's own internal assessments of what the model can do.
The civilian agencies driving the demand are not wrong about their threat environment. The Department of Energy oversees the nuclear weapons stockpile and the electrical grid. The Department of the Treasury oversees financial system infrastructure. These are genuine targets for state-sponsored cyberattacks, and the argument that a sophisticated AI tool could help defend them is not implausible. But "we need this to defend critical infrastructure" and "this tool could supercharge attacks on critical infrastructure" are both true at the same time, about the same model, and the federal government has not built the oversight framework to manage that duality. The discussions with Anthropic are about access terms. There is no public evidence of parallel discussions about what happens when a model with documented offensive cyber capabilities is deployed across agencies with varying security cultures and IT infrastructure.
The broader context is a federal AI procurement environment that is moving faster than any governance structure can track. Our coverage of Mythos's cyber capabilities and the emergency meetings they triggered at Treasury and the Federal Reserve documented how the model's release forced financial regulators into reactive mode. The White House negotiations are the next phase of that same dynamic: agencies that were alarmed by what Mythos could do to them now want access to what it could do for them. The threat and the tool are identical. The only variable is who controls the interface.
One administration official quoted by Axios put the internal split with characteristic bluntness: "There's progress with the White House. There's not progress with [the Department of] War." That sentence contains the entire story. The administration is not resolving a policy question. It is managing a procurement fight between a defense establishment that wants AI without ethical restrictions and a civilian government that wants AI without asking too many questions about what it does. Anthropic is not a passive actor in this. The company chose a restricted rollout strategy that creates scarcity and urgency. It issued public warnings about Mythos's capabilities that function as both genuine safety disclosures and effective marketing. It is in litigation with the Pentagon while negotiating with the White House. These are not contradictions. They are bargaining chips.
The question the Axios report does not ask — and the one that matters most — is what oversight structure will govern Mythos's use once civilian agencies get access. The Pentagon's objections, however self-interested, at least force a conversation about what AI should and should not be allowed to do in a national security context. The civilian agencies pursuing access have no equivalent constraint. The Department of Energy can deploy Mythos to defend the grid without triggering the ethical review that a weapons application would require. The Department of the Treasury can use it to model financial system vulnerabilities without any public accountability for how that capability might be repurposed. The supply chain risk designation the Pentagon applied was a blunt instrument wielded for the wrong reasons. But removing it from the equation entirely, which is what civilian access without equivalent oversight would accomplish, does not make the AI safer. It just makes the accountability gap larger.
The White House is close to giving federal agencies access to an AI its own officials have described as a cyber weapon. The framework for what those agencies can do with it does not yet exist. As our tracker on government AI regulation has documented, the gap between deployment speed and oversight capacity is not narrowing. Anthropic will get its federal contracts. The agencies will get their tool. The public will get the consequences of both, without having been consulted on either.