The Pentagon believed it could offer Anthropic an off-ramp from the supply chain risk designation. OpenAI CEO Sam Altman found it strange to be working so hard to "save" a rival whose CEO had, in his view, spent years trying to destroy OpenAI. These two sentences from internal Slack messages seen by Axios capture the contradictions at the heart of Silicon Valley's AI ethics theater — where companies perform moral stands until the contracts get too lucrative to refuse.
The messages trace Altman's thinking from February 24 through March 2 as his competitor Anthropic faced potential Pentagon retaliation for refusing to allow its AI systems to be used in weapons targeting. While publicly maintaining that OpenAI shared Anthropic's ethical red lines, Altman was privately negotiating to secure the very contract his rival had just lost on principle.
This isn't just corporate opportunism. It reveals how the AI industry's supposed commitment to safety and ethics crumbles when faced with the twin pressures of government contracts and competitive advantage. The companies building what they claim could become artificial general intelligence — technology they warn could pose existential risks — abandon their stated principles the moment the Pentagon threatens their market position.
According to Axios, the standoff had been public for 10 days when Altman first engaged. The Pentagon was considering cutting ties with Anthropic over the company's refusal to allow military use of its AI models. Defense Under Secretary Emil Michael, who was leading negotiations, called Altman on February 24. By the next day, the two sides were exchanging draft contract language.
The speed reveals what was really at stake. This wasn't a careful ethical deliberation about AI's role in warfare. It was a business opportunity dressed up as principled intervention. Altman told employees he wanted to help de-escalate while making clear he still hoped to strike his own deal with the Pentagon. He acknowledged the optics might not look good but stressed he was committed to acting on principle rather than appearances.
The principle, apparently, was that OpenAI deserved the contract more than Anthropic did. By February 27, as negotiations between the Pentagon and Anthropic deteriorated, Altman told core staff that the Pentagon believed Anthropic CEO Dario Amodei was playing to the press. The implication: Anthropic's ethical stance was performative, while OpenAI's willingness to work with the military was pragmatic.
This framing allows AI companies to have it both ways. They can claim to care deeply about AI safety and alignment — attracting top researchers and favorable press coverage — while simultaneously building the exact systems that military planners want for targeting and surveillance. When one company takes a genuine stand, as Anthropic apparently did, competitors swoop in to capture the revenue while painting the principled position as naive grandstanding.
The broader pattern extends beyond this single incident. As Tinsel News has reported, companies like Palantir and Anduril have built their entire business models around military AI contracts, yet still position themselves as innovative tech companies rather than defense contractors. The AI industry wants the cultural cachet of Silicon Valley disruption without the moral accountability that comes with building weapons systems.
Altman's claim that he was trying to "save" Anthropic deserves particular scrutiny. Save them from what, exactly? From the consequences of refusing to build AI systems that could be used to kill people? From missing out on lucrative defense contracts? The paternalistic framing suggests that taking an ethical position against military AI is something that requires rescue, rather than respect.
The timing of the messages also matters. On February 26, Altman sent an all-staff message saying OpenAI shared Anthropic's red lines. By February 27, he was expressing frustration about having to save a rival whose CEO had tried to "destroy" OpenAI. By that night, OpenAI had secured the Pentagon deal. The entire arc took less than 72 hours — hardly enough time for the careful ethical consideration these companies claim guides their decision-making.
What's most revealing is how the messages show Altman thinking through the public relations challenge while pursuing the contract. He told employees he was asking the government to extend the same terms to other AI companies to de-escalate. This positioning allows OpenAI to claim it was acting in the industry's collective interest, not just its own competitive advantage. But if the terms were truly acceptable from an ethical standpoint, why had Anthropic refused them in the first place?
The Pentagon's role in this drama also deserves examination. By threatening Anthropic with a supply chain risk designation — essentially blacklisting the company from government contracts — the Defense Department sent a clear message to the entire AI industry: cooperate with military applications or face economic retaliation. Coercing compliance through procurement power is not how ethical oversight is supposed to function in an industry developing potentially transformative technology.
The incident illuminates a fundamental tension in AI development. These companies simultaneously claim their technology could pose existential risks to humanity while racing to deploy it in military contexts where the immediate risks to human life are concrete and measurable. They perform elaborate safety theater — establishing ethics boards, publishing alignment research, warning about AGI risks — while their business development teams negotiate contracts for systems designed to enhance killing efficiency.
For all the talk of AI safety and alignment in Silicon Valley, this episode shows what really drives decision-making: competitive pressure and revenue opportunity. When Anthropic took what appears to be a principled stand against military use of its technology, the response from its primary competitor wasn't solidarity or even respectful disagreement. It was opportunism wrapped in the language of assistance. The same dynamic plays out across the defense industry, where Pentagon war budgets create irresistible financial gravity for any company with relevant technology.
The AI industry wants us to trust it with developing artificial general intelligence — technology its leaders claim could fundamentally transform or even threaten human civilization. But if these companies abandon their stated principles the moment a defense contract becomes available, what does that tell us about their commitment to safety when even larger stakes emerge? The Slack messages reveal more than Altman likely intended: they show an industry where ethics are negotiable, principles are performative, and the only real red line is missing out on revenue.