OpenAI has finalized an agreement allowing the U.S. Department of Defense (DoD) to integrate its AI models into the department’s classified network. OpenAI CEO Sam Altman announced the deal late Friday, marking a pivotal moment for AI deployment in national security and setting a precedent for future collaboration between Silicon Valley and the Pentagon.
The development follows a high-profile standoff between the DoD – referred to as the Department of War under the current Trump administration – and Anthropic, a rival AI firm known for its focus on AI safety and ethics. The dispute centered on the Pentagon’s insistence that AI companies permit the use of their models for "all lawful purposes," a broad stipulation that Anthropic found problematic. Anthropic instead sought to establish clear ethical boundaries, drawing red lines around the use of its technology for mass domestic surveillance and for the development or deployment of fully autonomous weapons systems. That disagreement ultimately prevented Anthropic from reaching an agreement with the DoD, creating a vacuum that OpenAI has now filled.
Anthropic CEO Dario Amodei articulated his company’s position in a lengthy statement released Thursday. While clarifying that Anthropic "never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner," Amodei emphasized a core belief that "in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values." The stance resonated within the tech community: more than 60 employees from OpenAI itself, alongside over 300 employees from Google, signed an open letter this week publicly urging their employers to support Anthropic’s position in its negotiations with the Pentagon.
The failure of Anthropic and the Pentagon to reconcile their differences brought swift political repercussions for Anthropic. President Donald Trump publicly criticized the company, deriding it as "Leftwing nut jobs at Anthropic" in a social media post. Beyond rhetoric, the President’s post also included a directive ordering all federal agencies to cease using Anthropic’s products following a six-month phase-out period, signaling significant governmental pushback against any company perceived as hindering national defense objectives, irrespective of its ethical motivations.
Adding to Anthropic’s woes, Secretary of Defense Pete Hegseth escalated the situation further. In a separate social media post, Hegseth accused Anthropic of attempting to "seize veto power over the operational decisions of the United States military." He then announced a drastic measure, officially designating Anthropic as a "supply-chain risk." The implications of this designation are profound and far-reaching: "Effective immediately," Hegseth stated, "no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." This move effectively blacklisted Anthropic from any direct or indirect involvement in the vast U.S. defense industrial base, threatening its future commercial viability within a crucial sector.
In response to these escalating pressures and designations, Anthropic issued a statement on Friday, indicating that it had "not yet received direct communication from the Department of War or the White House on the status of our negotiations." Despite the lack of formal notification, the company asserted its intention to vigorously "challenge any supply chain risk designation in court," signaling an impending legal battle against the federal government to defend its business and reputation.
Against this backdrop of intense ethical debate and corporate fallout, OpenAI’s Sam Altman made his announcement. What was particularly surprising and strategic was Altman’s claim, made in a post on X, that OpenAI’s new defense contract explicitly incorporates protections addressing the very same ethical issues that had become a flashpoint for Anthropic. Altman emphasized, "Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems." Crucially, he added, "The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement."
This statement suggests a potential breakthrough in reconciling advanced AI deployment with ethical safeguards within a military context. Altman further detailed the practical implementation of these safeguards, stating that OpenAI "will build technical safeguards to ensure our models behave as they should, which the DoW also wanted." To ensure robust oversight and safety, OpenAI committed to deploying its engineers directly with the Pentagon "to help with our models and to ensure their safety."
Altman did not stop at announcing his company’s deal. He also extended an olive branch, advocating for broader adoption of these terms across the industry. "We are asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept," Altman stated, adding a call for de-escalation: "We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements." The positioning could be read as an attempt both to highlight the reasonableness of OpenAI’s own agreement and to mitigate the severe repercussions faced by Anthropic.
Further details regarding the OpenAI agreement emerged from an all-hands meeting, as reported by Fortune’s Sharon Goldman. Altman informed OpenAI employees that the government had agreed to allow the company to develop its own "safety stack" – a set of technical controls and protocols designed to prevent misuse of its AI models. Perhaps most notably, Altman conveyed that "if the model refuses to do a task, then the government would not force OpenAI to make it do that task." This specific clause is a critical concession, granting OpenAI a significant degree of ethical control over its technology, directly addressing the "all lawful purposes" demand that had been a sticking point for Anthropic. It implies a mechanism for the AI model itself, guided by OpenAI’s safety principles, to decline requests deemed unethical or dangerous, without the company being compelled to override these safeguards.
The timing of Altman’s announcement was particularly striking, coming shortly before news broke that the U.S. and Israeli governments had initiated bombing campaigns in Iran, with President Trump simultaneously calling for the overthrow of the Iranian government. This immediate geopolitical context underscored the critical and often urgent real-world applications of advanced technologies like AI in military operations, highlighting the high stakes involved in these corporate-government defense agreements and the ethical frameworks that govern them.
These unfolding events promise continued debate and development at the rapidly evolving intersection of technology, ethics, and national security.
Anthony Ha is TechCrunch’s weekend editor. Previously, he worked as a tech reporter at Adweek, a senior editor at VentureBeat, a local government reporter at the Hollister Free Lance, and vice president of content at a VC firm. He lives in New York City.