Anthropic Challenges Pentagon’s "National Security Risk" Claim with Sworn Declarations Ahead of Crucial Court Hearing

Anthropic, a prominent artificial intelligence company, has intensified its legal battle against the U.S. Department of Defense, submitting two sworn declarations to a California federal court late Friday afternoon. These filings directly challenge the Pentagon’s assertion that Anthropic poses an "unacceptable risk to national security," arguing that the government’s case is built upon fundamental technical misunderstandings and allegations that were conspicuously absent from months of prior negotiations. The declarations were filed concurrently with Anthropic’s reply brief in its lawsuit against the Department of Defense, setting the stage for a critical hearing scheduled for this coming Tuesday, March 24, before Judge Rita Lin in San Francisco.

The dispute between the tech innovator and the nation’s defense apparatus originated in late February, when President Trump and Defense Secretary Pete Hegseth publicly announced a severing of ties with Anthropic. The announcement followed the company’s refusal to grant the military unrestricted use of its advanced AI technology, a stance rooted in Anthropic’s publicly stated commitment to AI safety and ethical guidelines. The refusal ignited a firestorm, culminating in the Pentagon’s unprecedented supply-chain risk designation against an American company, a move Anthropic contends is retaliatory and infringes upon its First Amendment rights.

Central to Anthropic’s rebuttal are the sworn testimonies of two key company officials: Sarah Heck, Anthropic’s Head of Policy, and Thiyagu Ramasamy, the company’s Head of Public Sector. Their declarations provide detailed accounts and technical explanations designed to dismantle the Pentagon’s claims point by point, offering a starkly different narrative of the negotiations and the capabilities of Anthropic’s AI systems.

Sarah Heck, a seasoned professional with a distinguished background, brings significant credibility to her declaration. A former National Security Council official who served at the White House during the Obama administration before transitioning to roles at Stripe and then Anthropic, she currently oversees the company’s critical government relationships and policy work. Her firsthand experience extends to the pivotal February 24 meeting, where Anthropic CEO Dario Amodei met with Defense Secretary Hegseth and the Pentagon’s Under Secretary Emil Michael.

In her declaration, Heck directly confronts what she identifies as a fundamental misrepresentation in the government’s court filings: the claim that Anthropic sought an "approval role" or veto power over military operations. Heck unequivocally states, "At no time during Anthropic’s negotiations with the Department did I or any other Anthropic employee state that the company wanted that kind of role." This assertion challenges a core tenet of the Pentagon’s argument, suggesting a deliberate mischaracterization of Anthropic’s position. Furthermore, Heck’s declaration highlights another critical procedural issue: the Pentagon’s concern about Anthropic potentially disabling or altering its technology mid-operation. She asserts that this specific concern was never raised during the extensive negotiation period. Instead, it surfaced for the first time within the government’s court filings, effectively denying Anthropic any prior opportunity to address or respond to this particular allegation.

Heck’s declaration also details a timeline of events that raises serious questions about the Pentagon’s motivations. She points out that on March 4—merely a day after the Pentagon formally finalized its supply-chain risk designation against Anthropic—Under Secretary Michael sent an email to CEO Dario Amodei. In this email, Michael reportedly indicated that the two sides were "very close" on the very issues that the government now cites as definitive evidence of Anthropic posing a national security threat: its positions on autonomous weapons and the mass surveillance of Americans.

This email, attached as an exhibit to Heck’s declaration, stands in stark contrast to Michael’s subsequent public statements. On March 5, Amodei released a statement indicating that Anthropic had been engaged in "productive conversations" with the Pentagon. The very next day, Michael took to X (formerly Twitter) to declare that "there is no active Department of War negotiation with Anthropic." A week later, he reiterated this stance to CNBC, stating there was "no chance" of renewed talks. Heck’s testimony implicitly poses a pointed question: if Anthropic’s stance on autonomous weapons and mass surveillance truly rendered it a national security threat, why was a senior Pentagon official suggesting the two sides were "very close" to alignment on those same issues immediately after the designation was finalized? The timeline suggests the possibility, though Heck does not state it explicitly, that the supply-chain risk designation was used as a bargaining chip rather than a purely objective national security assessment.

Complementing Heck’s policy insights, Thiyagu Ramasamy brings technical expertise to Anthropic’s defense. Before joining Anthropic in 2025, Ramasamy spent six years at Amazon Web Services (AWS), where he managed complex AI deployments for a range of government customers, including those operating within classified environments. At Anthropic, he established and leads the team responsible for integrating the company’s Claude AI models into sensitive national security and defense settings—work reflected in the $200 million contract with the Pentagon announced last summer.

Ramasamy’s declaration directly addresses the government’s technical claims, particularly the assertion that Anthropic could interfere with military operations by disabling its technology or altering its behavior. Ramasamy states emphatically that this is not technically feasible. He explains that once Anthropic’s Claude models are deployed within a government-secured, "air-gapped" system operated by a third-party contractor, Anthropic loses access entirely. In such an environment, there is no remote kill switch, no backdoor, and no mechanism for Anthropic to push unauthorized updates. He characterizes any notion of an "operational veto" as a fiction, clarifying that any modification or update to the deployed model would require the Pentagon’s explicit approval and active intervention to install. Ramasamy further tells the court that Anthropic cannot even observe what government users are inputting into the system, let alone extract any sensitive data from it.

Ramasamy also challenges the government’s suggestion that Anthropic’s hiring of foreign nationals poses a security risk. He notes that all Anthropic employees working on these systems, including foreign nationals, undergo rigorous U.S. government security clearance vetting—the same background check required for individuals granted access to classified information. Notably, Ramasamy states in his declaration that "to my knowledge," Anthropic is the only AI company where personnel who have cleared this vetting process are actively involved in building the AI models designed to operate within classified environments.

Anthropic’s lawsuit posits that the supply-chain risk designation—a measure typically reserved for foreign entities or companies with direct ties to adversarial nations, and notably the first ever applied to an American company—constitutes an act of government retaliation. The company argues that this designation is a direct consequence of its publicly articulated views on AI safety and ethics, thereby violating its First Amendment rights to free speech.

The government, in a comprehensive 40-page filing earlier this week, vehemently rejected this framing. It contended that Anthropic’s refusal to permit all lawful military uses of its technology was a purely commercial business decision, and therefore not protected speech under the First Amendment. The Pentagon maintains that the designation was a straightforward, necessary national security determination, and explicitly denies that it was intended as punishment for the company’s views or positions on AI ethics. This fundamental disagreement over the nature of Anthropic’s refusal—whether it’s a business decision or protected speech—is at the heart of the legal battle.

The unfolding legal drama between Anthropic and the Pentagon is not merely a contractual dispute; it represents a significant test case for the burgeoning AI industry, the future of government technology procurement, and the delicate balance between national security imperatives and corporate ethical stances, particularly concerning advanced technologies. The outcome of this lawsuit could set precedents for how AI companies interact with defense agencies, defining the boundaries of ethical AI development and its application in military contexts. The case also spotlights the challenges of integrating rapidly evolving technologies into national defense frameworks while navigating complex legal and ethical landscapes.

This high-stakes legal confrontation will undoubtedly be a focal point for industry observers, legal experts, and policymakers alike.

Reporting on these developments is Connie Loizos, a veteran journalist who has covered Silicon Valley since the late ’90s. Loizos, who previously served as TechCrunch’s Silicon Valley Editor, was appointed Editor in Chief and General Manager of TechCrunch in September 2023. She is also the founder of StrictlyVC, a daily newsletter and lecture series acquired by Yahoo in August 2023 that now operates as a sub-brand of TechCrunch.
