Anthropic vs US Government: Military Use of AI
- Meredith Burton

The motto of Silicon Valley is “Move Fast and Break Things,” but when it comes to artificial intelligence, this technology could have some pretty devastating consequences. What the United States has planned for the future of AI in the military, and what its global repercussions will be, is an important topic to understand. We are already beginning to see those implications in the Middle East, as well as in the discussions between the US government and AI tech companies. Some of the AI companies want more clarity, while the US government is uneasy about being questioned. For some of these companies, the Silicon Valley motto may have turned on its head.
A few days prior to the initial attacks on Iran, the AI tech company Anthropic told the US Department of War that it was concerned about its latest models displaying “some vulnerability to being used in ‘heinous crimes,’ including the development of chemical weapons” and that “Claude Opus 4.5 and 4.6 showed elevated susceptibility to harmful misuse” in certain computer use settings. Anthropic prides itself on being a tech company committed to safety, working to convince Americans that its powerful Claude chatbot is more trustworthy than the competition. When reports emerged that the U.S. military had used Claude for the Venezuela operation to kidnap President Maduro, Anthropic executives grew uneasy about the relationship with the US military. Anthropic advised the Pentagon not to use Claude for mass surveillance or autonomous weapons. Zak Kallenborn, an expert on AI in warfare, stated that “Autonomous weapons represent a quite broad class of weapons.” He also noted that “Autonomous defense turrets at sea have been used for decades; autonomous nuclear weapons could end humanity.” Anthropic was also concerned about how the technology might be used in other ways and wanted to establish guardrails around it. Those concerns were outlined in two restrictions set in Anthropic’s contract: (1) the government could not use its technology for mass surveillance of U.S. citizens, and (2) it could not use Claude with autonomous weapons that kill without human involvement.
The US government’s response was swift: it refuses to let a private company put restrictions on how the U.S. military uses its product. This was followed by responses from Secretary of Defense Pete Hegseth and President Donald Trump, who designated Anthropic a supply chain risk to national security. They also disparaged the firm on social media, stating that Anthropic is a radical “woke company” run by “leftwing nut jobs.” A few hours after these remarks, the U.S. military attacked Iran, reportedly using that very same company’s tools to help carry out a series of direct strikes in Tehran and across Iran. Claude has been both exclusive and instrumental to the Pentagon when it comes to America’s war plans, used for planning scenarios, intelligence briefings and even target identification. When Anthropic was blacklisted by the Defense Department, other AI tech companies stepped in.
The day of the attacks in Iran, OpenAI CEO Sam Altman wrote on X that “Tonight, we reached an agreement with the Department of War to deploy our models in their classified network,” adding that the Defense Department “displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.” The backlash against OpenAI for seemingly caving to the Pentagon’s demands also brought an amendment: the deal to provide artificial intelligence technologies for the Defense Department’s classified systems now included additional protections to prevent OpenAI’s technology from being used in mass surveillance of Americans. Altman said in a social media post, “It’s critical to protect the civil liberties of Americans, and there was so much focus on this, that we wanted to make this point especially clear.”
In the meantime, Anthropic is suing the Trump administration for designating the artificial intelligence company a risk to the Defense Department’s supply chain. The lawsuit was filed in the U.S. District Court for the Northern District of California, where Anthropic accuses the government of violating its First Amendment rights, exceeding the legal scope of the supply-chain risk statute, and circumventing the process through which the president and cabinet secretaries are allowed to cancel government contracts.
Recognizing the civil liberties of Americans is important to this issue, but the impact of artificial intelligence used in military scenarios is also important to understand. Coda Story, in its newsletter “Coda Currents,” outlined the impact of AI’s use. They wrote:
“Between them, the U.S. and Israel struck more than 2,000 targets within the first 24 hours of the war. For even the largest militaries, it is an almost impossible task to identify, select and then precisely locate such a high volume of targets. But the U.S. military had some help. Claude, the ‘next generation AI assistant’ built by Anthropic, was used in the planning of ‘Operation Epic Fury’. This, even though the Department of War recently labeled Anthropic a ‘supply chain risk’.”
It is also possible that, with this volume of targets, the US military is unable to manage all of the attacks with human power alone. There is some speculation that an Iranian girls’ school was hit by a US aerial bombardment, which could have been guided by AI instruments. The US military reports that it is using a “variety” of artificial intelligence tools in the war with Iran amid growing concerns over mounting civilian casualties, and that “Humans will always make final decisions on what to shoot and what not to shoot and when to shoot, but advanced AI tools can turn processes that used to take hours and sometimes even days into seconds.” Still, other countries are concerned about using AI for military purposes. Chinese Defense Ministry spokesperson Jiang Bin said in a statement that “The unrestricted application of AI by the military, using AI as a tool to violate the sovereignty of other nations … and giving algorithms the power to determine life and death not only erode ethical restraints and accountability in wars, but also risk technological runaway.”
Maybe it is all of the dystopian novels that I have read or the lack of clarity from governments on how AI will be used in military scenarios, but letting the robots be in charge does make me wonder if the new motto should be “Stop to Question and Wonder if it is Ethical.”