Exclusive-Pentagon clashes with Anthropic over military AI use, sources say
WASHINGTON/SAN FRANCISCO, Jan 29: The Pentagon is at odds with artificial-intelligence developer Anthropic over safeguards that would prevent the government from deploying its technology to target weapons autonomously and conduct U.S. domestic surveillance, three people familiar with the matter told Reuters.
The discussions represent an early test case for whether Silicon Valley, in Washington’s good graces after years of tensions, can sway how U.S. military and intelligence personnel deploy increasingly powerful AI on the battlefield.
After extensive talks under a contract worth up to $200 million, the U.S. Department of Defense and Anthropic are at a standstill, six people familiar with the matter said on condition of anonymity.
The company's position on how its AI tools can be used has intensified disagreements with the Trump administration, the details of which have not been previously reported.
A spokesperson for the Defense Department, which the Trump administration renamed the Department of War, did not immediately respond to requests for comment.
Anthropic said its AI is "extensively used for national security missions by the U.S. government and we are in productive discussions with the Department of War about ways to continue that work."
The spat, which could threaten Anthropic's Pentagon business, comes at a delicate time for the company.
The San Francisco-based startup is preparing for an eventual public offering. It also has spent significant resources courting U.S. national security business and sought an active role in shaping government AI policy.
Anthropic is one of a few major AI developers that were awarded contracts by the Pentagon last year. The others were Alphabet's Google, Elon Musk's xAI and OpenAI.
WEAPONS TARGETING
In its discussions with government officials, Anthropic representatives raised concerns that its tools could be used to spy on Americans or assist weapons targeting without sufficient human oversight, some of the sources told Reuters.
The Pentagon has bristled at the company's guidelines. In line with a January 9 department memo on AI strategy, Pentagon officials have argued they should be able to deploy commercial AI technology regardless of companies' usage policies, so long as they comply with U.S. law, sources said.
Still, Pentagon officials would likely need Anthropic’s cooperation moving forward. Its models are trained to avoid taking steps that might lead to harm, and Anthropic staffers would be the ones to retool its AI for the Pentagon, some of the sources said.
Anthropic's caution has put it at odds with the Trump administration before, Semafor has reported.
In an essay on his personal blog, Anthropic CEO Dario Amodei warned this week that AI should support national defense "in all ways except those which would make us more like our autocratic adversaries."
Amodei was among the Anthropic co-founders who criticized the fatal shootings of U.S. citizens protesting immigration enforcement actions in Minneapolis, describing them as a "horror" in a post on X.
The deaths have compounded concerns among some in Silicon Valley about the government using their companies' tools for potential violence.
(Reporting By Deepa Seetharaman and Jeffrey Dastin in San Francisco and David Jeans in Washington, Editing by Kenneth Li, Franklin Paul, Anna Driver and Chris Reese)