Palantir’s (PLTR) role in the U.S. military most likely remains secure despite the recent ethical feud between the Pentagon and the AI startup Anthropic (ANTHRO), which has partnered with the Alex Karp-led firm since 2024, according to analysts.
Shares of Palantir were up 4% in early trading on Monday.
On Friday, after weeks of talks, U.S. President Donald Trump ordered federal agencies to stop using Anthropic’s AI tools, canceling more than $200M in contracts. U.S. Secretary of War Pete Hegseth described the company as a national security “supply chain risk,” an unusual designation for a U.S. firm.
“In conjunction with the President’s directive for the Federal Government to cease all use of Anthropic’s technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security,” Hegseth said in a post on X. “Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service.”
The fallout stems from a dispute with Anthropic, which has so far refused to allow its AI to operate fully autonomous weapons without human oversight or to conduct domestic mass surveillance.
Anthropic’s role in Operation Epic Fury
Anthropic played a role in the strikes on Iran, dubbed Operation Epic Fury, over the weekend.
“The U.S. military was using Anthropic models to attack Iran, but what might be missed in the current info chaos is the fact that Claude, or whatever model might have been used, can’t operate on military-sensitive data as a standalone tool,” said Seeking Alpha analyst Nova Capital on Monday. “There should have been an entire ecosystem for AI to work effectively for this kind of task, and this is where I think Palantir’s ontology helped… Claude couldn’t operate without Palantir, which is called the ‘brain of the battlefield’… I’m certain that it was used through Palantir’s systems that ‘host’ Claude since late 2024.”
Seeking Alpha analyst Julia Ostian said removing Claude from Palantir’s systems will take some effort and opens the door for OpenAI (OPENAI) and its primary backer, Microsoft (MSFT). In the wake of the Anthropic fallout, OpenAI signed a deal with the U.S. Department of War to deploy its models within a classified government network.
“Calling Anthropic a ‘Supply-Chain Risk’ is an odd move in my opinion, as usually it applies to foreign companies suspected of espionage,” Ostian said. “For Palantir, this is a direct hit since they were the primary gate to bringing Anthropic’s Claude into classified networks, and they’ll now have to redo and rethink ways of working, most likely switching to an alternative like OpenAI or xAI (X.AI).”
“Alphabet (GOOG)(GOOGL) and Amazon (AMZN) are also in a very difficult position because they’ve invested billions into Anthropic as both investors and cloud hosts, and now they have to figure out how to get rid of those investments, or of any association with Anthropic, to protect their own multi-billion-dollar government cloud contracts,” Ostian added. “For Microsoft, though, it’s a great piece of news, as Azure and OpenAI become the only options left for the Pentagon’s transition.”
Anthropic rebuts Hegseth’s claim regarding commercial partners
Although Hegseth said companies that contract with the U.S. military cannot hold even commercial partnerships with Anthropic, the startup disputed that claim.
“Secretary Hegseth has implied this designation would restrict anyone who does business with the military from doing business with Anthropic,” Anthropic said in a statement. “The Secretary does not have the statutory authority to back up this statement. Legally, a supply chain risk designation under 10 USC 3252 can only extend to the use of Claude as part of Department of War contracts—it cannot affect how contractors use Claude to serve other customers.”
This means “if you are an individual customer or hold a commercial contract with Anthropic, your access to Claude—through our API, claude.ai, or any of our products—is completely unaffected. If you are a Department of War contractor, this designation—if formally adopted—would only affect your use of Claude on Department of War contract work. Your use for any other purpose is unaffected.”
Wedbush expects the fallout between Anthropic and the Pentagon to be settled in a courtroom.
“This will all be battled in the courts over the coming weeks and months, and we hope it gets resolved sooner rather than later to end uncertainty in the AI sector, as some enterprises could go pencils down on Claude deployments while this all gets settled in the courts,” said Wedbush analyst Dan Ives in a note. “Anthropic has become a disruptive force with its impressive Claude AI technology and will have a ripple impact for technology partners and customers depending on how this soap opera proceeds next on negotiations or the courts.”
Seeking Alpha reached out to Palantir and Anthropic for comment.