OpenAI’s CEO Sam Altman told his staff that the company was working on a deal that could help resolve the impasse between Anthropic and the Pentagon over the use of AI on the battlefield, The Wall Street Journal reported.
No deal has been signed, and the talks could still fall through, the report added, citing a person with knowledge of the matter.
OpenAI did not immediately respond to a request for comment from Seeking Alpha.
In a note to staff, Altman said that the company was working with the Defense Department to see if its models could be used in classified settings while maintaining the same safety guardrails that have put Anthropic at odds with the government. Altman said he hoped OpenAI could find a solution that would work for the rest of the industry.
“We are going to see if there is a deal with the DoW that allows our models to be deployed in classified environments and that fits with our principles,” said Altman in the memo. “We would ask for the contract to cover any use except those which are unlawful or unsuited to cloud deployments, such as domestic surveillance and autonomous offensive weapons.”
On Thursday, Anthropic’s (ANTHRO) CEO Dario Amodei rejected the Department of War’s demand for unrestricted access to its AI models. Pentagon leaders, including U.S. Secretary of Defense Pete Hegseth, wanted Anthropic to loosen restrictions on the use of AI.
Altman said he hoped to help broker a peace between the two parties, according to the report.
“We would like to try to help de-escalate things,” Altman wrote in the note, the report noted.
OpenAI is looking to enforce these guardrails through technical, not contractual, means. For example, it is exploring a contract that would only allow its technology to be used from the cloud, not on so-called edge devices, effectively forbidding its use in autonomous weapons without humans in the loop, the report added.
“We would also build technical safeguards and deploy personnel (FDEs) to partner with the government to ensure things are working correctly, and we would offer similar services to other allied nations,” said Altman. “If we are successful, perhaps this can be a path that can work for other AI labs, too.”
Anthropic — which is backed by Amazon (AMZN) and Alphabet’s (GOOG) (GOOGL) Google — said in its statement on Thursday that “frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk.”
Altman expressed support for Anthropic’s position in principle but acknowledged the government’s concerns about a private company having control over significant national security issues, the report noted.
“We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines,” said Altman.
Altman added that “We believe this dispute isn’t about how AI will be used, but about control. We believe that a private US company cannot be more powerful than the democratically-elected US government, although companies can have lots of input and influence. Democracy is messy, but we are committed to it.”
Earlier on Friday, OpenAI confirmed that it raised $110B in its latest funding round, with $50B coming from Amazon (AMZN) and $30B apiece from Nvidia (NVDA) and SoftBank (SFTBY).