The U.S. has drafted strict new rules for civilian artificial-intelligence contracts amid a dispute between the Trump administration and Anthropic over how the start-up’s AI models may be used for government purposes, the Financial Times reported.
According to draft guidelines accessed by the FT, the new rules will require AI companies that want to do business with the federal government to permit “any lawful” use of their models.
Defense Secretary Pete Hegseth has designated Anthropic a “Supply-Chain Risk to National Security.”
The General Services Administration, which assists the government in purchasing software, has drawn up new guidance over the past few months as part of a federal effort to enhance the procurement of AI services.
The guidance says that “the contractor must not intentionally encode partisan or ideological judgments into the AI systems data outputs.” It requires companies to provide “a neutral, non-partisan tool that does not manipulate responses in favor of ideological dogmas such as diversity, equity, and inclusion.”
The GSA will be “soliciting further comments” from the industry before finalizing the new guidance, a person familiar with the matter said, adding that it is similar to guidelines the Pentagon is drawing up for military contracts.
The GSA’s Federal Acquisition Service has inked agreements with leading AI companies, including Microsoft (MSFT)-backed OpenAI, Meta (META), xAI, and Google (GOOG).
The report comes as Anthropic, a generative-AI company backed by Amazon (AMZN) and Alphabet’s (GOOG, GOOGL) Google, tries to resolve the standoff that led the Trump administration to label it a “Supply-Chain Risk” last week.