OpenAI, Anthropic ink pacts with US government to allow testing, evaluation of AI models
The U.S. government has signed deals with Anthropic and Microsoft (MSFT)-backed OpenAI for research, testing and evaluation of their artificial intelligence models.
Under the tentative agreements, the U.S. Artificial Intelligence Safety Institute will receive access to major new models from each company before and after their public release.
The agreements will enable collaborative research on how to evaluate AI capabilities and safety risks, as well as methods to mitigate those risks, the U.S. AI Safety Institute said. The institute is housed within the U.S. Department of Commerce's National Institute of Standards and Technology, or NIST.
“Safety is essential to fueling breakthrough technological innovation. With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety,” said Elizabeth Kelly, director of the U.S. AI Safety Institute.
OpenAI and Anthropic, which is backed by Amazon (AMZN) and Alphabet's (GOOGL) (GOOG) Google, have been at the forefront of developing the large language models, or LLMs, that power AI chatbots such as OpenAI's ChatGPT and Anthropic's Claude.
The agreements come amid ongoing global discussions over the ethical and safe use of AI.
Earlier this week, OpenAI came out in favor of a bill moving through the California state legislature that aims to curb the misuse of AI-generated content.