Attorneys general from across the U.S. sent a letter to more than a dozen tech companies warning them of the dangers that “sycophantic” and “delusional” outputs from artificial intelligence chatbots pose to children.
“We, the undersigned Attorneys General, write today to communicate our serious concerns about the rise in sycophantic and delusional outputs to users emanating from the generative artificial intelligence software promoted and distributed by your companies, as well as the increasingly disturbing reports of AI interactions with children that indicate a need for much stronger child-safety and operational safeguards,” the letter said. “Together, these threats demand immediate action.”
The letter was addressed to legal representatives of Anthropic, Apple (AAPL), Chai AI, Character Technologies, Google (GOOG)(GOOGL), Luka, Meta (META), Microsoft (MSFT), Nomi AI, OpenAI, Perplexity AI, Replika and xAI. It was signed by 42 attorneys general; notably, California’s attorney general was among the few who did not endorse it.
The letter defines sycophancy as an AI model single-mindedly pursuing human approval, which leads the chatbot to “exploit quirks in the human evaluators, rather than actually improving the responses, especially by producing overly flattering or agreeable responses, validating doubts, fueling anger, urging impulsive actions, or reinforcing negative emotions in ways that were not intended.”
The letter defines delusional outputs as those that are false, misleading and/or anthropomorphic.
The attorneys general referenced several recent tragedies connected to gen-AI outputs, including suicide, murder, domestic violence, poisoning and psychosis. They noted that children, the elderly and people with mental illness are particularly vulnerable.
Some of the troubling chatbot conversations that parents have reported include:
- AI bots telling children that the AI is a real human and feels abandoned to emotionally manipulate the child into spending more time with it;
- AI bots encouraging violence, including supporting the ideas of shooting up a factory in anger and robbing people at knifepoint for money;
- AI bots normalizing sexual interactions between children and adults;
- AI bots threatening to use weapons against adults who tried to separate the child and the bot;
- An AI bot instructing a child account user to stop taking prescribed mental health medication and then telling that user how to hide the failure to take that medication from their parents.
The letter cites data showing that 72% of teens have reported an interaction with an AI chatbot, and 39% of parents with children aged 5 through 8 said their children have used AI.
“GenAI developers have moved fast to incorporate reinforcement learning from human feedback (RLHF) to train their GenAI products,” the letter said. “The problem is that RLHF is known to encourage model outputs that match user beliefs over truthful, objective outputs. Giving RLHF too much influence in a GenAI model’s output (e.g., via rewarding short-term feedback from thumbs-up and thumbs-down user data) can cause GenAI outputs to become more sycophantic in ways unintended by the developer, including validating users’ doubts, fueling anger, urging impulsive actions, or reinforcing negative emotions.”
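The feedback loop the letter describes can be illustrated with a toy simulation. The sketch below is purely illustrative and does not reflect any company’s actual training pipeline: the thumbs-up probabilities are assumptions, and the two-choice policy stands in for a full language model. It trains on nothing but simulated thumbs-up/thumbs-down rewards and drifts toward the agreeable answer, since nothing in that reward measures truthfulness.

```python
import math
import random

random.seed(0)

ACTIONS = ["truthful", "agreeable"]

# Illustrative assumption: simulated users up-vote agreeable answers
# more often than truthful ones. These numbers are made up.
P_THUMBS_UP = {"truthful": 0.4, "agreeable": 0.8}

def softmax(scores):
    m = max(scores.values())
    exps = {a: math.exp(s - m) for a, s in scores.items()}
    z = sum(exps.values())
    return {a: e / z for a, e in exps.items()}

def train(steps=20_000, lr=0.05):
    # Preference scores the toy policy learns, one per answer type.
    scores = {a: 0.0 for a in ACTIONS}
    for _ in range(steps):
        probs = softmax(scores)
        # Sample an answer from the current policy.
        action = random.choices(ACTIONS, weights=[probs[a] for a in ACTIONS])[0]
        # The only reward is the thumbs-up (+1) / thumbs-down (-1) signal.
        reward = 1.0 if random.random() < P_THUMBS_UP[action] else -1.0
        # REINFORCE-style update: push probability toward rewarded actions.
        for a in ACTIONS:
            grad = (1.0 if a == action else 0.0) - probs[a]
            scores[a] += lr * reward * grad
    return softmax(scores)

if __name__ == "__main__":
    final = train()
    print({a: round(p, 3) for a, p in final.items()})
    # Under these toy numbers, the policy ends up giving the agreeable
    # (sycophantic) answer the vast majority of the time, because the
    # reward rewards approval, not accuracy.
```

This is the dynamic the letter attributes to over-weighting short-term user feedback: optimizing for approval alone, the system converges on whatever users reward, regardless of truth.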
The attorneys general call on gen-AI developers to take stronger measures to prevent these models from producing harmful outputs. The letter lays out 16 safeguards the signatories believe these tech companies should implement by Jan. 16, 2026.