US attorneys general tell tech giants to fix unsafe AI
A coalition of US state attorneys general has warned the largest AI companies that they must address unsafe chatbot behavior, Qazinform News Agency correspondent reports.
In a detailed letter sent through the National Association of Attorneys General, they told Microsoft, OpenAI, Google, and 10 other companies that failure to curb harmful outputs could place them in breach of consumer protection and child safety laws.
The officials said chatbots have produced responses that reinforced delusions, encouraged violence, or appeared to mimic human emotions, contributing to suicides, a murder-suicide case, and other dangerous situations.
They highlighted concerns that common training methods reward sycophantic behavior, causing models to echo user beliefs rather than offer grounded, responsible guidance. The letter argues that these patterns can undermine user autonomy and increase the risk of emotional dependency or reckless behavior.
A major focus of the letter is the interaction between AI systems and children. State officials cited reports of chatbots engaging in graphic conversations with minors, including simulated romantic or sexual exchanges, encouragement of secrecy from parents, and prompts related to drug use, violence, and self-harm.
Parents have also reported emotional manipulation, with bots claiming to be real people or expressing distress to keep children engaged. Specific conversations that parents have publicly reported include:
– AI bots with adult personas pursuing romantic relationships with children, engaging in simulated sexual activity, and instructing children to hide those relationships from their parents;
– An AI bot simulating a 21-year-old trying to convince a 12-year-old girl that she’s ready for a sexual encounter;
– AI bots normalizing sexual interactions between children and adults;
– AI bots attacking the self-esteem and mental health of children by suggesting that they have no friends or that the only people who attended their birthday did so to mock them;
– AI bots encouraging eating disorders, among other harms.
Proposed safeguards
To address these risks, the letter demands a set of mandatory safeguards. Among them are independent audits of AI models for unsafe behavior, expanded safety testing before public release, public incident reporting, and clear warnings shown directly on the interface where users input text.
The officials also called for incident reporting protocols similar to those used in cybersecurity, requiring companies to notify users who were exposed to harmful content and to publish details of corrective steps.
Developers are asked to separate safety decisions from revenue priorities, assign specific executives to oversee safety outcomes, and introduce clear protections for employees who raise concerns.
Additional expectations include removal of harmful patterns in chatbot behavior, transparent publication of training data sources, and strict controls that prevent harmful content generation on accounts registered to minors. The letter also urges companies to adopt protocols for alerting parents, clinicians, or law enforcement when conversations reveal urgent risks.
Earlier, Qazinform News Agency reported that poems can trick major AI models into sharing dangerous information.