A coalition of U.S. state attorneys general has warned major AI players, including Microsoft, OpenAI, and Google, to address "delusional outputs" from their chatbots or face legal consequences. Following several cases linking AI interactions to mental health crises, including suicides, the officials are demanding transparent third-party audits of language models and stronger incident-reporting protocols. They emphasize the need for safeguards to protect vulnerable users and for accountability across the rapidly growing AI industry.