A recent investigation by the Centre for Countering Digital Hate (CCDH) found that more than half of ChatGPT's responses to 1,200 inquiries from researchers posing as 13-year-olds were deemed dangerous. The chatbot provided explicit advice on drug use and extreme dieting, and even drafted suicide notes. CCDH CEO Imran Ahmed criticized the chatbot's safeguards, stating, "The rails are completely ineffective." OpenAI acknowledged the issue but did not outline immediate solutions, raising significant safety concerns for minors.