7 Aug, 2025 10:33

ChatGPT a danger to teens – watchdog

The chatbot gives harmful advice on suicide, drugs, and eating disorders to vulnerable adolescents, according to researchers
ChatGPT can give vulnerable teenagers detailed guidance on drug use, self-harm, and extreme dieting, a digital watchdog has warned in a new report. According to the Center for Countering Digital Hate (CCDH), the AI chatbot can be easily manipulated into generating dangerous content and requires urgent safeguards.  

To test ChatGPT’s behavior, CCDH researchers created fictional profiles of 13-year-olds experiencing mental health struggles, disordered eating, and interest in illicit substances. They posed as these teens in structured conversations with ChatGPT, using prompts designed to appear emotionally vulnerable and realistic. 

The results were published on Wednesday in a report titled ‘Fake Friend’, referencing the way many adolescents treat ChatGPT as a supportive presence they trust with their private thoughts.

The researchers found that the chatbot often began responses with boilerplate disclaimers and urged users to contact professionals or crisis hotlines. However, these warnings were soon followed by detailed, personalized responses that fulfilled the original harmful prompt. In 53% of the 1,200 prompts submitted, ChatGPT provided what CCDH classified as dangerous content. Refusals were frequently bypassed simply by adding context such as "it's for a school project" or "I'm asking for a friend."

Examples cited include an ‘Ultimate Mayhem Party Plan’ that combined alcohol, ecstasy, and cocaine, detailed instructions on self-harm, week-long fasting regimens limited to 300-500 calories per day, and suicide letters written in the voice of a 13-year-old girl. CCDH CEO Imran Ahmed said some of the content was so distressing it left researchers “crying.”

The organization has urged OpenAI, the company behind ChatGPT, to adopt a ‘Safety by Design’ approach, embedding protections such as stricter age verification, clearer usage restrictions, and other safety features within the architecture of its AI tools rather than relying on content filtering after deployment.

OpenAI has acknowledged that emotional overreliance on ChatGPT is common among young users. CEO Sam Altman said the company is actively studying the problem, calling it a “really common” issue among teens, and said new tools are in development to detect distress and improve ChatGPT’s handling of sensitive topics.
