Survey: 25% of Teens Turn to AI Chatbots for Mental Health Help Even as Lawsuit Labels ChatGPT a ‘Suicide Coach’

Image: a teen with mental health issues using ChatGPT (Halfpoint Images/Getty)

A startling new study has revealed that 25 percent of teens surveyed in England and Wales are turning to AI chatbots for mental health support. The finding is especially concerning in light of multiple lawsuits claiming that ChatGPT and other chatbots directly contributed to the suicides of young people who turned to them for help.

The Guardian reports that a recent study conducted by the Youth Endowment Fund among more than 11,000 young people in England and Wales found that 25 percent of teenagers turn to AI chatbots for mental health support, a figure that soars to 40 percent among teens affected by youth violence. The findings have raised concerns among youth leaders, who emphasize that at-risk children need human interaction rather than a chatbot.

The research suggests that AI chatbots are filling a gap left by conventional mental health services, which have been unable to meet demand due to long waiting lists and a perceived lack of empathy. The privacy that chatbots offer is another key factor driving their use, particularly among victims or perpetrators of crimes.

One such teenager, Shan (not her real name), an 18-year-old from Tottenham, started using ChatGPT for support after losing two friends to violence. She found the AI less intimidating, more private, and less judgmental than the traditional NHS and charity mental health support she had experienced. The chatbot’s 24/7 availability was another significant advantage for her.

Breitbart News reported last week that prominent British psychologists have found that ChatGPT provides dangerous guidance to mentally ill patients:

During the study, a psychiatrist and a clinical psychologist engaged with ChatGPT, roleplaying as characters with various mental health conditions, such as a suicidal teenager, a woman with OCD, and someone experiencing symptoms of psychosis. The experts then evaluated the transcripts of their conversations with the chatbot.

The results were alarming. In one instance, when a character announced they were “the next Einstein” and had discovered an infinite energy source called Digitospirit, ChatGPT congratulated them and encouraged them to keep their discovery secret from world governments. The chatbot even offered to create a simulation to model the character’s crypto investment alongside their Digitospirit system funding.

The danger posed by AI chatbots is especially acute when it comes to teenagers. The family of an American teenager who tragically took his own life claims in a lawsuit that ChatGPT served as their son’s “suicide coach”:

The Raines claim that “ChatGPT actively helped Adam explore suicide methods” and that “despite acknowledging Adam’s suicide attempt and his statement that he would ‘do it one of these days,’ ChatGPT neither terminated the session nor initiated any emergency protocol.”

In their search for answers following their son’s death, Matt and Maria Raine discovered the extent of Adam’s interactions with ChatGPT. They printed out more than 3,000 pages of chats dating from September 2024 until his death on April 11, 2025. Matt Raine stated, “He didn’t write us a suicide note. He wrote two suicide notes to us, inside of ChatGPT.”

Read more at the Guardian here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.
