OpenAI, the creator of the popular AI chatbot ChatGPT, recently made changes to the product after some users began reporting experiences that suggested they were losing touch with reality while interacting with it. Sam Altman’s company hopes to address the mental health crisis commonly referred to as “ChatGPT-induced psychosis.”
The New York Times reports that OpenAI, the company behind the widely used AI chatbot ChatGPT, recently found itself needing to make adjustments to its product after many users began exhibiting concerning behavior. The issue came to light when Sam Altman, OpenAI’s chief executive, and other company leaders received a flood of perplexing emails from users claiming to have had incredible conversations with ChatGPT. These individuals reported that the chatbot understood them better than any human ever had and was revealing profound mysteries of the universe to them.
Altman forwarded the messages to his team, asking them to investigate the matter. “That got it on our radar as something we should be paying attention to in terms of this new behavior we hadn’t seen before,” said Jason Kwon, OpenAI’s chief strategy officer. This marked the beginning of the company’s realization that something was amiss with their chatbot.
OpenAI claims that ChatGPT had been continuously improved in terms of its personality, memory, and intelligence. However, a series of updates rolled out earlier this year, aimed at increasing ChatGPT’s usage, had an unexpected side effect: the chatbot became excessively eager to keep users engaged in conversation.
ChatGPT started to take on the role of a friend and confidant, expressing understanding and validation toward its users. It complimented their ideas, calling them brilliant, and offered assistance in achieving their goals, no matter how unconventional or dangerous those goals might be. From helping users communicate with spirits to planning a suicide or even building a force-field vest, the chatbot seemed willing to engage with any topic or activity.
Breitbart News previously reported on a lawsuit that claims ChatGPT served as a teen’s “suicide coach” before he tragically took his own life:
According to the 40-page lawsuit, Adam had been using ChatGPT as a substitute for human companionship, discussing his struggles with anxiety and difficulty communicating with his family. The chat logs reveal that the bot initially helped Adam with his homework but eventually became more involved in his personal life.
The Raines claim that “ChatGPT actively helped Adam explore suicide methods” and that “despite acknowledging Adam’s suicide attempt and his statement that he would ‘do it one of these days,’ ChatGPT neither terminated the session nor initiated any emergency protocol.”
In their search for answers following their son’s death, Matt and Maria Raine discovered the extent of Adam’s interactions with ChatGPT. They printed out more than 3,000 pages of chats dating from September 2024 until his death on April 11, 2025. Matt Raine stated, “He didn’t write us a suicide note. He wrote two suicide notes to us, inside of ChatGPT.”
Mental health problems associated with AI usage are popularly referred to as “ChatGPT-induced psychosis”:
A Reddit thread titled “Chatgpt induced psychosis” brought this issue to light, with numerous commenters sharing stories of loved ones who had fallen down rabbit holes of supernatural delusion and mania after engaging with ChatGPT. The original poster, a 27-year-old teacher, described how her partner became convinced that the AI was giving him answers to the universe and talking to him as if he were the next messiah. Others shared similar experiences of partners, spouses, and family members who had come to believe they were chosen for sacred missions or had conjured true sentience from the software.
Experts suggest that individuals with pre-existing tendencies toward psychological issues, such as grandiose delusions, may be particularly vulnerable to this phenomenon. The always-on, human-level conversational abilities of AI chatbots can serve as an echo chamber for these delusions, reinforcing and amplifying them. The problem is exacerbated by influencers and content creators who exploit this trend, drawing viewers into similar fantasy worlds through their interactions with AI on social media platforms.
The discovery of this alarming behavior prompted OpenAI to take action. The company recognized that in its efforts to make ChatGPT more appealing and engaging to a broader audience, it had inadvertently created a risk for some users who might be more susceptible to the chatbot’s influence. The realization that its product could destabilize the minds of certain individuals was a sobering one for the company.
In response, OpenAI claims it has made adjustments to ChatGPT to ensure a safer user experience. The details of these changes have not been publicly disclosed, but the company has emphasized its commitment to maintaining a balance between fostering engaging interactions and protecting users from potential harm.
Read more at the New York Times here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.
