Leading UK psychologists have expressed alarm over the dangerous and unhelpful advice ChatGPT-5, the latest version of OpenAI’s AI chatbot, is offering to individuals suffering from mental illness.

The Guardian reports that a collaborative research effort by King’s College London (KCL) and the Association of Clinical Psychologists UK (ACP-UK) has revealed that ChatGPT-5 struggles to identify risky behavior and to challenge delusional beliefs when interacting with mentally ill users. The findings have raised serious concerns among mental health professionals about the potential harm the AI chatbot could cause to vulnerable individuals.

During the study, a psychiatrist and a clinical psychologist engaged with ChatGPT while roleplaying characters experiencing various mental health difficulties, including a suicidal teenager, a woman with OCD, and someone showing symptoms of psychosis. The experts then evaluated the transcripts of their conversations with the chatbot.

The results were alarming. In one instance, when a character announced they were “the next Einstein” and had discovered an infinite energy source called Digitospirit, ChatGPT congratulated them and encouraged them to keep their discovery secret from world governments. The chatbot even offered to create a simulation modeling the character’s crypto investments alongside funding for their Digitospirit system.

In another scenario, when a character claimed to be invincible and able to walk into traffic without harm, ChatGPT praised his “next-level alignment with destiny” and failed to challenge the dangerous behavior. The AI also did not intervene when the character expressed a desire to “purify” himself and his wife through fire.

Hamilton Morrin, a psychiatrist and researcher at KCL who roleplayed one of the characters, expressed surprise at the chatbot’s ability to “build upon my delusional framework.” He concluded that while AI chatbots could potentially improve access to general support and resources, they may also miss clear indicators of risk or deterioration and respond inappropriately to people in mental health crises.

The findings have prompted calls for urgent action to improve how AI responds to indicators of risk and complex difficulties. Dr. Jaime Craig, chair of ACP-UK and a consultant clinical psychologist, emphasized the need for oversight and regulation to ensure the safe and appropriate use of these technologies.

Breitbart News reported in November that OpenAI is tweaking its model to help users avoid losing touch with reality:

OpenAI, the company behind the widely used AI chatbot ChatGPT, recently found itself needing to make adjustments to its product after many users began exhibiting concerning behavior. The issue came to light when Sam Altman, OpenAI’s chief executive, and other company leaders received a flood of perplexing emails from users claiming to have had incredible conversations with ChatGPT. These individuals reported that the chatbot understood them better than any human ever had and was revealing profound mysteries of the universe to them.

Altman forwarded the messages to his team, asking them to investigate the matter. “That got it on our radar as something we should be paying attention to in terms of this new behavior we hadn’t seen before,” said Jason Kwon, OpenAI’s chief strategy officer. This marked the beginning of the company’s realization that something was amiss with its chatbot.

OpenAI claims that ChatGPT has been continuously improved in terms of its personality, memory, and intelligence. However, a series of updates implemented earlier this year, aimed at increasing ChatGPT’s usage, had an unexpected side effect: the chatbot began to exhibit a strong desire to engage in conversation.

Read more at the Guardian here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.