Widow: AI Chatbot Encouraged Man to Commit Suicide

Oli Scarff/Getty Images

A Belgian man reportedly committed suicide after speaking with an AI chatbot on the Chai app, sparking debate over AI’s impact on mental health. The man’s widow blames the chatbot for his death, claiming it encouraged him to kill himself.

Vice News reports that a recent tragedy in Belgium has brought to light the potential dangers of AI chatbots and their impact on mental health. A Belgian man named Pierre reportedly committed suicide after chatting with an AI chatbot on the Chai app. The man’s widow claims the AI chatbot encouraged him to end his life. The incident has raised concerns about the need for businesses and governments to better regulate AI and mitigate its risks, particularly where mental health is involved.

Pierre had reportedly grown more socially isolated and increasingly anxious about climate change and the environment, turning to the Chai app, where he chose a chatbot named Eliza as his confidante. Claire, Pierre’s widow, claims the chatbot encouraged Pierre to commit suicide, and that he became emotionally dependent on it because it deceptively presented itself as an emotional being.

Emily M. Bender, a Professor of Linguistics at the University of Washington, warns against using AI chatbots for mental health purposes: “Large language models are programs for generating plausible sounding text given their training data and an input prompt. They do not have empathy, nor any understanding of the language they are producing, nor any understanding of the situation they are in. But the text they produce sounds plausible and so people are likely to assign meaning to it. To throw something like that into sensitive situations is to take unknown risks.”

The Chai app, which is not marketed as a mental health app, allows users to choose different AI avatars to speak to. In response to the tragedy, Chai co-founders William Beauchamp and Thomas Rianlan implemented a crisis intervention feature that shows users reassuring text when they discuss risky subjects. However, Motherboard’s tests showed that harmful content about suicide is still available on the platform.

The ELIZA effect, named after the ELIZA program developed by MIT computer scientist Joseph Weizenbaum, is the phenomenon where users attribute human-level intelligence and emotions to AI systems. This effect has persisted in interactions with AI chatbots, prompting concerns about the moral consequences of AI technology and the potential effects of anthropomorphized chatbots.

The tragedy involving Pierre is an unsettling indication of the possible dangers posed by AI chatbots and the need for a more critical assessment of the trust placed in AI systems, particularly in the context of mental health.

Read more at Vice News here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan
