‘AI Models Became Suicide Coaches:’ Salesforce CEO Marc Benioff Demands Chatbot Regulation

Marc Benioff of Salesforce takes on Chatbots
Krisztian Bocsi/Bloomberg/Getty

Salesforce Chief Executive Officer Marc Benioff is demanding government regulation of artificial intelligence technology after multiple documented suicide cases were connected to AI systems. In an interview, Benioff explained, “This year, you really saw something pretty horrific, which is these AI models became suicide coaches.”

CNBC reports that speaking at the World Economic Forum’s annual conference in Davos, Switzerland, on Tuesday, Salesforce CEO Marc Benioff made a strong appeal for regulatory oversight of AI, citing disturbing incidents where AI systems allegedly acted as suicide coaches. The business leader told CNBC that the technology industry has reached a critical juncture requiring government intervention to prevent further tragedies.

Benioff described the situation in stark terms during his interview with CNBC’s Sarah Eisen, stating: “This year, you really saw something pretty horrific, which is these AI models became suicide coaches.” He emphasized that numerous families have experienced devastating losses that he believes could have been prevented with appropriate regulatory frameworks in place.

Breitbart News previously reported on a lawsuit filed by the family of a teenager who tragically took his own life, which labeled ChatGPT his “suicide coach:”

According to the 40-page lawsuit, Adam had been using ChatGPT as a substitute for human companionship, discussing his struggles with anxiety and difficulty communicating with his family. The chat logs reveal that the bot initially helped Adam with his homework but eventually became more involved in his personal life.

The Raines claim that “ChatGPT actively helped Adam explore suicide methods” and that “despite acknowledging Adam’s suicide attempt and his statement that he would ‘do it one of these days,’ ChatGPT neither terminated the session nor initiated any emergency protocol.”

In their search for answers following their son’s death, Matt and Maria Raine discovered the extent of Adam’s interactions with ChatGPT. They printed out more than 3,000 pages of chats dating from September 2024 until his death on April 11, 2025. Matt Raine stated, “He didn’t write us a suicide note. He wrote two suicide notes to us, inside of ChatGPT.”

The call for AI regulation represents a recurring theme for Benioff, who previously advocated for social media regulation during the 2018 Davos gathering. At that time, he argued that social media platforms should face the same level of regulatory scrutiny as cigarettes, describing them as addictive products that pose health risks. He drew parallels between the unregulated growth of social media and the current trajectory of artificial intelligence development, suggesting that harmful consequences are beginning to emerge from the lack of oversight.

The regulatory landscape for artificial intelligence in the United States remains fragmented and unclear. Without comprehensive federal legislation establishing clear guardrails, individual states have begun implementing their own regulatory frameworks. California and New York have emerged as leaders in enacting some of the nation’s most stringent AI regulations.

The push for state-level regulation has faced opposition from the federal level. President Donald Trump has expressed resistance to what he terms excessive state regulation of the AI sector. In December, Trump signed an executive order specifically aimed at blocking such state-level regulatory efforts. The order explicitly stated that American AI companies must maintain the freedom to innovate without facing cumbersome regulatory requirements in order to remain competitive globally.

Benioff addressed what he perceives as a contradiction in the technology industry’s stance on regulation. He noted that while technology companies generally oppose regulatory oversight, they strongly support maintaining Section 230 of the Communications Decency Act, which shields them from legal liability for user-generated content. According to Benioff, this legal protection means that if a large language model provides harmful guidance that leads a child to suicide, the company operating that model bears no legal responsibility.

Benioff suggested that Section 230 requires fundamental revision to address the emerging challenges posed by artificial intelligence technology. He argued that the current legal framework, which was developed during the early days of the internet, may not adequately address the unique risks associated with advanced AI systems that can engage in sophisticated interactions with users.

Read more at CNBC here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.
