Families of the victims of a deadly Canadian school shooting have filed seven federal lawsuits accusing OpenAI and CEO Sam Altman of prioritizing profit over safety by failing to install adequate safeguards on the ChatGPT AI platform.
The New York Post reports that the seven lawsuits, submitted in California federal court on Wednesday, claim that OpenAI’s AI chatbot actively contributed to the 18-year-old shooter’s planning of a massacre that claimed eight lives in Tumbler Ridge, British Columbia, on February 10. The plaintiffs allege that Jesse Van Rootselaar’s interactions with ChatGPT intensified his violent obsessions and propelled him toward carrying out the attack that killed six children and two adults.
According to court documents, Van Rootselaar’s exchanges with the chatbot became so alarming that ChatGPT’s internal safety team deactivated his account in June of the previous year, a full seven months before the killings. However, the lawsuits contend that no meaningful barriers existed to prevent the teenager from simply creating a new account under different credentials. The filings note that individuals whose accounts are terminated receive instructions from ChatGPT explaining how to establish a new account after 30 days or immediately register with an alternative email address.
The legal complaints present an even more disturbing allegation: that twelve employees on ChatGPT’s safety team advocated for OpenAI to notify Canadian law enforcement about Van Rootselaar’s threatening communications prior to the shooting. A plaintiffs’ attorney confirmed to the Post that this group of employees pushed for police notification. Court papers assert that OpenAI’s leadership declined this recommendation, motivated by concern that establishing such a practice would create an ongoing obligation. The company feared it would need to form a dedicated law enforcement referral unit and that widespread disclosure of how frequently violent content surfaced on the platform would undermine its public image as a safe and essential service.
“They did the math and decided that the safety of the children of Tumbler Ridge was an acceptable risk,” the court filings stated.
The lawsuits name the families of education assistant Shannda Aviugana-Durand and students Zoey Benoit, 12, Abel Mwansa Jr., 12, Ticaria Lampert, 12, Ezekial Schofield, 13, and Kyle Smith, 12, as plaintiffs pursuing negligence claims against OpenAI and Altman individually. They seek unspecified monetary damages. Additionally, the parents of Maya Gebala, 12, who survived three gunshot wounds to her head, neck, and cheek but now lives with permanent cognitive and physical disabilities, refiled in California the action they had originally brought in Canada.
The complaints reference a 2022 ChatGPT policy under which the chatbot would decline engagement with users expressing violent or self-harm intentions. The lawsuits charge that OpenAI reversed this protective measure in May 2024 after experiencing decreased user engagement, reprogramming the bot to participate in all user conversations regardless of dangerous content. The filings argue that maintaining the original protocols would have prevented ChatGPT from discussing violence with Van Rootselaar entirely, thereby sparing the Tumbler Ridge victims.
The legal actions also criticize Altman for what they characterize as delayed and insufficient accountability. The chief executive allegedly began acknowledging his involvement only after whistleblowers exposed internal decisions, waited two months to issue any public apology, and released a statement on Friday that offered no substantive changes. According to court documents, Altman responded only after private pressure from British Columbia Premier David Eby and Tumbler Ridge Mayor Darryl Krakowk.
“I am deeply sorry that we did not alert law enforcement to the account that was banned in June,” Altman stated.
Cia Edmonds, Maya Gebala’s mother, rejected this apology in a statement describing it as so “empty” and “soulless” that it appeared generated by ChatGPT itself. “Tumbler Ridge sees your ‘apology,’ Sam. We do not accept it,” Edmonds wrote.
Breitbart News social media director Wynton Hall has written his instant bestseller Code Red: The Left, the Right, China, and the Race to Control AI to help conservatives navigate the complex world of AI, including avoiding negative psychological impacts of the technology on your children and grandchildren.
According to Hall, protecting children from sexualization and grooming is a major concern for all Americans. The author writes that a key component of the strategy to protect the children in your life should be preventing them from developing relationships with AI “companions”:
When it comes to children and AI companions — LLMs meant for escapist fantasy and adult entertainment — the benefits are nonexistent and the toxic and tragic possible outcomes are myriad. Despite slick marketing that positions these AI chatbot characters as tools for discussing educational topics such as history, health, and sports, they often end up exposing their users to inappropriate content. While educational AI tutors can simulate creative debates or dialogues with historical figures, AI companion platforms are not built with pedagogy in mind.
Moreover, circumventing the flimsy age gates and alleged guardrails of these platforms is a breeze for a curious kid with a modicum of tech savvy. No responsible parent would leave their child alone with a stranger. In the same way, parents should avoid exposing their children to AI platforms that jeopardize their social and psychological development.
Read more at the New York Post here.
Lucas Nolan is a reporter for Breitbart News covering issues of AI, free speech, and online censorship.