OpenAI Faces Defamation Lawsuit over False Accusations Generated by ChatGPT

OpenAI logo seen on screen with ChatGPT website displayed on mobile in this illustration
Jonathan Raa/NurPhoto via Getty Images

OpenAI, the company behind the AI model ChatGPT, is being sued for defamation due to false information generated by its system in what could become a landmark case. The chatbot falsely accused a radio host of embezzlement and defrauding a charity.

The Verge reports that OpenAI, the company behind ChatGPT, is being sued for defamation as a result of false information produced by its AI system. Georgia-based radio host Mark Walters has filed a lawsuit against the company after ChatGPT falsely accused him of defrauding and embezzling money from a non-profit organization.

OpenAI co-founder and CEO Sam Altman (TechCrunch/Flickr)

The first-of-its-kind lawsuit, filed on June 5 in the Superior Court of Gwinnett County, Georgia, highlights the growing problem of AI systems producing false information. AI chatbots like ChatGPT have been known to make up dates, facts, and figures, a behavior known in the industry as "hallucinating," which has prompted numerous complaints.

AI-generated false information has caused significant harm in recent months: a professor threatened to fail his class over false claims of AI-assisted cheating, and a lawyer faces possible sanctions after ChatGPT supplied him with non-existent legal cases during research. The implications are far-reaching.

The case also calls into question whether businesses are legally liable for false or defamatory information produced by their AI systems. In the U.S., Section 230 of the Communications Decency Act (CDA) has historically protected internet companies from legal responsibility for content created by a third party and hosted on their platforms. The question of whether these protections apply to AI systems, which create information from scratch rather than just linking to data sources, remains unanswered.

In Walters' case, a journalist asked ChatGPT to summarize a real federal court case by linking to an online PDF. In response, the AI fabricated a false case summary that included untrue accusations against Walters. The journalist decided not to publish the summary and instead double-checked the facts, discovering the falsified information. It is still unknown how Walters learned of the false claims.

Law professor Eugene Volokh, who has written extensively on the legal liability of AI systems, offered his thoughts on the case. Volokh stated that while "such libel claims [against AI companies] are in principle legally viable," this particular lawsuit "should be hard to maintain." He pointed out that Walters suffered no actual damages as a result of ChatGPT's output and never notified OpenAI of the false statements, which would have given the company a chance to remove them.

Read more at The Verge here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan
