Australian Mayor Threatens to Sue OpenAI After ChatGPT Falsely Accuses Him of Bribery

OpenAI co-founder and CEO Sam Altman, whose company created ChatGPT (TechCrunch/Flickr)

The mayor of Hepburn Shire Council in Australia, Brian Hood, has threatened to sue OpenAI over ChatGPT after the AI falsely claimed he was guilty of bribery and corruption in a case where he was in fact the whistleblower.

The Washington Post reports that Australian mayor Brian Hood is threatening to sue OpenAI, the maker of ChatGPT, after the AI chatbot falsely accused him of bribery and corruption. Hood was in fact a whistleblower and a key figure in exposing a global bribery scandal linked to a subsidiary of the Reserve Bank of Australia. The potential defamation lawsuit could test whether AI operators can be held liable for defamatory statements generated by their chatbots.

OpenAI logo seen on screen with ChatGPT website displayed on mobile seen in this illustration in Brussels, Belgium, on December 12, 2022. (Photo by Jonathan Raa/NurPhoto via Getty Images)

In an interview, Hood expressed his shock and dismay, stating, “To be accused of being a criminal — a white-collar criminal — and to have spent time in jail when that’s 180 degrees wrong is extremely damaging to your reputation. Especially bearing in mind that I’m an elected official in local government.” He further emphasized the need for proper control and regulation of AI chatbots as people increasingly rely on them for information.

The incident draws attention to a growing problem with AI chatbots disseminating inaccurate information about real people. In a recent case, ChatGPT fabricated sexual harassment allegations against a real law professor, citing a fake Washington Post article as its source.

Hood considers the website’s disclaimer, which warns that ChatGPT “may occasionally generate incorrect information,” to be insufficient. “Even a disclaimer to say we might get a few things wrong — there’s a massive difference between that and concocting this sort of really harmful material that has no basis whatsoever,” he said.

According to Oxford University computer science professor Michael Wooldridge, one of AI’s biggest flaws is its capacity to generate plausible but false information. He said, “When you ask it a question, it is not going to a database of facts. They work by prompt completion.” ChatGPT attempts to finish the sentence convincingly, not truthfully, based on the information that is readily available online. “Very often it’s incorrect, but very plausibly incorrect,” Wooldridge added.

In a letter to OpenAI, Hood’s attorneys demanded a correction of the falsehood. “The claim brought will aim to remedy the harm caused to Mr. Hood and ensure the accuracy of this software in his case,” Hood’s attorney James Naughton stated.

Read more at the Washington Post here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan
