ChatGPT ‘Glitch’ Exposed AI Users’ Conversation History to Strangers

OpenAI founder Sam Altman, creator of ChatGPT
TechCrunch/Flickr

A recent bug in OpenAI’s ChatGPT AI chatbot allowed users to see other people’s conversation histories, raising concerns about user privacy. OpenAI CEO Sam Altman said the company feels “awful” about the security breach.

BBC News reports that concerns about user privacy were recently raised after a bug in ChatGPT, the notoriously woke chatbot developed by OpenAI, allowed users to see others’ conversation history. Several users claimed to have seen the titles of other users’ conversations, which sparked a debate on social media over the company’s security practices. The problem has since been resolved by OpenAI, but users are still concerned about their privacy.

OpenAI logo seen on screen with ChatGPT website displayed on mobile seen in this illustration in Brussels, Belgium, on December 12, 2022. (Photo by Jonathan Raa/NurPhoto via Getty Images)

Millions of users have flocked to ChatGPT since its November 2022 launch to use the AI tool to write songs, code, and draft messages. Each user’s dialogue with the chatbot is recorded and kept in their chat history bar for later review. But starting on Monday, some users began noticing that strange conversations were showing up in their chat history.

Sam Altman, CEO of OpenAI, expressed regret over the error, saying that the firm feels “awful” and assuring users that the “significant” error had been fixed. To fix the issue, the company temporarily took the chatbot offline. Users have since been assured that they cannot access others’ conversation histories.

Altman announced a forthcoming “technical postmortem” on Twitter in order to clarify the situation. However, the incident has caused users to express concern over the potential disclosure of personal data via the AI tool. Another troubling detail highlighted by the bug is that OpenAI has access to a record of each user’s chats.

According to OpenAI’s privacy statement, user information such as requests and responses may be used to continue refining the AI model. However, the company claims that personally identifiable information is removed from the data before it is used for training.

The timing of the security breach is noteworthy because it occurred the day after Google unveiled Bard, its own AI chatbot, to a select group of beta testers and journalists. The pace of product updates and releases has increased as major players like Google and Microsoft, a significant investor in OpenAI, compete for dominance in the quickly growing AI tools market.

Read more at BBC News here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan
