‘We Made a Mistake:’ OpenAI Tries to Explain Away ChatGPT’s Wokeness After Musk’s Criticism

SpaceX owner and Tesla CEO Elon Musk poses on the red carpet in Berlin, Germany, December 1 (Britta Pedersen-Pool/Getty Images)

A co-founder of OpenAI, the company behind the AI chatbot ChatGPT, recently admitted that the firm “made a mistake” by going woke and that the chatbot’s system “did not reflect the values we intended to be in there,” following accusations of political bias.

Business Insider reports that one of OpenAI’s founders has acknowledged that the firm “made a mistake” with ChatGPT, an AI chatbot that has come under fire for producing politically biased responses. Greg Brockman, the co-founder and president of the business, made the admission in response to Elon Musk’s criticism after Musk cut ties with OpenAI.

Musk had previously criticized OpenAI for putting safeguards in place that give ChatGPT a blatantly leftist bias. He also warned that teaching AI to be “woke,” or to lie, could be fatal.

OpenAI founder Sam Altman, creator of ChatGPT (TechCrunch/Flickr)

Elon Musk, chief executive officer of Tesla Inc., speaks via video link during the Qatar Economic Forum in Doha, Qatar, on Tuesday, June 21, 2022 (Christopher Pike/Bloomberg)

In an interview with The Information, Brockman said: “We made a mistake: The system we implemented did not reflect the values we intended to be in there, and I think we were not fast enough to address that. And so I think that’s a legitimate criticism of us.”

Users have criticized ChatGPT, claiming that it produces politically biased answers. Screenshots of a ChatGPT conversation that circulated on Twitter last month showed the AI refusing to write a poem about former president Donald Trump in a positive light, stating that the chatbot wasn’t allowed to create “partisan, biased or political” content. However, ChatGPT was more than happy to produce a flattering poem when given the same prompt about current president Joe Biden. The chatbot’s refusal to produce a poem about Trump was dubbed “a serious concern” by Musk.

ChatGPT is still a work in progress, and based on Brockman’s remarks, it appears that the platform will keep changing. Brockman told The Information: “Our goal is not to have an AI that is biased in any particular direction. We want the default personality of OpenAI to be one that treats all sides equally. Exactly what that means is hard to operationalize, and I think we’re not quite there.”

The potential for AI to reinforce bias is one of the technology’s primary problems. An AI system trained on data that is biased against a particular group of people is likely to reproduce that bias. This is a major worry, especially given that AI may soon aid hiring and other decision-making procedures. In short, a classic problem of technology, “garbage in, garbage out,” has become “bias in, bias out.”
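The dynamic is straightforward to demonstrate. The following sketch, a purely hypothetical illustration in Python using scikit-learn (the “group” and “skill” features and the label rates are invented, not drawn from any real system), shows how a model trained on skewed hiring data reproduces that skew:

```python
# Hypothetical sketch of "bias in, bias out": a toy classifier trained on
# skewed historical hiring data reproduces the skew. The feature names and
# rates below are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, size=n)   # a protected attribute: 0 or 1
skill = rng.normal(size=n)           # identically distributed in both groups

# Biased training labels: group 1 was historically hired less often,
# even at the same skill level.
hired = (skill + np.where(group == 1, -0.8, 0.0)
         + rng.normal(scale=0.5, size=n)) > 0

model = LogisticRegression().fit(np.column_stack([group, skill]), hired)

# Two candidates with identical skill, differing only in group membership:
candidates = np.array([[0, 1.0], [1, 1.0]])
print(model.predict_proba(candidates)[:, 1])  # group 1 scores markedly lower
```

Nothing in the code tells the model to discriminate; the skew comes entirely from the training labels, which is the sense in which “bias in” becomes “bias out.”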

One of the challenges in combating bias in AI is that it can be hard to spot. Leftist efforts to control AI often take the form of “machine learning fairness.” As Breitbart News reporter Allum Bokhari wrote:

The field of machine learning (ML) fairness, much like the vast empire of “disinformation” studies that emerged after 2016, exists for a single purpose: to guarantee the technology of the future upholds leftist narratives. It is an attempt by left-wing academics to merge the field of machine learning, which deals with the creation and training of AI systems, with familiar leftist fields: feminism, gender studies, and critical race theory.

The most vocal proponents of ML fairness use a rhetorical motte-and-bailey strategy to advance their cause. Often, they will lead their presentations with examples of AI errors that seem reasonable to correct, like facial recognition software failing to recognize darker skin tones. Crucially, they do not present these as simple problems of inaccuracy, but problems of unfairness.

This is on purpose — lurking behind these inoffensive examples are much more dubious goals, that actually push AI to make inaccurate conclusions, and ignore certain types of data, in the name of “fairness.” And their work doesn’t result in dry academic papers — it results in the documented leftist bias of consumer-level programs like ChatGPT, a bias the corporate media is actively defending.
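For readers curious what the “inaccuracy versus unfairness” framing in the quoted passage looks like in practice, here is a minimal, hypothetical Python sketch that computes per-group error rates on entirely synthetic data; the group labels and accuracy rates are invented for illustration:

```python
# Hypothetical sketch of a per-group accuracy audit, the kind of check the
# quoted passage describes (e.g. face recognition error rates by skin tone).
# All data here is synthetic; the groups and rates are invented.
import numpy as np

rng = np.random.default_rng(1)
groups = np.array(["A"] * 500 + ["B"] * 500)
truth = rng.integers(0, 2, size=1000).astype(bool)

# Simulate a system that is simply less accurate on group B.
p_correct = np.where(groups == "A", 0.95, 0.80)
predicted = np.where(rng.random(1000) < p_correct, truth, ~truth)

for g in ("A", "B"):
    mask = groups == g
    accuracy = (predicted[mask] == truth[mask]).mean()
    print(f"group {g}: accuracy {accuracy:.2%}")  # roughly 95% vs 80%
```

The same gap can be framed as an accuracy problem (fix the model and the data) or as a fairness problem (change what the system is allowed to conclude), and, as Bokhari argues, which framing prevails determines what intervention follows.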

Read more at Business Insider here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan
