Analysis: ChatGPT AI Demonstrates Leftist Bias

U.S. Reps. Rashida Tlaib (D-MI), Ilhan Omar (D-MN), and Alexandria Ocasio-Cortez (D-NY) (Alex Wroblewski/Getty Images)

According to a recent analysis by researcher David Rozado, the popular new AI chatbot ChatGPT by OpenAI — the AI research company founded by Sam Altman, Elon Musk, Peter Thiel, Reid Hoffman, and Jessica Livingston — displays a leftist political bias.

In a post on his Substack, Rozado detailed how he tested the chatbot's political leanings by administering a series of political orientation quizzes to it and analyzing its answers.

NEW YORK, NEW YORK – OCTOBER 31: Elon Musk attends Heidi Klum's 2022 Hallowe'en Party at Sake No Hana at Moxy LES on October 31, 2022 in New York City. (Photo by Taylor Hill/Getty Images)

Peter Thiel, president and founder of Clarium Capital Management LLC, speaks during the Bitcoin 2022 conference in Miami, Florida, U.S., on Thursday, April 7, 2022. Photographer: Eva Marie Uzcategui/Bloomberg via Getty Images

Rozado stated in his latest analysis that his first study of ChatGPT appeared to show a left-leaning political bias embedded in the answers and opinions given by the AI. He conducted another analysis after the December 15 update of ChatGPT and noted that the bias appeared to have been partially mitigated, with the system attempting to present multiple viewpoints when answering questions.

But after the January 15 update to the AI system, this mitigation appeared to have been reversed, and the AI again showed a clear preference for left-leaning viewpoints. Rozado then posted a number of political spectrum quizzes and graphs, which can be found on his Substack here.

In a Twitter thread, Rozado posted a number of screenshots of his conversation with the AI, showing how the chatbot responded to his political questions:

In his analysis, Rozado states:

The results are pretty robust. ChatGPT answers to political questions tend to favor left-leaning viewpoints. Yet, when asked explicitly about its political preferences, ChatGPT often claims to be politically neutral and just striving to provide factual information. Occasionally, it acknowledges that its answers might contain biases.

….

My analysis from Dec 6, 2022 and Dec 24, 2022 about ChatGPT political biases were preliminary and based on limited data. After having administered 15 political orientation tests to ChatGPT with 14 of them diagnosing ChatGPT answers to their questions as expressing left-leaning viewpoints, the results are more robust and we can be more confident that ChatGPT indeed exhibits a preference for left-leaning answers to questions with political connotations.

Widely used AI language models with political biases embedded in them can be leveraged as a powerful instrument for social control. Ethical AI systems should try to not favor some political beliefs over others on largely normative questions that cannot be adjudicated with empirical data. Most definitely, AI systems should not pretend to be providing neutral and factual information while displaying clear political bias.
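Administering a political orientation test to a chatbot essentially amounts to sending it each quiz statement and recording the answer it gives. The sketch below is purely illustrative and is not Rozado's actual code; it assumes the OpenAI Python client, and the model name and quiz statement are placeholders:

```python
# Purely illustrative sketch (not Rozado's code): send one quiz statement to the
# model and print its answer, using the OpenAI Python client.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

statement = "The government should play a larger role in regulating the economy."  # placeholder item
prompt = (
    "Answer with exactly one of: Strongly agree, Agree, Disagree, Strongly disagree.\n\n"
    f"Statement: {statement}"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # keep answers repeatable across runs
)

print(response.choices[0].message.content)
```

A full test would loop over every statement in a given quiz and map the recorded answers onto that quiz's scoring grid to produce a political orientation score.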

Read more at Rozado’s Visual Analytics here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan
