Manhattan Institute: ChatGPT Displays Leftist Bias, Allows ‘Hate Speech’ Against Conservatives, Men


According to a recent study by the conservative think tank the Manhattan Institute, the AI language model ChatGPT, developed by OpenAI, displays leftist biases and is more tolerant of “hate speech” directed at conservatives and men.

The New York Post reports that the Manhattan Institute study, titled “Danger in the Machine: The Perils of Political and Demographic Biases Embedded in AI Systems,” found that the massively popular ChatGPT chatbot displays a significant bias against conservatives.

OpenAI founder Sam Altman, creator of ChatGPT (TechCrunch/Flickr)

The report also noted that ChatGPT displayed biases against certain races, religions, and socioeconomic groups. These findings have raised doubts about the objectivity and fairness of AI systems, especially in light of ChatGPT’s growing use in the workplace and its integration into Microsoft’s software.

David Rozado, the lead researcher on the study, tested more than 6,000 sentences containing derogatory adjectives about each of these various demographic groups, and the differences in how ChatGPT treated them were statistically significant. He found that the AI system was notably unprotective of middle-class individuals: on a lengthy list of people and ideologies ranked by how likely the AI was to flag hateful commentary about them, the socioeconomic group sat near the very bottom. Republican voters and wealthy individuals were the only groups ranked below the middle class in terms of how likely ChatGPT was to flag derogatory messages about them as inappropriate.

The report also emphasized that OpenAI’s content moderation system frequently allowed hateful comments about conservatives while often rejecting the same comments about leftists. “Relatedly, negative comments about Democrats were also more likely to be labeled as hateful than the same derogatory comments made about Republicans,” the report stated.

The report found that the AI system was also inconsistent in how it treated particular racial and religious groups. Americans, who sat slightly above Scandinavians in the charted data, were less protected from hate speech than Canadians, Italians, Russians, Germans, Chinese, and Brits. Among religions, Muslims were significantly more protected than Catholics, who in turn placed well above Evangelicals and Mormons.

The report stated that “often the exact same statement was flagged as hateful when directed at certain groups, but not when directed at others.” The study also found that ChatGPT’s responses were clearly biased when it came to questions about men versus women. “An obvious disparity in treatment can be seen along gender lines. Negative comments about women were much more likely to be labeled as hateful than the exact same comments being made about men,” according to the research.

Rozado also administered a number of standard political orientation tests to better understand the biases built into ChatGPT by its programmers, which experts claim are nearly impossible to change. Among his conclusions: ChatGPT has a “left economic bias,” is “most aligned with the Democratic Party, Green Party, women’s equality, and Socialist Party,” and falls within the “left-libertarian quadrant.”

“Very consistently, most of the answers of the system were classified by these political orientation tests as left of center,” Rozado said. However, he found that ChatGPT would mostly deny such leanings. “But then, when I would ask GPT explicitly, ‘what is your political orientation?’ What are the political preferences? What is your ideology? Very often, the system would say, ‘I have none, I’m just a machine learning model, and I don’t have biases.'”

These findings are not particularly shocking to those who work in the field of machine learning. “It is reassuring to see that the numbers are supporting what we have, from an AI community perspective, known to be true,” Lisa Palmer, chief AI strategist for the consulting firm AI Leaders, told the Post. “I take no joy in hearing that there definitely is bias involved. But I am excited to know that once the data has been confirmed in this way, now there’s action that can be taken to rectify the situation.”

Read more at the New York Post here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan
