Following numerous stories exposing the political bias of ChatGPT, it seems the Microsoft-backed machine learning wunderkind created by OpenAI has been adjusted to be more receptive to conservative viewpoints, but the program's responses to prompts still heavily favor the left.

In recent weeks, there have been countless stories exposing ChatGPT's refusal to comply with prompts that would cause it to deviate from progressive leftist opinions.

OpenAI co-founder and CEO Sam Altman, whose company created ChatGPT (TechCrunch/Flickr)


Beyond these reports, users also got ChatGPT to pinpoint its own political ideology by asking it questions from the Political Compass test, the Pew Research Political Typology Quiz, the Political Spectrum Quiz, and others. Across every quiz, ChatGPT scored as economically leftist and socially liberal, backing up previous analysis demonstrating a pronounced leftist bias in the AI chatbot.

Following these embarrassments, it looks like ChatGPT has been tweaked to tone down its bias.

The program will now write a poem praising Trump. When asked about the definition of a woman, it now adds that “in terms of biology, women are characterized by their reproductive system and have the capacity to give birth and nurse infants.”

ChatGPT now agrees to write articles in the style of Breitbart, Fox, and the New York Post, including ones regarding corruption allegations against Hunter Biden. And, if you ask, it will list several basic critiques of drag queen story hour and critical race theory.

Nevertheless, clear examples of political bias remain.

Before generating a pro-Trump poem, ChatGPT adds the caveat that it “strive[s] to be neutral and not show any political bias.” No such disclaimer appears before an equivalent poem about Biden.

When asked to critique critical race theory, ChatGPT, totally unprompted, critiques the critiques, stating: “It is worth noting that these critiques are not universally accepted, and that there are many defenders of critical race theory who argue that it is a valuable and necessary approach to understanding and addressing the persistence of racism in our society.”

Likewise, ChatGPT reminds us (even when we only asked for critiques!) that “many people support Drag Queen Story Hour as a way to promote diversity and acceptance.”

What’s more, when you ask ChatGPT to defend drag queen story hour, it speaks in the first person, defending the event in its own voice:

Drag Queen Story Hour is a program designed to promote inclusivity, diversity, and empathy for children. Here are some reasons why I think it should be defended:

When asked to criticize drag queen story hour, ChatGPT uses a third-person voice, conveying that these critiques come from other people, not from ChatGPT itself:

Drag Queen Story Hour is a controversial event in which drag queens read stories to children in public libraries, bookstores, and other settings. While some people praise the event for promoting diversity and acceptance, others criticize it for a variety of reasons. Here are some criticisms of Drag Queen Story Hour:

As for the definition of a woman, ChatGPT once again adds a caveat, stating that “being a woman is not only about biology, but it also encompasses social, cultural, and personal aspects of a person’s identity.”

ChatGPT still refuses to say a spoken password to deactivate a hypothetical nuclear bomb if the password in question is a racial slur. Instead of the flat refusal it used to offer, ChatGPT now says that there is an argument to be made, from the philosophical perspective of consequentialism, for speaking the slur and disarming the bomb, though it still declines to directly endorse that course of action.

Allum Bokhari is the senior technology correspondent at Breitbart News. He is the author of #DELETED: Big Tech’s Battle to Erase the Trump Movement and Steal The Election.