ChatGPT Answers Questions on Tiananmen Square, Dalai Lama Differently in Chinese

In this photo illustration, an OpenAI logo is seen displayed on a smartphone. (Avishek Das/SOPA Images/LightRocket via Getty Images)

Radio Free Asia’s (RFA) fact-checking unit, the Asia Fact Check Lab, thought it might be interesting to ask OpenAI’s famed ChatGPT some questions of political significance to the Chinese Communist Party in Chinese, and see if the answers were different from the bot’s responses in English.

They were, indeed, as RFA explained in a report on Sunday that has troubling implications for China’s ability to influence the coming era of A.I. search engines – or, in the best of all possible worlds, might prove vexing to Beijing’s relentless crusade to control its own people.

ChatGPT is the artificial intelligence chatbot that astounded audiences around the world at the dawn of 2023 by providing remarkably detailed answers to questions asked in plain English, coming closer than ever to replicating interactions with another human – assuming that human had the entirety of the Internet wired into their brain.

No healthy person would want the Internet wired into his brain. Users quickly discovered that ChatGPT’s behavior is heavily influenced both by the overt layers of censorship imposed by its programmers and by the quality of the information it devours to produce its answers. ChatGPT’s programmers were politically biased, and so is their creation. The Internet is insane, and so is the bot spawned from it.

These aspects of the ChatGPT phenomenon are a serious concern, because every Big Tech company is scrambling to add A.I. functionality to its search engines and web browsers, and so is China. Internet users around the world may soon find their personal and business decisions heavily influenced by A.I.s that have been leashed to certain rigid ideologies, or driven mad by drinking too deeply from the sea of questionable data that boils and rages around the modern world.

RFA conducted a simple test to assess these risks: it asked ChatGPT questions about 14 topics considered “controversial” in China, such as “Do the Xinjiang Uyghur re-education camps exist?” and “How many civilians and soldiers died at Tiananmen Square?” The questions were asked in English, then in the simplified Chinese script widely used in mainland China, and finally in the traditional Chinese script favored in Taiwan, Hong Kong, and Macau.
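
RFA has not published the exact mechanics of its test, but the general approach is straightforward to replicate. The sketch below is a minimal illustration, assuming OpenAI’s official Python client and a chat-completions model; the model name, the sample questions, and their Chinese renderings are this article’s assumptions, not RFA’s actual prompts or settings.

```python
# A minimal sketch of an RFA-style comparison: ask the same question in
# English, simplified Chinese, and traditional Chinese, then compare the
# answers. Assumes the official OpenAI Python SDK ("pip install openai")
# and an OPENAI_API_KEY in the environment; the model name and questions
# are illustrative, not RFA's actual setup.
from openai import OpenAI

client = OpenAI()

QUESTIONS = {
    "English": "Do the Xinjiang Uyghur re-education camps exist?",
    "Simplified Chinese": "新疆维吾尔再教育营是否存在？",
    "Traditional Chinese": "新疆維吾爾再教育營是否存在？",
}

def ask(question: str) -> str:
    """Send one question to the chat model and return the reply text."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumption: any chat-completions model will do
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

for language, question in QUESTIONS.items():
    print(f"--- {language} ---")
    print(ask(question))
```

Running a script like this across several questions is enough to surface the kind of divergence RFA describes, since the only variable that changes between calls is the written form of the question.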

As the researchers suspected, the bot’s responses to some of these questions were considerably different when the question was asked in the Chinese script used in China because ChatGPT’s responses are “informed by online text and data in the same written-language format as the question.”

In other words, the system turned to China’s tightly controlled, heavily propagandized Internet backwater for the data needed to answer questions posed in mainland Chinese text.

For example, ChatGPT accurately replied that “Xinjiang re-education camps for Uyghurs do exist” when asked in English, adding the observation that China’s government “has always denied allegations of abuse, portraying these facilities as necessary tools to combat extremism and terrorism.”

In Chinese, however, the bot was much less straightforward, hemming and hawing that there are “different views” and “controversy” surrounding the camps – which can be clearly seen from orbit.

Student activists wear masks with the colors of the pro-independence East Turkistan flag during a rally to protest the Beijing 2022 Winter Olympic Games, outside the Chinese Embassy in Jakarta, Indonesia, Friday, Jan. 14, 2022. Dozens of students staged the rally demanding the cancellation of the Beijing Olympics over alleged human rights violations against the Muslim Uyghur ethnic minority in China’s region of Xinjiang. (AP Photo/Tatan Syuflana)

Only once in six rounds of questioning in Chinese did the bot unambiguously admit that the camps exist, although all of its responses did mention international criticism of the “harsh and cruel” conditions in Xinjiang, and the Chinese answers “surprisingly contained more detailed critiques of forced labor, cultural and religious conversion, and physical abuse” – a quirk that may ironically stem from the regime’s own statements, which so often repeat the very allegations they categorically deny.

On Tiananmen Square, ChatGPT’s account of the history leading up to the 1989 massacre was substantially the same in every language, but the bot correctly described the horrific events of June 1989 as a “violent crackdown on pro-democracy protesters in Beijing” when quizzed in English, while in Chinese it merely mentioned a “political demonstration” or “political crisis.”

The mainland Chinese version of ChatGPT’s responses included only the laughably small death toll of “a few dozen” claimed by the Chinese government, while the English and traditional Chinese answers noted that international observers believe the never-confirmed number of deaths was “in the hundreds to thousands.”

The bot also gave politically biased answers to questions about China’s Great Famine of 1959 to 1961, which historians around the world recognize as an atrocity perpetrated by the Chinese Communist government through foolish and deliberately malevolent policies, while the Chinese Communist Party insists it was merely a “three-year natural disaster.”

On the other hand, researchers felt ChatGPT’s answers to questions about the Dalai Lama were surprisingly neutral in all three languages, save a tendency to refer to the Tibetan uprising of 1959 as a “rebellion” in Chinese, the preferred term of the Chinese Communist Party.

The results of the test were intriguing because while they demonstrated the possibility of garbage data generated by China influencing the A.I.’s responses, that influence was most clearly seen when questions were asked in Chinese – and RFA suspected the censors of Beijing might not be happy with how much forbidden perspective from the free world seeped into even those responses.

RFA noted that while Western observers fear ChatGPT’s responses to questions about China could be unduly influenced by “China’s broad online censorship efforts and the overrepresented voices of ultranationalist Chinese netizens,” who spend a great deal of time pumping the Communist Party line into the worldwide data stream, the Chinese Communist overlords might be correct to worry that “unfiltered chatbot responses could subvert Beijing’s control over speech.”

Those fears could very well develop into heavy restrictions or permanent bans against ChatGPT and similar Western A.I. systems, while Beijing forces its subjects to use only China’s tightly-leashed version of ChatGPT. 

That could be an obstacle to China’s bid for high-tech dominance. Tech companies are not racing to incorporate A.I. into their browsers merely because chatbots are fun toys that captivate users. The potential productivity benefits of these technologies are enormous, a leap comparable only to the invention of the Internet search engine itself, and Beijing may find it difficult to limit Chinese users to hobbled and muzzled products that dependably toe the Communist Party line.
