Woke Silicon Valley Is Taking Notes: AI Chatbots from China and Russia Are Filled with Censorship and Propaganda

Spectators and participants fill Tiananmen Square during a 50th anniversary celebration (David Butow/Corbis via Getty Images)

China and Russia are reportedly using AI chatbots to control information and push government propaganda, posing a new threat to online freedom. Woke Silicon Valley giants that have already built leftist bias into their own offerings like ChatGPT are likely watching carefully to see how hostile foreign powers wield AI as an information weapon before the 2024 presidential election.

Wired reports that AI chatbots like ChatGPT are typically seen as friendly, helpful tools, guiding users through the vastness of data available online. However, a darker use has emerged in nations like China and Russia, where AI chatbots don’t merely share the leftist bias of their Western cousins but are being transformed into tools for censorship and vehicles for state propaganda.

INDIA – 2023/03/13: In this photo illustration, an Open AI logo is seen displayed on a smartphone with a ChatGPT writing in the background. (Photo Illustration by Avishek Das/SOPA Images/LightRocket via Getty Images)

China is leading the way with authoritarian AI. Chatbots like “Ernie,” developed by tech giant Baidu, are being programmed to withhold certain information from users. A query about a globally significant event, such as the Tiananmen Square massacre of 1989, is met with a blank “relevant information not available.” This isn’t a glitch or oversight. It’s a deliberate move, designed to align with the government’s strict narrative and censorship guidelines. Western tech giants are not immune to this pressure either. Earlier this year, ChatGPT was found to answer questions about Tiananmen Square differently when queried in Chinese.

In 2023, the Chinese government took a further step, introducing rules that required AI tools, including chatbots, to adhere to censorship guidelines and actively promote “core socialist values.” In practical terms, this means chatbots are prohibited from discussing or sharing information on sensitive topics, such as the ongoing persecution of Uyghurs and other minority groups in the country.

Russia, too, is navigating a similar path, albeit with its own strategies. Russian chatbots, like Alice, developed by Yandex, are notably reluctant to delve into sensitive or politically charged topics, such as Russia’s 2022 invasion of Ukraine. Whether this is due to a lack of relevant data, a policy of self-censorship, or a direct government order is unclear. The end result, however, is a clear restriction of information and a curtailment of unbiased knowledge sharing.

The initial hope that chatbots might serve as a tool to bypass traditional censorship and provide unfiltered information to those in repressive environments has been shot down in flames. Instead, these AI tools are being morphed into mechanisms that reinforce state narratives and suppress dissenting voices.

Read more at Wired here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.
