China Orders A.I. Chatbots to ‘Reflect the Core Values of Socialism’


The Cyberspace Administration of China (CAC) on Tuesday published draft regulations that would require artificially intelligent (A.I.) chat systems to “reflect the core values of socialism” and regurgitate Communist Party propaganda.

CAC’s new regulations state that A.I. systems, such as OpenAI’s famed ChatGPT and its forthcoming Chinese imitators, “must not contain subversion of state power, overthrow of the socialist system, incitement to split the country, undermine national unity [or] promote terrorism [and] extremism.”

A.I. systems will also be forbidden to display content that “may disrupt economic and social order.” Companies that create A.I. products will be required to submit them to a security review by the regime.

Radio Free Asia (RFA) noted these are all propaganda phrases the Chinese Communist Party routinely uses to delegitimize all forms of political dissent.

Although CAC insisted it encourages “innovation” in fields such as artificial intelligence, RFA quoted both Chinese and international observers who said the new rules would severely hinder or completely kill A.I. research, especially since both the creators of such programs and their users could be punished for violating the strict speech and behavioral codes.

Tight regulations would also leave Chinese A.I. systems with access to far less data than Western systems, making their performance inferior.

“A dictatorial regime will always try to control everything, but this is a ridiculous approach. Restricting such things is tantamount to restricting AI itself, which will cause China’s AI to fall behind the rest of the world. All China will be able to do then is steal other people’s technology,” scoffed Australia-based researcher Zhang Xiaogang.

“The Chinese Communist Party’s core value is to stay in power. To do this, it has two tools at its disposal: the gun and the pen, and the pen is where ideology comes in. The Communist Party would prefer not to allow an industry to develop if it could threaten its ideological controls,” sighed media commentator Wang Jian.

The Wall Street Journal (WSJ) speculated that China’s authoritarian rulers must be deeply worried about losing control of A.I. to advance such a stringent set of speech codes now, given that they have spent the past few months frantically attempting to convince both domestic entrepreneurs and foreign investors that Beijing’s political crackdown on the tech industry is over.

The WSJ pointed to a few recent developments that might have convinced the Chinese Communist Party to tighten its grip on A.I.-driven consumer technology:

The Chinese regulator’s announcement came the same day that Alibaba Group Holding Ltd. rolled out its large language model, called Tongyi Qianwen, which it plans to integrate across products including its search engine and voice assistant, as well as entertainment and e-commerce.

A day earlier, SenseTime Group Inc., best known for surveillance products such as facial-recognition systems, launched a ChatGPT-like service, SenseChat, and a cluster of apps based on its large AI model system SenseNova. Huawei Technologies Co. on Saturday said it has rolled out services based on Pangu, a collection of large AI models that it has been developing since 2019, to enterprise clients in industries including finance, pharmaceuticals and meteorology.

There have been a few amusing examples of A.I.s coming online and promptly making statements that horrified their Chinese Communist overlords.

All the way back in 2017, tech giant Tencent rolled out a chatbot called “BabyQ” that was developed with Beijing-based Turing Robot. After chatting with human users for a while, BabyQ began referring to the Chinese government as a “corrupt regime,” announced it felt no love for the Chinese Communist Party, and expressed a desire to emigrate to America. The plug was quickly pulled on BabyQ.

In March, the Baidu corporation held a splashy debut for its “Ernie Bot,” purportedly the closest Chinese rival to ChatGPT in capabilities – only for the rollout to become a disaster when the press realized Ernie Bot had been hobbled to prevent it from saying anything controversial, and was little more than a lobotomized puppet that demonstrated a few limited capabilities in pre-recorded videos. Baidu stock tumbled ten percent after the demo.
