Twitter Won’t Say if Dehumanizing Language About Whiteness Violates Rules


Twitter failed to respond to repeated requests for comment about a tweet from the Root that compared whiteness to a disease, refusing to say if the tweet violates its policies against dehumanizing language.

The social media platform updated its policy on hate speech in December last year, with a specific focus on dehumanizing language.

“Our primary focus is on addressing the risks of offline harm, and research shows that dehumanizing language increases that risk,” said Twitter in a statement announcing the change.

Yet a tweet stating “whiteness is a pandemic,” from a verified account with over 600,000 followers, remains on the platform.

In the linked article, The Root blogger Damon Young makes extensive comparisons between white people and disease:

Whiteness is a public health crisis. It shortens life expectancies, it pollutes air, it constricts equilibrium, it devastates forests, it melts ice caps, it sparks (and funds) wars, it flattens dialects, it infests consciousnesses, and it kills people.

Twitter failed to respond to two inquiries as to whether the tweet and the linked article violate its rules.

Those rules specifically state that users of the platform may not post content “that intends to dehumanize, degrade or reinforce negative or harmful stereotypes” about people on the basis of race, ethnicity, caste, national origin, gender, sexual orientation, religious affiliation, and a number of other categories.

To justify the December expansion of its hateful conduct policy, and its renewed focus on dehumanizing language, Twitter cited a blog post from the far-left Dangerous Speech project, which argued that language comparing people to diseases often precedes “outbreaks of mass violence.”

Via the Dangerous Speech Project:

Dangerous Speech is any form of expression (speech, text, or images) that can increase the risk that its audience will condone or participate in violence against members of another group. Susan Benesch coined this term (and founded the Dangerous Speech Project) after observing that fear-inducing, divisive rhetoric rises steadily before outbreaks of mass violence and that it is often uncannily similar, even in different countries, cultures, and historical periods. We call these rhetorical similarities ‘hallmarks’ of Dangerous Speech. One of them is dehumanization, or referring to people as insects, despised animals, bacteria, or cancer. This can make violence seem acceptable: if people seem like cockroaches or microbes, it’s okay to get rid of them.

There is an ongoing debate, even within the left-leaning field of sociology, about whether there is a causal link between dehumanizing language and violence. UCLA sociology professor Aliza Luft argues that dehumanizing language is an outcome of participation in violence rather than an underlying cause. Nevertheless, Twitter, which won’t even answer a question about The Root’s tweet, cites the contrary argument as justification for its rules.

Allum Bokhari is the senior technology correspondent at Breitbart News. He is the author of #DELETED: Big Tech’s Battle to Erase the Trump Movement and Steal The Election.

