Twitter’s History of Erratic Censorship Policies

LEON NEAL/AFP/Getty Images

Some might question the use of the term “censorship” to describe Twitter’s mysterious and erratic policies for banning users and blocking content, since Twitter is a private company and its users aren’t even paying anything for their accounts.

As long as the government isn’t involved, the argument goes, content suppression and user sanctions aren’t really “censorship.”  Anyone who disagrees with the policies of any given social media platform is free to go elsewhere.

We are also free to criticize the decisions of these social media companies, especially since they attract users with promises of privacy and celebrations of lively discourse.  It’s hypocritical in the extreme for providers to posture as if they were the digital inheritors of the Enlightenment legacy, but then carry on like a gang of Soviet apparatchiks, muzzling voices they disagree with and conducting ideological purges.

Twitter’s bizarre unverification of Breitbart Tech editor Milo Yiannopoulos isn’t the first time the social-media giant has made questionable decisions about content or users.

It was only last month that actor Adam Baldwin’s account was locked for purely ideological reasons.  He posted a message mocking social justice warriors that the Twitter powers-that-be didn’t like, even though it didn’t violate their terms of service.  Baldwin had to delete the tweet in question before his account was unlocked.

As with Milo’s unverification, Twitter didn’t want to discuss the criteria behind its decision to lock Baldwin’s account.  “Privacy and security” are the reasons always cited for these refusals to comment.

Another apparently ideological banning struck a user called LeoPirate from the GamerGate movement (of which Adam Baldwin is a prominent member).  LeoPirate’s account was suspended several times, without explanation, before he finally gave up on Twitter altogether.

Let us grant that Terms of Service violations are a serious issue, and that a platform as big as Twitter must process a high volume of TOS violation complaints, including many false claims made to harass and silence certain users.  Even so, there is no reason suspected TOS violations (short of obvious incitements to violence, publication of obscene material, and so forth) should be met with summary punitive action, without giving the subject a clear notion of which rules were violated and a chance to defend himself.  “You violated a rule, but we’re not going to tell you which one” is a very low standard of professionalism for gigantic companies.

Sometimes Twitter takes punitive action without notifying anyone, including the subject.  The practice of “shadowbanning” involves quietly hiding tweets in certain regions, with some effort put into preventing the target from realizing his posts have been deleted or made invisible to many other users.

In a similar vein, Twitter has been known to use automated filters that will render tweets containing “abusive” language invisible.  The user is given no indication that anything is wrong – it looks like he’s successfully tweeted a message, but no one else ever sees it.  After the Paris terror attack in November, media organizations noticed that automated filters were blocking images and keywords deemed “sensitive,” including gruesome photos and keywords thought to be used by ISIS supporters.

Such tools are supposed to cut down on harassment and abusive tweets, with the understanding that review by human administrators is impossible – Twitter processes around 500 million messages per day.  However, the potential for tweaking these automatic filters to expand the definition of “abusive language” and turn them into tools of ideological oppression is clear.
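
To make the mechanism concrete, here is a minimal, purely illustrative sketch in Python of how a keyword filter could silently suppress a tweet while still showing the author a normal success response.  This is not Twitter’s actual code; the blocklist, data structures, and function names are all invented for illustration.

```python
# Illustrative sketch only -- not Twitter's actual code.  It shows how a naive
# keyword filter could silently suppress a tweet: the author gets a normal
# "success" response, but the message never reaches anyone else's timeline.

BLOCKED_KEYWORDS = {"exampleslur", "examplethreat"}  # hypothetical blocklist

public_timeline = []   # tweets visible to everyone
author_timelines = {}  # tweets visible only to their own author


def post_tweet(author: str, text: str) -> dict:
    """Accept a tweet and decide where it is shown."""
    suppressed = any(word in text.lower() for word in BLOCKED_KEYWORDS)

    # The author always sees the tweet in his own timeline...
    author_timelines.setdefault(author, []).append(text)

    # ...but a "suppressed" tweet is never added to the public timeline.
    if not suppressed:
        public_timeline.append((author, text))

    # Crucially, the response looks identical either way.
    return {"status": "ok", "author": author}


if __name__ == "__main__":
    print(post_tweet("alice", "Hello world"))             # visible to all
    print(post_tweet("bob", "an exampleslur goes here"))  # silently hidden
    print("Public timeline:", public_timeline)
    print("Bob's own view:", author_timelines["bob"])
```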

Free-speech advocates have worried that Twitter’s decision to work with certain activist groups to crack down on “harassment” can give those groups an unhealthy degree of influence over the platform, as they file a high volume of dubious harassment charges to silence people they don’t like.  The results can be disturbingly similar to the “safe space” and crybully censorship sweeping college campuses, in which the definition of harassment is slowly expanded to include much more than vile slurs, clear-cut attempts at intimidation, and violent threats.  Complaining about bullies turns out to be a very effective means of bullying people.

Some Twitter censorship campaigns have been conducted by bypassing human administrators and deliberately abusing automated response systems.  A few years ago, there was a rash of incidents known as “spam-flagging,” in which organized mobs of Twitter users drove targeted individuals off the service by marking a large number of their messages as “spam,” or unsolicited advertising.  Twitter had introduced a “Block and Report Spam” feature to crack down on bot programs that were spewing ad messages into users’ timelines.  It didn’t take long for activist mobs – usually left-wing mobs targeting conservatives – to realize they could use this reporting feature to lock the accounts of their enemies.
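
As a rough illustration of why this kind of brigading works, consider a minimal Python sketch of an automated response system that locks an account once a fixed number of distinct users report it as spam.  This is not Twitter’s real anti-spam logic; the threshold and names are invented, but the point stands: an automated rule cannot tell genuine reports from a coordinated mob.

```python
# Illustrative sketch only -- not Twitter's real anti-spam logic.  It shows why
# a simple "N reports triggers an automatic lock" rule is easy for an organized
# mob to game: the threshold is the same whether the reports are genuine or
# coordinated.

from collections import defaultdict

REPORT_THRESHOLD = 20            # hypothetical auto-lock threshold
spam_reports = defaultdict(set)  # target account -> set of reporting accounts
locked_accounts = set()


def report_spam(reporter: str, target: str) -> None:
    """Record a spam report and auto-lock the target past the threshold."""
    spam_reports[target].add(reporter)
    if len(spam_reports[target]) >= REPORT_THRESHOLD:
        locked_accounts.add(target)  # no human review before the lock


if __name__ == "__main__":
    # Twenty coordinated accounts are enough to lock one target.
    for i in range(REPORT_THRESHOLD):
        report_spam(f"mob_member_{i}", "targeted_user")

    print("targeted_user locked?", "targeted_user" in locked_accounts)  # True
```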

Twitter, along with many other popular social media platforms, has been criticized for being too willing to cooperate with government demands for censorship.

The French government’s desire to suppress certain material after the Paris terror attack, as mentioned above, is one example, but more authoritarian governments make even more aggressive censorship demands.  One controversial case was Twitter’s agreement, in the summer of 2014, to Pakistan’s demands to suppress “blasphemous” content.

That’s a very heavy censorship hand for a company whose CEO declared, just five years ago, “We’re the free speech wing of the free speech party.”  It’s possible to have robust free speech while policing the most obviously abusive, obscene, or threatening language, but Twitter is looking more and more like a campus “safe space,” with the attendant abuses… and even less willingness on the part of administrators to justify their actions.
