Bokhari: Social Media Platforms Don’t Actually Need to Ban Anyone


President Trump’s recent comments on social media censorship suggest he doesn’t think social media platforms should be allowed to ban anyone for any reason. Some would call this a radical proposal, but they’re wrong. Social media companies could easily move to a “no bans” policy, with little disruption to their users.

Trump ramped up his public denunciations of big tech’s far-left bias in recent weeks, following the unprecedented censorship of prominent conservatives in the run-up to the midterm elections.

In a series of tweets on August 18, the President accused social media giants of “totally discriminating” against conservatives and Republicans, and called on tech giants to “Let everybody participate, good & bad, and we will all just have to figure it out!”

He reiterated his comments a few days later, in an interview with Reuters, stating “I think it’s a very dangerous thing when they are their own regulator in terms of who’s going to be on Facebook and who’s going to be on Twitter. I think that whether it’s conservative or liberal, I think that it’s very dangerous.”

At a rally in West Virginia the following day, the President said that social media has, in effect, converted ordinary citizens into journalists who deserve no fewer free speech protections than CNN.

“Every one of us is sort of like a newspaper, you have Twitter… Facebook … you can’t have censorship. I’d rather have fake news like CNN than have anybody – including liberals, socialists … than have anybody stopped and censored.”

The implication of Trump’s words is clear — social media companies should be prohibited from banning anyone for lawful speech. It should instead be up to users to decide what content they want to see on social media (“we will all just have to figure it out!”).

This sounds like a radical proposal, but Trump is perfectly correct. There is no reason for a social media company to ban any content on behalf of its users. Not even spam!

In a previous column, I described how top-down bans could be replaced by optional content filters. Instead of giving social media companies the power to ban content — something they clearly can’t be trusted to use responsibly — users would instead have to manually consent to filter certain types of speech from their feeds.

Having to press a button marked “block spam” (or “block hate speech,” if you’re in the snowflake minority) would be a minor inconvenience in exchange for a clear, simple limitation on the unchecked power of Silicon Valley. Users, not a handful of big tech executives, would be placed in charge of what they see on social media. It would transfer authority from elites to the people.

There would still be problems — content could still be unfairly and incorrectly filtered by tech giants (just last week, a pro-Trump article in the New York Post was repeatedly flagged as “spam” by Facebook). Nevertheless, especially when coupled with strong transparency requirements (more on these below), it would still be a substantial reduction in the power of Facebook, Twitter, YouTube and other platforms to censor content.

It can also be summed up in a very simple slogan: “ban the bans.”

The policy has three essential components:

1. Social media platforms must not ban users for lawful content

This is the simplest component of the policy. If the law protects it, social media platforms must allow it. Illegal content like phishing, illegal software, child abuse, and incitement to violence could still be removed.

2. Content filters must be strictly opt-in 

Without this proviso, there would be relentless pressure from leftists within Twitter, Facebook, YouTube and other companies to create a vast range of filters for politically incorrect content, before switching them on for all their users — much like Twitter’s so-called “quality filter,” which is enabled by default.

Like banning, there’s no justification for social platforms switching on filters on behalf of their users beyond “we know better!” Users, especially novices and casual users, shouldn’t have to dig into their settings to switch off a filter every time a social platform decides there’s a new “phobia” or “ism” they shouldn’t be allowed to see.

3. Filtered content must be clearly labeled

This is the transparency requirement, and it is another vital constraint on Silicon Valley. Replacing bans with opt-in filters would be a huge blow to Silicon Valley’s power, but platforms could still damage their political opponents by unfairly and covertly filtering their content (a form of shadowbanning). To deny them this power, social media platforms should be legally obliged to label any filtered content.

In other words, if you switched all your filters off, you’d be able to see what content would be hidden once you turned them back on. This would mean the public could hold social media companies to account when they unfairly filter content — as Twitter has done to President Trump, Donald Trump Jr., The Drudge Report and Fox News host Laura Ingraham. Ideally, users should be able to report content that has been unfairly filtered — this would provide a countervailing force against bad-faith left-wing mobs, who unfairly flag conservative content for filtering.
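Taken together, the three rules above amount to a simple feed model. The sketch below is purely illustrative — the names (`Post`, `User`, `render_feed`) and the category labels are my own invention, not any platform’s actual code — but it shows how the rules compose: unlawful content may still be removed, filters start empty, and anything a user’s own filters hide is returned with a label rather than silently dropped.

```python
# Hypothetical sketch of the "ban the bans" policy as a feed model.
# All names here are illustrative, not any platform's real API.
from dataclasses import dataclass, field

@dataclass
class Post:
    text: str
    lawful: bool = True                              # rule 1: only unlawful content is removable
    categories: set = field(default_factory=set)     # e.g. {"spam"}, assigned by the platform

@dataclass
class User:
    enabled_filters: set = field(default_factory=set)  # rule 2: empty by default (strictly opt-in)

def render_feed(posts, user):
    """Return (visible_posts, hidden_posts_with_labels) for one user."""
    visible, hidden = [], []
    for post in posts:
        if not post.lawful:
            continue  # rule 1: unlawful content may still be removed outright
        matched = post.categories & user.enabled_filters
        if matched:
            hidden.append((post, sorted(matched)))  # rule 3: hidden content carries its labels
        else:
            visible.append(post)  # categories the user never opted into have no effect
    return visible, hidden
```

With no filters enabled, every lawful post is visible; enabling the (hypothetical) “spam” filter moves matching posts into the labeled hidden list, where the user can still inspect what was filtered and why.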

* * *

There is already a workable model of opt-in filtering, in the form of Google’s “Safe Search.” In the top-right corner of every Google search page, you’ll see a message saying “safe search on” or “safe search off.” This determines whether you see pornographic and “potentially offensive content” in your search results. It’s a very simple feature that gives power to users rather than Google executives, and it’s been in place for over a decade.

If it works for Google, why shouldn’t it work for Twitter, Facebook and YouTube? The argument that social media companies need to ban content is a sham. There is no reason to do so when users can voluntarily choose not to see it instead, and when platforms can provide them with the tools to do so. As always, President Trump’s position on this matter isn’t radical — it’s commonsense.

Filters, on their own, aren’t a silver bullet solution. Social media CEOs would be tempted to outsource the curation of filters to their own users, allowing them to smear virtually anyone with the “hate speech” label with no straightforward way to hold them accountable (especially if they’re anonymous). The infamous Block Bot, a user-led blocklist that categorized mainstream conservatives and liberal centrists alike as “bigots” for opposing far-left ideas, highlights the hazards of this approach. Tech platforms must create their own official filters and be held accountable for incorrect categorizations — of which there will no doubt be many.

While it wouldn’t eliminate them as a political threat, opt-in filters would still severely hamper the ability of big tech CEOs to influence elections. They would forever lose the power to permanently purge wrongthinkers from the digital public square. No matter how many filters they slap on a conservative account, users would still have to consciously choose not to view the content. Banning the bans wouldn’t end the campaign against big tech’s biases, but it’s still a crucial battle to win.

Social media companies will bitterly defend the power to determine what their users can and cannot see. Democrats, far-left activists and CNN (but I repeat myself) will also defend their right to do so because they want to use the power of private corporations to silence their political opponents. Only a concerted effort by the grassroots and the Republican establishment stands a chance of defeating them.

The question at hand is both simple, and – with the midterms just two months away – vital. Who should have the power to determine what is seen on the internet; a handful of leftists in Silicon Valley, or the people?

Allum Bokhari is the senior technology correspondent at Breitbart News. You can follow him on Twitter and Gab.ai, and add him on Facebook. Email tips and suggestions to allumbokhari@protonmail.com.

