Facebook accidentally rolled out what appeared to be a crowd-sourced “hate speech” flagging system, asking users whether individual posts constituted hate speech.
Earlier today, Facebook users began reporting that questionnaires had been attached to innocuous posts, asking them if they considered the content of the posts to be “hate speech.”
It quickly became apparent that the feature had been rolled out in an unfinished state:
As a test, yep ¯\_(ツ)_/¯ pic.twitter.com/21mymnN9KJ
— Matt Navarra (@MattNavarra) May 1, 2018
Facebook VP of product Guy Rosen said that the feature was a “bug” and a “test” that had been incorrectly applied to all posts, including Mark Zuckerberg’s.
Some people saw 'does this post contain hate speech' today on some posts. This was a test – and a bug that we reverted within 20 mins. It was shown for a short time on posts regardless of their content (like this one). pic.twitter.com/iuNKSVTOqQ
— Guy Rosen (@guyro) May 1, 2018
The feature appears to be a method for determining what qualifies as “hate speech” based on user feedback, but it is unknown whether Facebook intends to roll it out to all users or only to a select group.
A user-driven approach is a radical departure from the “Facebook Supreme Court” that Mark Zuckerberg previously suggested, in which a hand-picked panel of elites would have the final say on what content is banned under the platform’s hate speech rules. Both approaches suffer from the problem of subjectivity and bias, as there is no objective definition of hate speech — as Zuckerberg himself concedes.
We do know that language attacking the “immigration status” of a person is considered hate speech under Facebook’s rules. Republican congressman Lamar Smith has raised concerns that this will make criticism of illegal immigration impossible on the platform.