Facebook Details How It Will Penalize ‘Borderline Provocative’ Content

The Associated Press

In a lengthy post, Facebook CEO Mark Zuckerberg addressed common concerns and detailed his near-term plans for Facebook, which include penalizing “borderline” provocative content and discouraging users from engaging with it.

In a section of the post titled “Discouraging Borderline Content,” Zuckerberg declared, “One of the biggest issues social networks face is that, when left unchecked, people will engage disproportionately with more sensationalist and provocative content.”

“This is not a new phenomenon. It is widespread on cable news today and has been a staple of tabloids for more than a century. At scale it can undermine the quality of public discourse and lead to polarization. In our case, it can also degrade the quality of our services,” Zuckerberg continued. “Our research suggests that no matter where we draw the lines for what is allowed, as a piece of content gets close to that line, people will engage with it more on average — even when they tell us afterwards they don’t like the content.”

Zuckerberg then added that it “is a basic incentive problem that we can address by penalizing borderline content so it gets less distribution and engagement.”

“By making the distribution curve look like the graph below where distribution declines as content gets more sensational, people are disincentivized from creating provocative content that is as close to the line as possible,” the Facebook CEO explained. “This process for adjusting this curve is similar to what I described above for proactively identifying harmful content, but is now focused on identifying borderline content instead. We train AI systems to detect borderline content so we can distribute that content less.”
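Zuckerberg’s description amounts to down-weighting a post’s distribution as it approaches the policy line, so that engagement no longer rewards being “as close to the line as possible.” The sketch below is a minimal illustration of that idea in Python, assuming a hypothetical classifier output borderline_score between 0 and 1; the function names, the penalty shape, and the steepness parameter are illustrative assumptions, not Facebook’s actual ranking code.

```python
# Illustrative sketch only -- not Facebook's actual ranking formula.
# Assumes a hypothetical classifier output `borderline_score` in [0, 1],
# where 1.0 means the post sits right at the policy line.

def distribution_weight(borderline_score: float, steepness: float = 4.0) -> float:
    """Down-weight a post's distribution as it approaches the policy line.

    With steepness=4, a clearly benign post (score 0.0) keeps full reach,
    while a post at the line (score 1.0) gets essentially no algorithmic
    distribution, inverting the usual engagement incentive.
    """
    return (1.0 - borderline_score) ** steepness


def rank_score(predicted_engagement: float, borderline_score: float) -> float:
    """Combine predicted engagement with the borderline penalty."""
    return predicted_engagement * distribution_weight(borderline_score)


if __name__ == "__main__":
    # Two posts with identical predicted engagement: the near-the-line post
    # ends up with far less distribution.
    print(rank_score(predicted_engagement=100.0, borderline_score=0.1))  # ~65.6
    print(rank_score(predicted_engagement=100.0, borderline_score=0.9))  # ~0.01
```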

Zuckerberg’s post continued:

The category we’re most focused on is click-bait and misinformation. People consistently tell us these types of content make our services worse — even though they engage with them. As I mentioned above, the most effective way to stop the spread of misinformation is to remove the fake accounts that generate it. The next most effective strategy is reducing its distribution and virality. (I wrote about these approaches in more detail in my note on Preparing for Elections.)

Interestingly, our research has found that this natural pattern of borderline content getting more engagement applies not only to news but to almost every category of content. For example, photos close to the line of nudity, like with revealing clothing or sexually suggestive positions, got more engagement on average before we changed the distribution curve to discourage this. The same goes for posts that don’t come within our definition of hate speech but are still offensive.

This pattern may apply to the groups people join and pages they follow as well. This is especially important to address because while social networks in general expose people to more diverse views, and while groups in general encourage inclusion and acceptance, divisive groups and pages can still fuel polarization. To manage this, we need to apply these distribution changes not only to feed ranking but to all of our recommendation systems for things you should join.

One common reaction is that rather than reducing distribution, we should simply move the line defining what is acceptable. In some cases this is worth considering, but it’s important to remember that won’t address the underlying incentive problem, which is often the bigger issue. This engagement pattern seems to exist no matter where we draw the lines, so we need to change this incentive and not just remove content.

I believe these efforts on the underlying incentives in our systems are some of the most important work we’re doing across the company. We’ve made significant progress in the last year, but we still have a lot of work ahead.

By fixing this incentive problem in our services, we believe it’ll create a virtuous cycle: by reducing sensationalism of all forms, we’ll create a healthier, less polarized discourse where more people feel safe participating.

Zuckerberg also admitted that the platform’s review teams make the wrong call in more than one out of every ten cases.

“Today, depending on the type of content, our review teams make the wrong call in more than 1 out of every 10 cases,” Zuckerberg claimed. “Reducing these errors is one of our most important priorities… It’s important to remember though that given the size of our community, even if we were able to reduce errors to 1 in 100, that would still be a very large number of mistakes.”
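To make the scale argument concrete, the arithmetic below uses an assumed, purely hypothetical volume of one million review decisions per day (Zuckerberg’s post gives no figure) to show why even a 1-in-100 error rate still produces a large absolute number of mistakes.

```python
# Hypothetical illustration of Zuckerberg's scale point; the daily decision
# volume is assumed for the sake of arithmetic, not taken from the post.
daily_review_decisions = 1_000_000

for error_rate in (0.10, 0.01):  # "1 out of 10" vs. "1 in 100"
    mistakes_per_day = daily_review_decisions * error_rate
    print(f"{error_rate:.0%} error rate -> {mistakes_per_day:,.0f} wrong calls per day")
```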

Charlie Nash is a reporter for Breitbart Tech. You can follow him on Twitter @MrNashington, or like his page at Facebook.
