Facebook Announces AI Program to Flag Offensive Content in Live Streams

Facebook "Instant Games" is launching in 30 countries with 17 titles available
AFP

Facebook has announced that it is developing an artificial intelligence program to analyze and flag offensive content in live-streamed videos.

Reuters reports that Facebook is expanding its existing automatic image detection system, which is used to flag nudity or other offensive imagery in photos and to provide facial detection features, to cover Facebook Live videos. Facebook’s director of Applied Machine Learning, Joaquin Candela, said the current system is “an algorithm that detects nudity, violence, or any of the things that are not according to our policies.”
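
As a rough illustration of the kind of per-frame policy check Candela describes, the sketch below scores a single video frame against per-category thresholds. The category names, threshold values, and scoring model are assumptions made for illustration and are not Facebook's actual system.

```python
from dataclasses import dataclass

# Illustrative policy thresholds per category (assumed values, not Facebook's).
POLICY_THRESHOLDS = {
    "nudity": 0.85,
    "violence": 0.80,
}

@dataclass
class FrameScores:
    """Scores a hypothetical vision model might return for one video frame."""
    nudity: float
    violence: float

def flag_frame(scores: FrameScores) -> list[str]:
    """Return the policy categories whose score meets or exceeds its threshold."""
    flagged = []
    if scores.nudity >= POLICY_THRESHOLDS["nudity"]:
        flagged.append("nudity")
    if scores.violence >= POLICY_THRESHOLDS["violence"]:
        flagged.append("violence")
    return flagged

if __name__ == "__main__":
    # A frame whose violence score crosses the assumed threshold gets flagged.
    print(flag_frame(FrameScores(nudity=0.10, violence=0.92)))  # ['violence']
```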

Facebook began using similar software to flag offensive video content back in June, but the company is only now bringing the feature to its live video streaming service. The artificial intelligence used to flag content in live videos is still experimental and has a few obstacles to overcome before it is fully functional.

Candela described two of the challenges the program faces: “One, your computer vision algorithm has to be fast, and I think we can push there, and the other one is you need to prioritize things in the right way so that a human looks at it, an expert who understands our policies, and takes it down.”

Facebook stated that the flagging process won’t be entirely automated. The existing system already processes tens of millions of user reports each week, with each report forwarded to a Facebook reviewer who has the appropriate subject-matter expertise to decide whether the content should be deleted. Facebook’s live video flagging is reportedly intended to work in a similar manner.
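
The division of labor Candela and Facebook describe, automated flagging followed by prioritized human review, can be illustrated with a short sketch. The urgency heuristic, field names, and weights below are assumptions made for illustration and are not drawn from Facebook's actual system.

```python
import heapq
from dataclasses import dataclass, field
from typing import Optional

@dataclass(order=True)
class ReviewItem:
    # heapq is a min-heap, so the negated urgency is stored to pop the
    # most urgent stream first.
    sort_key: float
    stream_id: str = field(compare=False)

class ReviewQueue:
    """Priority queue of flagged live streams awaiting human review."""

    def __init__(self) -> None:
        self._heap: list[ReviewItem] = []

    def add(self, stream_id: str, model_confidence: float, viewer_count: int) -> None:
        # Assumed urgency heuristic: weight the model's confidence by audience size.
        urgency = model_confidence * (1 + viewer_count / 1000)
        heapq.heappush(self._heap, ReviewItem(sort_key=-urgency, stream_id=stream_id))

    def next_for_review(self) -> Optional[str]:
        # A human reviewer pulls the most urgent flagged stream first.
        return heapq.heappop(self._heap).stream_id if self._heap else None

if __name__ == "__main__":
    queue = ReviewQueue()
    queue.add("stream-a", model_confidence=0.70, viewer_count=50)
    queue.add("stream-b", model_confidence=0.95, viewer_count=5000)
    print(queue.next_for_review())  # stream-b: higher confidence, larger audience
```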

Lucas Nolan is a reporter for Breitbart Tech covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan_ or email him at lnolan@breitbart.com
