YouTube Will Warn Users to ‘Keep Comments Respectful’ if AI Thinks Their Post Is Mean

In this Tuesday, Feb. 28, 2017, file photo, YouTube CEO Susan Wojcicki speaks. (AP Photo/Reed Saxon)

Google’s YouTube is taking further steps to crack down on what it considers bullying, hate speech, and negative comments on its platform by introducing a new feature reminding users to “keep comments respectful.” When the company’s automated systems detect a comment that may be offensive, the platform will warn the user before allowing the comment to be posted.

Engadget reports that YouTube, the video hosting platform owned by tech giant Google, is taking steps to crack down on bullying, hate speech, and generally negative comments on the platform by introducing a new feature that will ask users to “keep comments respectful” when automated systems detect that a comment they are about to post may be offensive.

The new change is rolling out on Android first but will expand to other platforms in the future. The feature is not unlike one employed by Facebook’s photo-sharing app Instagram, which displays a pop-up reminding users to “keep comments respectful” before they post. The user can then edit their comment or post it anyway.

The new update is part of a broader effort by YouTube to reduce hate speech and make its platform more attractive to new creators. Alongside the new pop-ups, YouTube will also test a feature for YouTube Studio that automatically filters out “potentially inappropriate and hurtful comments” so that video creators can more easily avoid seeing them.

YouTube also announced this year that it has “heard concerns” from some creators that there is bias in its monetization features, but that the site does not have an accurate way to investigate the claims as it does not have data on how its video creators “identify.”

To address potential bias, YouTube plans to ask channel owners to voluntarily disclose information about their gender, sexual orientation, race, and ethnicity so that it can investigate whether there may be a bias against specific groups in its systems. In a blog post, YouTube wrote: “We’ll then look closely at how content from different communities is treated in our search and discovery and monetization systems. We’ll also be looking for possible patterns of hate, harassment, and discrimination that may affect some communities more than others.”

YouTube has promised to disclose its findings and says that it is “committed to working to fix” issues it discovers.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan or contact via secure email at the address lucasnolan@protonmail.com
