Report: Google Training Computers to Be Offended to Keep Ads Off Objectionable Content

In this photo illustration the Google logo is reflected in the eye of a girl on February 3, 2008 in London, England. Financial experts continue to evaluate the recent Microsoft $44.6 billion (£22.4 billion) offer for Yahoo and the possible impact on the Internet market currently dominated by Google. (Photo by Chris Jackson/Getty Images)

Google is training its advanced computer systems to be offended over potentially distasteful advert placements, according to a report by The New York Times.

“Over the years, Google trained computer systems to keep copyrighted content and pornography off its YouTube service. But after seeing ads from Coca-Cola, Procter & Gamble and Wal-Mart appear next to racist, anti-Semitic or terrorist videos, its engineers realized their computer models had a blind spot: They did not understand context,” reported The New York Times. “Now teaching computers to understand what humans can readily grasp may be the key to calming fears among big-spending advertisers that their ads have been appearing alongside videos from extremist groups and other offensive messages.”

“Google engineers, product managers and policy wonks are trying to train computers to grasp the nuances of what makes certain videos objectionable,” the NYT continued. “Advertisers may tolerate use of a racial epithet in a hip-hop video, for example, but may be horrified to see it used in a video from a racist skinhead group.”

Several large companies pulled their advertising from Google and its sites, including YouTube, last month, after their adverts appeared alongside “extremist” and “offensive” content.

The companies included AT&T, Verizon, Johnson & Johnson, the BBC, The Guardian, Channel 4, Toyota, McDonald’s, and even the British government, prompting Google to pledge to take a stand against offensive content.

“We take this as seriously as we’ve ever taken a problem,” said Google’s Chief Business Officer, Philipp Schindler. “We’ve been in emergency mode.”

“Computers have a much harder time understanding context, and that’s why we’re actually using all of our latest and greatest machine learning abilities now to get a better feel for this,” he continued. “No system can be 100 percent perfect. But we’re working as hard as we can to make it as safe as possible.”

In a blog post announcement, Google pledged to tackle “hateful” videos on YouTube, as well as provide advertisers with new tools to limit their reach.

However, despite the company’s pledge to crack down on extreme content, a large number of YouTube’s top content creators and personalities claim to have been unfairly affected by the changes, which have cost many of them substantial amounts of advertising revenue.

Charlie Nash is a reporter for Breitbart Tech. You can follow him on Twitter @MrNashington or like his page at Facebook.