Facebook is believed to be testing a new feature intended to help it eliminate so-called “fake news” from the social media site.
Three separate Twitter users have reported instances of Facebook displaying a “survey” below links to news items, asking account holders: “To what extent do you think that this link’s title withholds key details of the story?”
Readers are asked to respond from a choice of five options, ranging from “not at all” to “completely”. The stories linked were from Rolling Stone magazine, The Philadelphia Inquirer, and Chortle, a news site which reports on comedy.
Spotted this survey at the bottom of a Facebook post earlier. Must be part of a crackdown on clickbait. Interesting pic.twitter.com/wZbLhof9k1
— Tom F (@_tomaf) December 2, 2016
A Facebook survey to see how accurate a Rolling Stone headline is. Pizzagate shows that information on social media fucking matters. pic.twitter.com/i4PIsbFhYF
— Jorge (@iamjorgecamargo) December 5, 2016
Facebook has come under fire from the mainstream media for the propagation of so-called fake news, following the election of Donald Trump as America’s next president.
In November, The Guardian noted: “Rather than connecting people – as Facebook’s euphoric mission statement claims – the bitter polarisation of the social network over the last eighteen months suggests Facebook is actually doing more to divide the world.”
According to The Guardian, it is not clear how Facebook intends to use the information collected by the survey.
However, in a post to the network on November 18, Facebook founder Mark Zuckerberg acknowledged that the site was committed to tackling “misinformation”.
“Historically, we have relied on our community to help us understand what is fake and what is not. Anyone on Facebook can report any link as false, and we use signals from those reports along with a number of others – like people sharing links to myth-busting sites such as Snopes – to understand which stories we can confidently classify as misinformation. Similar to clickbait, spam and scams, we penalize this content in News Feed so it’s much less likely to spread,” said Mr. Zuckerberg.
He conceded that the problem was “complex, both technically and philosophically,” as the platform believes in “giving people a voice”. However, he said he believed the solution lay in “stronger detection”, achieved by improving Facebook’s “ability to classify misinformation”, and in making it easier for users to flag suspect stories, since “making it much easier for people to report stories as fake will help us catch more misinformation faster”.
Zuckerberg also pledged to liaise with the news industry, saying: “We will continue to work with journalists and others in the news industry to get their input, in particular, to better understand their fact checking systems and learn from them.”
The experiment raises serious concerns over censorship on the global platform.
Zuckerberg has previously come under fire for colluding with the European Union on censorship and the promotion of “independent counter-narratives”.
In May, it was one of four major internet companies, along with YouTube, Twitter, and Microsoft, to sign up to a partnership with the European Commission “to respond to the challenge of ensuring that online platforms do not offer opportunities for illegal online hate speech to spread virally”.
Members of the European Parliament branded the partnership “Orwellian”.