In an article recently published by New York Magazine titled “Facebook Stopped Russia. Is That Enough?” the magazine looks into Facebook’s latest efforts to prevent election interference on its platform.
Ahead of the 2018 midterm elections, Facebook cracked down hard on what it identified as inauthentic accounts on its platform in an attempt to prevent election interference. The company reportedly blocked 30 accounts on Facebook and another 85 on Instagram. The blocked Facebook pages reportedly communicated mainly in French and Russian, while the majority of the blocked Instagram accounts posted in English. Facebook’s head of cybersecurity policy Nathaniel Gleicher stated: “Typically, we would be further along with our analysis before announcing anything publicly. But given that we are only one day away from important elections in the U.S., we wanted to let people know about the action we’ve taken and the facts as we know them today.”
The company also opened an election “war room” to combat election-related issues. Facebook’s Product Manager of Civic Engagement, Samidh Chakrabarti, said in an interview that the war room is a “physical” room which will be used to “take quick and decisive action” against possible cases of foreign interference during the midterm elections.
The New York Magazine article, “Facebook Stopped Russia. Is That Enough?”, discussed Facebook’s efforts to protect its platform from political influence campaigns. The article states:
Despite the lack of an election campaign, there was probably a small victory party in Menlo Park on Tuesday night: Finally, a major news event had passed in which no one was blaming Facebook. For all the social network’s elaborate preparation — and for all the close fact-checking attention of news organizations — the election had come and gone, and no one seemed prepared to suggest that Facebook, or social media more generally, had played a decisive role, for good or for ill. So far, the close races have come down to the kind of standard modern political campaigns we’re much more used to, not networks of state-sponsored Russian bots or Macedonian teenagers spreading conspiracy theories for money. The idea of anyone writing an article claiming that “Republicans Control the Senate Because of Facebook” seems far-fetched.
This is, needless to say, a striking change from 2016, when Facebook was blamed, sometimes entirely, for electing Donald Trump. It seems worth asking: Did the much-publicized Facebook “War Room,” dedicated to stopping misinformation and coordinated influence campaigns — or, at least, dedicated to convincing reporters that Facebook was taking those problems seriously — actually work? Is social media safe again? Has Facebook finally beaten fake news?
The article further questions whether Facebook has indeed solved the issue of misinformation on its platform that became such a hot topic following the 2016 presidential election:
Two years ago, in the aftermath of Trump’s unexpected victory, Facebook, and to a lesser extent Twitter, found themselves cast as some of the election’s biggest villains. What the big new social networks had done, in a word, was “misinform”: they’d created and encouraged platforms that were now being used to spread not just hyperpartisan “news” but out-and-out misinformation, wild conspiracy theories, and divisive propaganda — some of it, we would learn, created and distributed by trolls and operatives at the direction of the Russian government. Under some pressure, and after some resistance, both companies vowed to address and fix their misinformation problems and become good citizens of the sphere.
Have they succeeded? The trouble with assessing what Facebook has managed to fix in the last two years is that it is an enormous platform and that “misinformation” is a vague category. What, exactly, was the “misinformation” at issue, and how had Facebook or Twitter abetted it? In the coverage that followed the 2016 election, two particular examples came to the forefront. The first, of course, was “fake news” — the phrase that, before it became a presidential incantation, referred to the Facebook-specific phenomenon of websites built to look and feel like legitimate news sources, but which published mostly fear-mongering fictions or wish-fulfillment under the guise of “satire” for the purpose of enticing large audiences and selling advertising. The second was Russian influence operations — the now well-documented practice of Kremlin-sponsored trolls creating and posting to Facebook and Twitter pages with the intent of sowing discord, confusion, and mistrust.