In the wake of the tragic San Bernardino attack, the Senate is once again considering the “Requiring Reporting of Online Terrorist Activity Act.”
It’s back, and it’s every bit as bad as before.
Sen. Ron Wyden previously placed a procedural hold on the bill, calling it “vaguely defined” and warning that it would introduce an “impossible compliance problem.” He’s not wrong. The Requiring Reporting of Online Terrorist Activity Act is about as efficient and easy to use as its name would suggest.
The law would mandate that “Whoever, while engaged in providing an electronic communication service or a remote computing service to the public through a facility or means of interstate or foreign commerce, obtains actual knowledge of any terrorist activity” would need to provide that information as soon as “reasonably possible” to the “appropriate authorities.”
Simply translated, that means that social media networks from Facebook to Snapchat to Twitter would need to identify terrorist activity on their platforms and then copy local and/or federal authorities on their findings. Not only would these networks, host to literally billions of words a day, need to decide what constitutes “terrorist activity,” but they would also have to judge where that information should go.
Aside from putting real national security concerns in the hands of social media moderators, the bill raises countless questions about the nature of free speech on the internet. It doesn’t draw any lines around what would constitute this “knowledge of terrorist activity,” and it doesn’t explain how its ideas would go from generalized wishful thinking to something upon which government agents could reasonably act.
The “impossible compliance problem” becomes crystal clear: A social network’s only possible approach to the potential liability would be to simply pass everything even remotely suspicious to government authorities.
Mention Mohammed on Instagram? Terrorist activity.
Lose your temper on Twitter? Terrorist activity!
Speak out against prevailing social or political trends on Facebook? Presto, you’re a terrorist.
It’s nonsense, but it’s the only way a social network could protect itself under this ludicrous broad-strokes approach to lawmaking. The only other option would be to report nothing at all, which leads us back to the drawing board, because the government could then hold any of these networks and their employees responsible for content posted but never reported.
The entire proposal reeks of tone-deaf inexperience with modern technology. Instead of producing a workable solution, lawmakers’ frantic scramble to look like they’re doing something in the wake of a tragedy has made our government look less competent and informed than ever before. This isn’t the sort of thing that anyone with a working knowledge of the internet’s structure and usage would ever propose.
Sure, the bill would help deflect blame from the government’s own lax attention to terrorist activity online. Tashfeen Malik managed to openly declare allegiance to the Islamic State around the time of the mass murder. Who better to take the heat for this grotesque event than Facebook employees?
Follow Nate Church @Get2Church on Twitter for the latest news in gaming and technology, and snarky opinions on both.