The Republican Study Committee, the largest ideological caucus in the House of Representatives with 125 members, is making tech censorship a priority following the mass banning of conservatives including President Donald Trump in recent days.
A memo circulated to committee members calls for a renewed focus on proposed reforms to Section 230 of the Communications Decency Act (CDA), the law that immunizes tech companies against lawsuits arising from defamation and censorship on online platforms.
The memo recognizes that with Democrats in control of both the House and Senate, reform efforts aimed at curbing censorship are unlikely to succeed in the next two years. But efforts by House members could still send signals to the Supreme Court, which has yet to hear a case on Section 230.
One of the proposed reforms suggested by the memo would tie tech companies’ legal immunities to their remaining neutral with regard to legal content.
Congress could attempt to modify the definition of internet content provider to ensure that online platforms appreciably involved in the production of content hosted on their site are treated as internet content providers, and thus lose their immunity under Section 230. Such a reform could have significant ramifications in terms of imposing publisher liability on online platforms that fail to act passively toward content on their sites…. Social media companies that undertake expansive content moderation policies could then be considered content providers rather than merely an “interactive computer service.”
Such a reform, if enacted, could force tech companies to make any filters on constitutionally protected speech optional to users if they want to retain their legal privileges.
Another proposed reform highlighted by the memo would merely ask tech companies to apply their terms of service evenly.
Conservatives could seek to include within the definition of “information content provider” language clarifying that entities that undertake disparate moderation or censorship of similarly situated material would be treated as an “information content provider.” Such entities could then be treated as a publisher and would not receive liability protection under C1 (or protection under C2). The DOJ proposal and Ranking Member Jordan’s bill contain a provision that similarly would seek to rein in such disparate treatment. However, it does so by clarifying that content moderation undertaken in good faith and pursuant to established terms of service does not alone remove C1 immunity. Good faith in this context would, among other things, require equal application of terms of service to similarly situated material.
While this would not remove the ability of tech companies to censor legal speech, it would force them to apply such standards evenly. For example, it might become harder for mainstream media headlines defending left-wing rioting and violence to remain hosted on online platforms.
Non-violence is an important tool for protests, but so is violence. https://t.co/DD6gaLPKuF
— Slate (@Slate) June 4, 2020
The memo also notes an important legal challenge for would-be Section 230 reform: the fact that the provision of the law related to the removal of content, subsection (c)(2), might be redundant, because courts have ruled that (c)(1), the immunity related to hosting content, already grants permission to censor.
C2’s liability protections have not commonly been utilized by online platforms as a defense for their editorial actions in court. Instead, a number of courts have held that C1 allows for online companies to exercise traditional publisher discretion, notably removing content. This is the interpretation with which Justice Thomas recently took issue. Thomas noted, “…from the beginning, courts have held that §230(c)(1) protects the ‘exercise of a publisher’s traditional editorial functions—such as deciding whether to publish, withdraw, postpone or alter content.’” Thus, it follows that even if a platform lacks C2 immunity and censors a user’s content, courts could find such content moderation protected under C1. In other words, under current judicial interpretation, the text of C1 has been viewed as embodying the entire purpose of Section 230.
The memo recognizes, therefore, that reforms targeted solely at subsection (c)(2) may not be sufficient to end the censorship of legal content.
The memo also highlights the bill proposed by Rep. Paul Gosar (R-AZ), which would allow the moderation of lawful content so long as such moderation is made optional to users.
This proposal is uniquely found in Congressman Gosar’s “Stop the Censorship Act.” This proposal would extend civil liability protection to platforms who let users have the option to moderate or filter the content they are seeing. By doing this at the same time as replacing C2’s “otherwise objectionable” catchall with “unlawful, or that promotes violence or terrorism,” the Stop the Censorship Act empowers the user to choose a pathway wherein the platform limits its content moderation to such defined categories (or else risks losing liability protections) or one that allows platforms to freely moderate content (without risking the loss of liability protections). This approach also maintains a light government footprint by allowing companies to offer users an online experience that integrates the preferred moderation policies of the platform.
One issue not addressed by the memo is how tech companies might respond to such efforts. For example, a company that wished to control political discussion on its platform but was forced to make such censorship optional might offer users only a single filter to opt into, forcing them to choose between political censorship and a completely unfiltered experience, including spam and obscenity.
Allum Bokhari is the senior technology correspondent at Breitbart News. His new book, #DELETED: Big Tech’s Battle to Erase the Trump Movement and Steal The Election, which contains exclusive interviews with sources inside Google, Facebook, and other tech companies, is currently available for purchase.