Section 230 of the Communications Decency Act of 1996 has been the internet’s ‘road to hell paved with good intentions’ for a generation.
Just this afternoon, Section 230 was used to ban the President of the United States, as well as several other high-ranking civil servants and members of the American Bar, from speaking to their audiences.
When the president announced that he would veto the National Defense Authorization Act unless Congress repealed Section 230 of the 1996 Communications Decency Act, many were surprised. Opponents of Section 230 often claim that it unjustly and unfairly shields social media platforms from liability for content posted on their sites.
For the last decade, social media platforms like Twitter and Facebook have used Section 230 as both a sword and a shield. The platforms use the section to defend against class-action lawsuits when someone becomes violently radicalized on their platforms.
Twitter did this in Fields v. Twitter (2016). In 2016, Tamara Fields brought action against Twitter in the Northern District of California after her husband was killed in a terror attack. Fields alleged that Twitter failed (among other things) to intercept messages sent by ISIS operatives to potential recruits on its platform and to take down videos, images, and tweets spreading ISIS propaganda and other radical Islamist material. Fields alleged in her court filings that this unfettered spread of material led to the radicalization of as many as 30,000 ISIS recruits. Twitter invoked Section 230 in its defence, brandishing it as a shield and claiming that the case should be dismissed on account of the ‘Publishers/Speakers’ clause. The clause reads:
“(1) Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
Twitter argued that it was not responsible for the content published by its users because it neither created nor published that content itself. The court accepted Twitter’s argument and dismissed the case on Section 230 grounds.
Facebook made a very similar argument the following year. In 2017, Racheli Cohen brought action against Facebook in Cohen, et al. v. Facebook, Inc. The case, brought in the Eastern District of New York, sought tort damages: Cohen alleged that Facebook allowed its platform to become an online haven for terrorist communication and organization, eventually resulting in terror attacks against Israeli citizens. Facebook knowingly allowed terrorist organizations to communicate and plan operations using its platform and did nothing to stop them. Israeli citizens were killed, yet the court denied every claim because Facebook raised the Section 230 shield, making nearly identical arguments to Twitter’s in Fields.
Simultaneously, these platforms claim to be neutral platforms under the protection of Section 230, then seek to use those protections as a sword. They censor or terminate users for their content completely arbitrarily (acting as a publisher) and then cut down accusations that they are acting as publishers by brandishing Section 230. They claim they can exercise the same discretion a publisher can, terminating users (or their content) at will if they deem the content to violate one of their arbitrary ‘community guidelines’.
Twitter did this recently when it intercepted and killed links to a two-part exposé by the New York Post on Hunter Biden’s drug use and alleged political corruption. In doing so, Twitter chose to censor a venerable member of the fourth estate and the United States’ oldest print publication. Twitter’s explanation was that it did not believe all of the accusations in the New York Post story had been verified. But since when has that been a requirement? Accusations are made by serious publications all the time. Verified or not, it would stretch credulity to say that a truly “neutral platform” for discussion (as Twitter claims to be) should be choosing to censor material. Twitter never intercepted stories about the alleged ‘Russian pee party’ involving the President of the United States and Russian prostitutes. Indeed, Twitter has rarely intercepted scandalous or salacious stories before. Twitter did not kill links to stories when David Gest accused Liza Minnelli of beating him up. Twitter did not kill links to conspiracy theories that Britney Spears is allegedly a hostage in her own home. Twitter did not even intercept the recirculation of old rumours that Prince Charles is not Prince Harry’s real father. By suddenly discovering a new right to censor stories it deemed unverified, Twitter cannot logically claim to be a neutral platform for open discussion.
Twitter seeks to portray itself as a type of online town hall, but town halls cannot censor. Certainly, and it goes without saying, plenty of unverified things have been spoken on the steps of town halls and during debates within their walls. As Supreme Court ruling after Supreme Court ruling has affirmed, public venues cannot censor speech for its content. Only in the rarest of circumstances can a public venue like a street parade or town hall deny certain individuals (or groups) the right to speak, and only when it is clear that their speech is intended and likely to imminently incite lawless activity.
Twitter is not alone in this behaviour, however. Facebook often seeks to portray itself as an online town hall of sorts: the way Facebook speaks of itself in its commercials, or when it leaps to invoke Section 230 as a shield, is very much the language of a neutral platform. Then, in similar Orwellian doublespeak, Facebook claims to be a pseudo-publisher when it defends against lawsuits. In a case brought by Laura Loomer this year, Facebook’s doublespeak is clear. Under its terms and conditions, Facebook claims the right to terminate users who are “organizations or individuals involved in the following: Terrorist activity, Organized hate, Mass or serial murder, Human trafficking, [or] Organized violence or criminal activity.” In official statements released after Loomer’s banning, Facebook stated that it “ban[s] individuals who promote or engage in violence and hate, regardless of ideology.”
(In a clever legal manoeuvre, Loomer has sued for defamation. Loomer claims that if these are the grounds on which she was terminated, then Facebook is implicitly stating that Loomer is a terrorist, murderer, human trafficker, or criminal, any of which, if accepted by the court, would be defamation per se.)
Facebook and Twitter cannot continue to be the Schrödinger’s cats of the internet, existing simultaneously as both neutral platform and opinionated publisher. They cannot claim to be neutral platforms in 2017 and then, just three years later, claim the ability to arbitrarily terminate users (and their content) like a publisher.
Repealing Section 230 is one of two solutions for bringing the social media giants to heel. Rescinding Section 230’s legal protections would allow Facebook and Twitter (as well as others) to be held accountable for their online behaviour. Tort cases against these giants could be won more often when they fail to intercept the organization of terror on their platforms, and victims of heinous and vile acts could seek restitution for the crimes the platforms knowingly failed to stop. Repealing Section 230 would be a huge win for victims.
Unfortunately, repealing Section 230 would not resolve cases like Laura Loomer’s. A second solution is needed to stop censorship on social media platforms.
It is not uncommon for the government to step in and set industry standards when it becomes apparent an industry is failing to act in the public interest. The government stepped in when new information suggested cigarettes caused cancer. The government stepped in when butchers were selling rotten meat (masked in lye) to the poor around the turn of the century. The government stepped in when it became clear injuries in car wrecks could be reduced by requiring car manufacturers to have seat belts.
So the more nuanced question for the United States Legislature should be: what industry standards should we set in order for the social media giants to maintain Section 230 protection?
Certainly, social media has had some brilliant triumphs. As Diane Sawyer once said, the media used to be “3 hegemonies (ABC, NBC, CBS)” that would broadcast live into your living room at 6 o’clock and give their audiences the only set of facts they would receive since the morning paper. These three hegemonies still maintained their power within my lifetime. Social media disrupted this informational control: people no longer had to depend on only three outlets to decide for them what was important to know, and audiences were freer to investigate things in greater depth, unrestrained by hour-long time slots and commercial breaks.
This is a valuable utility the social media giants have provided, but we cannot allow the once-democratized social media world to regress into a modern form of ABC, NBC, and CBS. We cannot allow the once free and fair exchange of thoughts and ideas online to be censored by an ever-growing hegemonic consolidation of power. Facebook and Twitter (as well as others) cannot be allowed to choose who we hear from the way the nightly news hegemonies once did.
What is the solution? How do we keep the social media giants but make them behave?
(1) Require uniform algorithms across platforms. Social media giants must organize their timelines based on the chronological order of posts, unless the user specifically opts to have their timeline curated by algorithm.
(2) Prohibit platforms from banning users or censoring content unless the content:
    (A) Distributes, solicits, or produces child pornography; or
    (B) Is clearly intended and likely to incite imminent lawless activity.
(3) Impose substantial minimum and maximum damages plaintiffs can seek in court if they are removed from their social media account without a legitimate violation of 2(A) or 2(B).
(4) Establish an online freedom of speech commission at the FCC to track industry compliance with these standards.
(5) Fund the commission with taxes placed on the social media giants.
(6) Require the online freedom of speech commission to report yearly to Congress on industry compliance.
(7) Prohibit platforms from killing links shared on their platforms unless the links contain the material described in 2(A) or 2(B).
(8) Make it a crime for social media companies to intercept links shared by officers of the United States Federal Government and its agencies.
(9) Impose significant criminal penalties and fines on platforms if platforms kill links shared by the United States Federal Government to the Federal Government’s documents, websites or other online presence.
(10) Clarify Section 606 of the Communications Act of 1934 to allow the President to address the nation via social media, as well as television and radio, during times of national emergency.