OVERVIEW

This week, the United States Supreme Court is set to hear two separate cases challenging the legal immunity that Section 230 of the Communications Decency Act of 1996 (“CDA”) has provided to online platforms and social media websites for more than a quarter century. Section 230 shields online platforms, such as social media websites (e.g., Facebook, Twitter, YouTube, etc.), from liability for content posted by their users. While the provision has been credited with enabling the growth of the modern internet, it has also been harshly criticized in recent years for allowing online platforms to shirk responsibility for harmful content.

For many years, the harmful content at issue largely consisted of speech deemed harassing, discriminatory, or hateful. More recently, critics have focused on online platforms’ willingness to host terrorist propaganda and recruitment materials. Defenders of Section 230 counter that tolerating some risk of harmful content is necessary to protect free speech and encourage online innovation.

In this article, we will analyze Section 230’s impact on anti-terrorism efforts and platform liability, as well as the potential implications for online platforms and social media websites if the Supreme Court were to strike down Section 230. [Please note that this article will not address the First Amendment and Section 230-related implications of some states’ recent efforts to pass laws preventing large social media companies (e.g., Twitter, Facebook, etc.) from censoring conservative speech.]

WHAT IS SECTION 230?

Before we discuss the potential consequences of striking down Section 230, it’s important to understand the provision’s history. Congress passed Section 230 in 1996 as part of the CDA in an early effort to regulate the internet. The provision, codified at 47 U.S.C. § 230(c)(1), states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This means that if a user posts something defamatory, harmful, or illegal on a website, the website owner is not liable for that content; only the user is.

This represents a significant departure from the rules governing newspapers, TV stations, and other traditional forms of media, all of which remain liable for false, malicious, or otherwise harmful content that they publish or broadcast.

To be sure, many experts have credited Section 230 with enabling the growth of the modern internet. It has, after all, allowed websites to host user-generated content without fear of legal action, thus encouraging innovation and the development of new online services. Without Section 230, many popular online platforms, such as social media websites, might not exist in their current form.

SECTION 230 AND ANTI-TERRORISM EFFORTS

There can be no dispute that terrorist organizations, including ISIS and Al Qaeda, have used social media and other online platforms to spread propaganda, recruit new members, and coordinate attacks. This has led to calls for online platforms to do more to combat terrorism online. And while some online platforms have taken voluntary steps to remove terrorist content, including creating “hashes” (digital fingerprints of known terrorist material) that can be used to identify and remove re-uploads of terrorist propaganda, critics contend that Section 230’s sweeping immunity gives platforms little legal incentive to go further, and many continue to call for more aggressive action.
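[For the technically curious, here is a minimal sketch, in Python, of how hash-based matching works. It assumes exact cryptographic (SHA-256) fingerprints; real-world systems, such as the industry’s shared hash databases, instead rely on perceptual hashes (e.g., PhotoDNA or PDQ) that survive re-encoding and cropping. The sample digest below is simply the SHA-256 of the bytes b"test", included so the demonstration finds a match.]

    import hashlib

    # Hypothetical set of "fingerprints" (hex digests) of files previously
    # identified as terrorist propaganda. The entry below is the SHA-256
    # of the bytes b"test", included purely so the demo finds a match.
    KNOWN_HASHES = {
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    }

    def fingerprint(data: bytes) -> str:
        """Compute the SHA-256 digest ("hash") of an uploaded file."""
        return hashlib.sha256(data).hexdigest()

    def is_known_propaganda(upload: bytes) -> bool:
        """Return True if the upload matches a known fingerprint."""
        return fingerprint(upload) in KNOWN_HASHES

    # Screen an upload before it is published.
    if is_known_propaganda(b"test"):
        print("Blocked: file matches a known fingerprint.")
    else:
        print("No match; file proceeds to normal review.")

The appeal of this design is that once one platform flags a file, every participating platform can block re-uploads of the identical file without reviewing it anew. Its weakness is that changing a single byte defeats an exact hash, which is why perceptual hashing is used in practice.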

Indeed, while Section 230 has provided online platforms with legal immunity from liability for user-generated content, critics argue that the provision has enabled those platforms to shirk responsibility for harmful content, including terrorist propaganda and recruitment materials. Simply put, more and more people, including the Biden administration, want to chip away at Section 230 to make it easier for victims of terrorism to hold online platforms accountable for their role in publishing harmful material and in purportedly facilitating terrorist activity.

In fact, in recent years, a number of lawsuits and legal actions have been brought against online platforms for their role in spreading terrorist propaganda and recruitment materials. Nearly all of these lawsuits have been dismissed under the legal framework created by Section 230, including, most notably, a family’s case against YouTube that the United States Supreme Court is set to hear this week. In that case (Gonzalez v. Google LLC), the family of a student murdered in ISIS’s November 2015 Paris attacks, which killed 129 people, seeks to hold YouTube liable for aiding and abetting the attacks by recommending ISIS recruitment videos to its users.

The question is whether such efforts (via lawsuits or an amendment to the CDA) will do more harm than good.

PREVENTING CENSORSHIP v. PREVENTING TERRORISM

There is an ongoing tension between the need to protect free speech and the need to prevent the dissemination of harmful material, including terrorist activity, online. While online platforms have a responsibility to remove terrorist content, they also have a responsibility to protect the free speech rights of their users. This creates a difficult balancing act for online platforms and governments, as they seek to combat terrorist activity while also protecting the rights of users.

One possible solution to this tension, proffered by some supporters of limiting Section 230’s reach, is to create an exception to Section 230 for terrorist content. This would enable law enforcement agencies and victims of terrorism to hold online platforms accountable for their role in facilitating terrorist activity, while leaving the broader immunity, and the speech it protects, intact. Even so, any reform to Section 230 must be carefully crafted to avoid unintended consequences. Creating an exception for terrorist content, for example, could lead to increased censorship and self-censorship, as online platforms over-remove lawful speech to avoid legal liability. It could also lead to a reduction in the quality of online discourse and the exchange of ideas and information.

POTENTIAL CONSEQUENCES OF STRIKING DOWN (OR WEAKENING) SECTION 230

Despite the clear benefits of Section 230, some argue that the provision has enabled online platforms to shirk responsibility for harmful content posted on their sites, and that websites such as Facebook, YouTube, and Twitter should be held accountable for the spread of hate speech, harassment, misinformation, and terrorist propaganda on their platforms. If the Supreme Court were to strike down (or otherwise substantially limit) Section 230, the consequences for the internet as we know it would be far-reaching. Possible outcomes include the following:

  • Consolidation of Online Platforms: If Section 230 were struck down, online platforms could face increased legal liability for user-generated content. This could make it more difficult for new and smaller websites to compete with established platforms such as Facebook and Twitter, which have vast resources (i.e., billions of dollars) to manage legal risk. As a result, we could see a consolidation of online platforms, with a few dominant players controlling most of the online conversation. This would not only stifle healthy competition in the marketplace, but could also have a chilling effect on free speech by concentrating power in the hands of a few large companies.
  • The Rise of Alternative Platforms: On the other hand, if Section 230 were struck down, we could see the rise of alternative platforms that are designed to host user-generated content without fear of legal action. These platforms could be more focused on free speech and less concerned with moderation and content removal. This could lead to a splintering of the online conversation, as different communities congregate on different platforms.
  • Changes in Online Business Models: If Section 230 were struck down, online platforms would have to change their business models to manage the legal risks of hosting user-generated content. This could involve new forms of content moderation, such as more aggressive filtering or pre-moderation of content. It could also involve changes to how platforms generate revenue, as they may need to shift away from advertising-based models to reduce their legal risks.
  • Impact on Online Privacy: If Section 230 were struck down, online platforms could face increased pressure to collect and retain user data to manage legal risks. This could lead to greater scrutiny of online privacy, as users become more aware of the risks associated with sharing personal information online. It could also lead to increased regulation of online platforms, as governments seek to protect users from the risks associated with the collection and use of personal data. [How this might interact with the California Privacy Rights Act is a discussion for another time. In the meantime, if you’re interested in reading my article on the CPRA, click here.]
  • Changes in Online Discourse: If Section 230 were struck down, the nature of online discourse would almost certainly change. Online platforms may become less open to diverse perspectives, as websites grow more cautious about hosting controversial or risky content. This could lead to a decline in the quality of online discourse, as well as a reduction in the exchange of ideas and information on the internet.

CONCLUDING THOUGHT

Ultimately, the issue of Section 230 and anti-terrorism efforts is complex, and addressing it will require a balanced and nuanced approach. Online platforms have a responsibility to remove terrorist content and prevent the spread of harmful materials, while also protecting the free speech rights of their users. At the same time, governments and law enforcement agencies have a responsibility to prevent terrorist activity and hold those who facilitate it accountable.

How the Supreme Court sorts all of this out remains to be seen.