This morning, I spent a few hours researching Section 230 of the Communications Decency Act, considered by many to be the foundational law on internet speech and the center of many political squabbles. I did this to clarify my thinking on the topic and as a prerequisite to truly assessing the many proposals to change the law.
Here is the opening from The Verge’s explainer of the law:
> Section 230 of the Communications Decency Act, which was passed in 1996, says an “interactive computer service” can’t be treated as the publisher or speaker of third-party content. This protects websites from lawsuits if a user posts something illegal.
>
> […]
>
> Sen. Ron Wyden (D-OR) and Rep. Chris Cox (R-CA) crafted Section 230 so website owners could moderate sites without worrying about legal liability.
Writing in my notes, I summarized the above statements like this:
- Internet platforms aren’t liable for user-generated content. They don’t need to remove illegal content.
- Originally passed to encourage sites to moderate content.
Thinking it through, I found these two bullet points confusing. Why did Ron Wyden and Chris Cox encourage moderation on the internet by writing a law that removes an incentive to do so?
To get clarity on this question, I continued reading explanations of the law. I found this piece by Ben Thompson to be particularly helpful because it describes the legal landscape for internet speech before the Communications Decency Act was signed into law.
Internet Speech Before Section 230
In 1991, CompuServe was sued for defamatory remarks posted on its forums. The judge ruled that since CompuServe had no editorial control over the forum content, it was not liable for that content, similar to how a public library is not liable for the contents of a book it distributes.
In 1995, Prodigy was sued for libel over content posted on its own internet forums. This time, the judge ruled that Prodigy was liable for its forum content because the forums had moderators and used content-screening software. The court considered Prodigy’s content moderation legally equivalent to the editorial control a newspaper exercises as a publisher. The suit sought $200 million in damages, and a new court precedent was established: moderating internet content in any way makes an internet service liable for all of its user-generated content.
Taken together, these two court decisions left no middle ground between a distributor (CompuServe, with no moderation) and a publisher (Prodigy, with some moderation). To create a video-sharing website like YouTube in 1996, your options were to avoid moderation entirely (not desirable for obvious reasons) or be legally liable for every video (which would flood a service even a fraction the size of YouTube with lawsuits).
Clarifying My Original Confusion
In response to this dilemma, Wyden and Cox wrote Section 230 specifically to allow any internet service to moderate content based on any standard without assuming liability for that content.
With this information, the Electronic Frontier Foundation’s explanation of the law made much more sense to me:
> [Section 230] is one of the most valuable tools for protecting freedom of expression and innovation on the internet. […] This comes as somewhat of a surprise, since the original purpose of the legislation was to restrict free speech on the Internet. Section 230 says that… online intermediaries that host or republish speech are protected against a range of laws that might otherwise be used to hold them legally responsible for what others say and do.
The law does not disincentivize internet services from moderating content, as I originally thought. Instead, it allows services to moderate content in the first place, enabling more content to be posted on the internet, and thus more speech expressed in the world.
—
This isn’t very complicated conceptually, but it took me an entire morning to understand clearly. Because of this, I’m not surprised that the law is frequently misinterpreted by both Democrats and Republicans; the President’s recent executive order, for example, chaotically (and unconstitutionally) attempts to repeal it.
By better understanding the law, I hope I can participate more effectively in the many debates surrounding it. The conversation about internet speech is only getting started, so now is the time to prepare.