Congress, the White House and the US Supreme Court are all turning their attention to a federal law that has long served as a legal shield for online platforms.
This week, the Supreme Court is set to hear oral arguments on two critical cases involving online speech and content moderation. Central to the arguments is “Section 230,” a federal law that has been heavily criticized by both Republicans and Democrats on various grounds but has been defended by tech companies and digital rights groups as critical to a functioning internet.
The 27-year-old statute has been cited by the technology companies involved in the litigation as part of their argument for why they should not have to face lawsuits alleging that they provided substantial assistance to acts of terrorism by hosting or algorithmically recommending terrorist content.
A series of rulings against the tech industry could significantly narrow Section 230 and its legal protections for websites and social media companies. If that happens, the Court’s decisions could expose online platforms to a new range of lawsuits over how they present content to users. Such a result would represent the most consequential limits ever placed on a legal shield that predates today’s largest social media platforms and has enabled them to swiftly dismiss many lawsuits.
And there may be more to come: the Supreme Court is still considering whether to hear several additional cases with implications for Section 230, members of Congress have shown renewed enthusiasm for rolling back the law’s protections for websites, and President Joe Biden called for the same in a recent op-ed.
Here’s everything you need to know about Section 230, the law known as “the 26 words that created the internet.”
Passed in 1996 in the early days of the World Wide Web, Section 230 of the Communications Decency Act was intended to foster startups and entrepreneurs. The text of the legislation recognized that the internet was in its infancy and risked being stifled if website owners could be sued for things posted by others.
One of the law’s architects, Democratic Oregon Senator Ron Wyden, has said that without Section 230, “all online media would have to face a bad-faith legal assault and pressure campaigns from the powers that be” trying to silence them.
He added that Section 230 gives websites the power to remove content they believe is objectionable by creating a “good Samaritan” safe harbor: under Section 230, websites are immune from liability for moderating content in whatever way they see fit, although the federal government can still sue platforms for violating criminal or intellectual property laws.
Contrary to what some politicians have claimed, the protections of Section 230 do not depend on a platform being politically or ideologically neutral. Nor does the law require a website to be classified as a publisher in order to “qualify” for liability protection. Other than meeting the definition of an “interactive computer service,” websites don’t need to do anything to reap the benefits of Section 230 — they apply automatically.
The central provision of the law holds that websites (and their users) cannot legally be treated as the publishers or speakers of other people’s content. In plain English, this means that any legal responsibility for publishing a given piece of content rests with the person or entity that created it, not with the platforms on which the content is shared or the users who reshare it.
The simple language of Section 230 belies its enormous impact. Courts have repeatedly accepted Section 230 as a defense against claims of defamation, negligence and other allegations. Over the years, it has protected AOL, Craigslist, Google and Yahoo, building up a body of law so broad and influential that it is considered the backbone of today’s internet.
“The free and open internet as we know it could not exist without Section 230,” wrote the Electronic Frontier Foundation, a digital rights group. “Important court rulings on Section 230 have stated that users and services cannot be sued for forwarding email, hosting an online review, or sharing photos or videos that others find objectionable. It also helps quickly resolve lawsuits that have no legal basis.”
In recent years, however, critics of Section 230 have increasingly questioned the scope of the law and proposed restrictions on the circumstances in which websites could invoke the legal shield.
Over the years, much of the criticism of Section 230 has come from conservatives who say the law allows social media platforms to suppress right-leaning views for political reasons.
By protecting platforms’ freedom to moderate content as they see fit, Section 230 shields websites from lawsuits that might arise from that kind of viewpoint-based content moderation, though social media companies have said they make content decisions based not on ideology but on violations of their policies.
The Trump administration tried to turn some of those criticisms into concrete policy that, if successful, would have significant consequences. For example, in 2020, the Department of Justice released a legislative proposal for changes to Section 230 that would create an eligibility test for websites seeking the law’s protections. That same year, the White House issued an executive order asking the Federal Communications Commission to interpret Section 230 more narrowly.
The executive order ran into several legal and procedural problems, notably that the FCC is not part of the judicial branch; that it does not regulate social media or content moderation decisions; and that it is an independent agency that, by law, does not take direction from the White House.
While Trump-era efforts to curtail Section 230 have failed, conservatives are still looking for opportunities to do so. And they are not alone. Since 2016, when the role of social media platforms in the spread of Russian election disinformation opened a national dialogue about the companies’ handling of toxic material, Democrats have increasingly opposed Section 230.
By protecting the freedom of platforms to moderate content as they see fit, Democrats have said, Section 230 has allowed websites to escape accountability for hosting hate speech and misinformation that others have identified as objectionable but that social media companies may or may not choose to remove themselves.
The result is a bipartisan hatred of Section 230, even if both parties can’t agree on why Section 230 is flawed or the policies that might properly replace it.
“I would be willing to bet that if we were to vote on a simple repeal of Section 230, it would clear this committee with almost every vote,” Democratic Senator Sheldon Whitehouse of Rhode Island said at a Senate Judiciary Committee hearing last week. “The problem is, where we bog down is, we want 230 plus. We want to repeal 230 and then have ‘XYZ.’ And we don’t agree on what the ‘XYZ’ are.”
The impasse has driven much of the momentum for changing Section 230 to the courts — in particular, the US Supreme Court, which now has the opportunity to dictate how far the law extends.
Technology critics have called for additional legal exposure and accountability. “The massive social media industry has largely grown up shielded from the courts and the normal development of a body of law. It is highly irregular for a global industry of enormous influence to be shielded from judicial inquiry,” the Anti-Defamation League wrote in a Supreme Court brief.
For the tech giants, and even for many of Big Tech’s fiercest critics, that is a dangerous prospect, because it would undermine what made the internet possible. It could unwittingly and suddenly put many websites and users in legal jeopardy, they say, and dramatically change how some websites operate in order to avoid liability.
Social media platform Reddit argued in a Supreme Court brief that narrowing Section 230 so that its protections no longer cover a site’s recommendations of content a user might enjoy would dramatically expand internet users’ potential to be sued for their online interactions.
“‘Recommendations’ are what make Reddit a lively place,” wrote the company and several volunteer Reddit moderators. “Users upvote and downvote content, thereby determining which posts gain prominence and which fade into obscurity.”
People would stop using Reddit, and moderators would stop volunteering, the brief argued, under a legal regime in which they face a serious risk of being sued for “recommending” a defamatory or otherwise harmful post created by someone else.
While this week’s oral arguments won’t be the end of the Section 230 debate, the outcome of the cases could lead to changes the internet has never seen before — for better or worse.