Commentary and Analysis | Section 230

Electronic Frontier Foundation’s David Greene Weighs In on Section 230 and Online Speech

A screenshot of Google’s search recommendations when a user types “What is Section 230?”

By Susanna Granieri

The Supreme Court will hear oral arguments in two cases pivotal to online speech: Gonzalez v. Google on Feb. 21 and Twitter, Inc. v. Taamneh on Feb. 22. Both cases question whether social media platforms and search engines can be held liable for speech hosted on their sites, and whether recommendation algorithms can be held responsible for aiding terrorist activity.

In 2015, the Islamic State group, also known as ISIS, carried out coordinated terrorist attacks in Paris, one of which killed Nohemi Gonzalez. Her father, Reynaldo, claimed that videos posted to YouTube were suggested to users and amplified by Google’s algorithms, ultimately assisting ISIS in recruiting members. He sued Google under the Anti-Terrorism Act.

The question before the court in Gonzalez focuses on Section 230 of the Communications Decency Act of 1996, which states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

In Gonzalez, the Supreme Court justices will examine the scope of Section 230’s protections. The question before the court asks whether Section 230 protects digital platforms that host online speech and use recommendation algorithms to target specific users, or whether it only limits the liability of digital platforms “when they engage in traditional editorial functions” like content moderation. This is the first time the Supreme Court will take up the issue.

Twitter v. Taamneh doesn’t directly focus on Section 230, but instead asks whether Twitter, as a social media platform that hosts speech, can be held liable under the Anti-Terrorism Act.

Jordanian citizen Nawras Alassaf was killed in a 2017 ISIS-affiliated attack in Istanbul. His family sued Twitter, Google and Facebook for failing to control and monitor terrorist content shared on their sites, as well as for providing a way for the extremist group to create and promote targeted ads that financially benefited the companies.

The court will answer 1) whether Twitter “knowingly” aided terrorism because it didn’t remove or moderate certain content on its platform and “allegedly could have taken more ‘meaningful’ or ‘aggressive’ action to prevent such use,” and 2) whether Twitter is liable for “aiding and abetting” under the Anti-Terrorism Act even though its “services were not used in connection with the specific ‘act of international terrorism.’”

Both cases have the potential to shape the way online speech operates and to define how free speech protections apply in the digital realm.


First Amendment Watch spoke with David Greene, senior staff attorney and civil liberties director at the Electronic Frontier Foundation, about Section 230 and what these cases may mean for the future of the internet. Greene is an adjunct professor at the University of San Francisco School of Law, where he teaches First Amendment and media law. He was also a founding member of the Internet Free Expression Alliance.

FAW: Why is Section 230 so important and considered to have “led to the creation of the internet as we know it today”?

DG: The beauty of online communication is it really allows anyone who has internet access in the world to communicate with anyone else who has internet access in the world fairly easily. That can either be one-to-one communications or you could publish to lots of people broadly.

The reason that can happen is because there are numerous levels of intermediaries between the person speaking and the people who receive their speech. It has always been that way, because most people don’t have the technical know-how or equipment to do it themselves. Most people don’t know how to code websites, or they don’t have the server capacity to deliver content or to maintain publications. We rely on a ton of intermediaries, whether they’re things that seem very technical and deep in the stack, like internet service providers, content delivery networks, domain name registrars and DDoS protection services, things like that, all the way up to things that are more user-end intermediaries, like email services, messaging services, social media and search engines.

There are all these levels of intermediaries, and Section 230 was a recognition that if each of those intermediaries was going to be potentially liable for all of the speech flowing through them, then they would either not do it, and it wouldn’t be broadly available, or they would have to have some system of reading and checking everything before it went through. That would 1) slow down communications; 2) be very privacy invasive, as a lot of people use this not just for mass publication but also for private communications; and 3) just increase expense and inhibit the development of these services. It was that recognition which led to the passage of Section 230, saying we’re generally, with some significant exceptions, going to immunize these intermediaries to really try to significantly lessen the legal risks they face from carrying other people’s speech. That’s why it’s so important. Were intermediaries to bear that liability, they wouldn’t operate the way they do.

FAW: Section 230 is described as “The Twenty-Six Words that Created the Internet,” and legal scholars are concerned for its fate. Do you agree with the concern that these Supreme Court decisions could change the way online speech operates under Section 230 protections?

DG: Yes, I tend to agree with that. It certainly has the potential to really shift the internet away from user-generated content to something else, because what’s most likely to happen is that the fear of liability will just make it much harder for intermediaries to pass through user-generated content. Obviously it depends on what the Supreme Court does, but at least potentially what they’re being asked to do is to open up the possibility of liability for what are really routine and, frankly, really necessary functions of the internet in terms of hosting material and making people who want to find it aware that you’re hosting it. That’s a function that everybody performs, at least all the user-end services do, whether they’re social media sites, review sites or even websites that just have comments, so it’s just going to make all of that much more difficult. Now, obviously, it depends on exactly how the Supreme Court deals with that, and then maybe whether it gets replaced with something on a statutory basis, so there are still a bunch of variables. Right now the immunity provided by Section 230 is part of the architecture of the internet as we know it, and if that changes, we potentially have to essentially rebuild it.

FAW: Section 230 was passed in 1996 before social media existed, and now there are billions of social media users globally. Does this expansion of social media and digital speech impact the discussions about Section 230 reform?

DG: It’s always a good idea to reexamine whether legislation is still relevant or needs to be fixed. I’m not saying that Section 230 is something that’s written in stone and can never ever be changed, but I do think the concept of intermediary immunity or just very, very limited liability for intermediaries is a really important concept that is probably more relevant now than it was in 1996. Because as you said, the amount of communication is much greater. The number of users globally is much greater now, and even the number of intermediaries is much greater. We didn’t really have social media as a means of mass publication in 1996 the way we do now. We had bulletin boards and message boards and services, like Prodigy and CompuServe, but it wasn’t the same level of adoption as it is now. I agree that it’s always worth looking at things and some things have changed, but I still think that it’s even more important now than it was in 1996 that there be very limited liability for intermediaries.

FAW: How are Gonzalez v. Google and Twitter v. Taamneh similar? Twitter doesn’t specifically mention Section 230, but doesn’t the question of whether Twitter can be held liable for hosting this speech implicate Section 230 in some way?

DG: The cases are similar because they both deal with Anti-Terrorism Act claims. They’re different because the question that the court certified in Gonzalez is about the interpretation of Section 230, so it’ll be the first time the court looks at it. The question the court certified in Twitter v. Taamneh is actually quite different. Twitter actually filed the cert petition in Twitter v. Taamneh because it disagreed with the Ninth Circuit’s interpretation of the Anti-Terrorism Act. If the Supreme Court agrees with Twitter, it might not need to decide the 230 issue in Gonzalez; it could decide Gonzalez on the same Anti-Terrorism Act issue as it does Taamneh. Also, the relationship between the cases is important because one of the important functions of Section 230 is that it serves as a procedural protection against claims that are meritless because they run contrary to the First Amendment; the intermediaries would be protected by the First Amendment even if Section 230 didn’t exist. What Section 230 does is provide a very easy way of resolving these cases without the intermediary, and maybe even the speakers, having to go through the expense of defending litigation. In that application, Section 230 functions very much like an anti-SLAPP statute might, where you’re just providing a way to identify free speech cases and have courts resolve them very quickly. The way that works in Twitter v. Taamneh is that even if Section 230 did not protect Twitter, Twitter still wouldn’t be liable, because the question the court is considering is, I think, most likely going to be decided on the statutory construction of the Anti-Terrorism Act. I think it’s a necessary interpretation because the First Amendment limits the extent to which a speaker or publisher can be liable because of an attenuated connection with someone else’s speech, and that’s the case here. Twitter’s relationship to the terrorist act is so attenuated that the First Amendment would bar the claim. That’s an example of where Section 230 is useful, because it allows for an easy dismissal of those cases. Again, similar to anti-SLAPP statutes. I think that’s how Twitter v. Taamneh is best seen in the Section 230 realm.

The cases were heard together at the Ninth Circuit, but decided with separate opinions. When Gonzalez filed the cert petition with the Supreme Court, Twitter filed a conditional cert petition that said if you grant Gonzalez, you should also grant this one because it would be important to decide. Twitter had won on the Section 230 issue, as Google had in Gonzalez below, but Twitter had lost on the substantial causation issue; the court found that it could not be resolved on summary judgment. I don’t think Twitter would have even petitioned for cert had Gonzalez not taken his case up to the court, so that the court would understand that you have a Section 230 issue but you also have a very related First Amendment issue. That really is the way Section 230 actually does reflect a lot of First Amendment law, or at least is certainly an expression of First Amendment values.

FAW: Gonzalez is focused on the immunization of the platform with regard to its recommendation of content, and most would agree that allowing ISIS recruitment videos on YouTube is problematic. But wouldn’t a ruling for Gonzalez then impact the protections for algorithmic recommendations more broadly?

DG: You’re always going to find bad use cases. It’s always going to be that way with immunities. It’s a judgment that on balance, and usually on substantial balance, it’s better to immunize, even though that means someone whose decision-making we don’t like is going to be immunized as well. Obviously the court’s interpretation of this statute is going to affect more than just this particular decision by YouTube. I think the broader question about whether Section 230 protects recommendation schemes, or amplification schemes, is a really important issue, and I think Section 230 has to protect recommendations and amplification in order to fulfill its purposes. Section 230 has been interpreted to really cover all traditional editorial activity, so that anyone who is in the position of being an intermediary can perform any of the traditional editorial activities, again without fear of liability. I think that is the proper interpretation. Recommendation systems and amplification, especially recommendation systems, really are traditional editorial activities. It’s not unique to the internet. Certainly the way the internet does it is different than what may have happened otherwise, but it’s very traditional for publishers and other types of speech intermediaries to recommend or to target some content to certain audiences. We see this with newspapers and magazines in where they decide to place articles, what goes on the front page, what gets the big headline, what gets a photo illustration; all these decisions are very intentional. What advertisements go on the sports page, what goes in a different section; all of that is about trying to put your content in the place where it’s most likely to reach the people you think would most want to read it. So that type of targeting or recommendation system is really traditional. Bookstores do this with where they place books, what they decide to put on display in the window or on the table, where they shelve something and in which section. That’s just a traditional, and a really socially productive, part of being an intermediary. Section 230 should cover that because I think it is a valuable role for intermediaries to play. I think most people want those services, and I do think that was what was contemplated by the law originally as well.

FAW: If the court sides with Gonzalez, what would that mean for recommendation algorithms? 

DG: It’s hard to imagine an internet without recommendations. I think someone could say, well, they don’t have to do things like “we recommend you watch this,” but everything they do is designed to direct people to content they want to find. In the briefs, Gonzalez tries to distinguish search engines because there the users are actually affirmatively requesting the information. But many recommendation systems are based on people opting in to receiving recommendations. I think there are so many things that could be, and probably are rightly, considered recommendation systems or targeting. It’s hard to imagine what things would look like without them. Someone may be OK with their social media feed just being in purely chronological order, but I don’t know if people are OK with their search results being that. Trying to sort through the world’s content would be difficult. How do you create a site that isn’t in some way recommending content to people who come to your site? It’s hard to imagine. Obviously big publishers would have to adjust to it, but it’s really hard to imagine what that would look like. Take a larger site like TikTok, for example; the thing that makes it different from other social media sites is that it recommends content. We’ve never had to live in a world where there’s a lot of pre-filtering and review of all content before it gets uploaded and published. Again, there are lots of reasons why that’s bad.

FAW: Gonzalez’s petition argues that a search engine like Google should receive different protections under Section 230 than a social sharing platform like YouTube. What do you think of this argument?

DG: I don’t think there’s anything in the language of the statute that says they should be considered separately. I find it very difficult to square the argument that Gonzalez presents, trying to distinguish how Google recommends material when it responds to a search query from how it recommends material as part of YouTube. There are two problems: 1) I don’t see that distinction in the language of the statute, and I don’t think you can make it from the language of the statute, and 2) I don’t think it makes any technological sense. The argument that Gonzalez makes has to do with the idea that when you’re hosting material, because you are providing a URL that goes to your own site, you are actually somehow creating the content. But having a unique URL is just such a central part of how the internet works; I don’t know that it’s a limiting factor. When I read that, what I thought is that they realized they could lose if they couldn’t answer the question: Does this mean search engines will be liable for returning search results? That’s what it read like to me, and they tried to make an argument to distinguish search engines, which I don’t think is a very convincing argument.

Gonzalez v. Google: Petition for Writ of Certiorari

Gonzalez Question Presented to Supreme Court

Twitter, Inc. v. Taamneh: Conditional Petition for Writ of Certiorari

Twitter Questions Presented to Supreme Court
