Eugene Volokh on Section 230, Generative AI and the First Amendment

Eugene Volokh, founder of the legal blog The Volokh Conspiracy, professor at the UCLA School of Law and fellow at the Hoover Institution, a public policy think tank, is known, among other things, for his research on the First Amendment and its application to contemporary questions such as online speech and generative AI.

Volokh, who has taught at UCLA for more than 30 years and previously worked as a computer programmer, notably published “Large Libel Models? Liability for AI Output” last summer. The article explores whether the creators of AI programs can be held liable for defamation based on the programs’ output.

This question was briefly discussed by the Supreme Court in its recent decision in Moody v. NetChoice, in which the court declined to rule on the constitutionality of two controversial social media laws in Florida and Texas but laid out a framework under which content moderation by social media companies is akin to the editorial decisions made by newspapers.

In his concurrence, Justice Samuel Alito noted that platforms are “beginning to use AI algorithms to help them moderate content.” He then raised the question: “Are such decisions equally expressive as the decisions made by humans? Should we at least think about this?”

In an interview with First Amendment Watch before the NetChoice decision, Volokh discusses the importance of Section 230, whether generative AI outputs — what chatbots produce in response to user queries — are protected under the First Amendment, and whether AI companies can be held liable for defamation based on the information their chatbots may relay.

Editor’s note: This interview has been edited and condensed for length and clarity.

FAW: Section 230 is often described as “The Twenty-Six Words That Created the Internet,” but its application in First Amendment law is often confusing. In layman’s terms, what is Section 230, and why is it important in a free-speech conversation?

EV: Section 230 is complicated, but let’s start with a big-picture summary. Let’s say that there is an online platform, Facebook or Twitter or something like that, on which somebody is posting material. Section 230 provides that the platform is generally not liable in lawsuits, or, for that matter, in criminal prosecutions under state law, based on what its users post. Its users are liable for what they create, but the platform is not liable unless it itself creates the material. So, for example, if I tweet something out, and there’s a lawsuit — it could be for libel, because the person about whom I’m tweeting is alleging that the statement is false and defamatory — he can sue me, but he can’t sue X. Likewise with Facebook.

So, another way of summarizing it is that internet companies are generally not liable for material that is posted or created by other people. That’s true for platforms. It’s also true for other companies. For example, Google is generally not liable for linking to sites in its search results, or even briefly excerpting sites in search results, even if those sites are defamatory. Without Section 230, it would have been a lot harder to have social media platforms, a lot harder to have search engines, because there might be too much of a risk of liability for them. That’s an oversimplification, but that’s kind of the big picture. There are some exceptions to Section 230. For example, for intellectual property law, like copyright and trademark law, platforms are liable if material posted on them is copyright infringing, although only once they’ve been put on notice of the infringement. There are also exceptions for certain kinds of federal criminal law enforcement. So that’s one portion of Section 230.

A second portion of Section 230 has to do with immunity for platforms, not for their decisions to keep things up (that’s what I’ve just been talking about), but for their decisions to take things down. So if a platform decides to block certain material or remove certain material, it’s also not liable. Now, that historically hasn’t been as important, because generally speaking, even without Section 230, by and large, platforms wouldn’t be liable for blocking things or taking things down, because state law generally doesn’t prohibit such removal. But if a state does decide to prohibit platforms from blocking certain materials, Section 230 might — or might not, it’s complicated — provide immunity from those state laws.

So that’s what Section 230 does, and that’s why it’s important. It provides protection for platforms, gives them a lot of discretion about what to allow and what not to allow, and probably helps make modern social media and search engines possible. At the same time, it also means that a lot of damage to people’s reputations, people’s privacy and such is ultimately not remediable. If somebody is being libeled, say, on Facebook or on Twitter or some such, they often can’t sue the person who’s doing the libeling: that person may be in a different country, may be anonymous, may not have any money, so they can’t sue or threaten to sue in order to try to get the material removed. And they can’t sue the social media platform, because the social media platform has immunity. So Section 230 obviously has pluses and minuses, and that’s why there’s a lot of debate about it.

FAW: Are generative AI outputs entitled to First Amendment protections? How so? Does the person who receives the response from the chatbot need to publish it for these protections to apply?

EV: Let’s start with Section 230. Let’s say that ChatGPT outputs some material that damages someone’s reputation. There are two lawsuits pending right now against generative AI companies for libel, one against OpenAI and one against Microsoft, claiming that in response to ordinary queries those companies were outputting libelous material. Section 230 means that online companies aren’t liable for material that is created by other people. But the way that generative AI works is that it’s generative: it itself creates certain material. It isn’t just quoting material from other sites, and it’s not just linking to them the way that Google Search might. Rather, it’s actually assembling this output and then showing it to the users. So the lawsuits against those companies are not over what other people have created; they’re over what those companies’ software creates itself, and therefore Section 230 immunity just doesn’t apply.
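To make that quoting-versus-creating distinction concrete, here is a minimal, purely hypothetical sketch. Nothing in it reflects any real search engine or chatbot; the names, data and behavior are invented for illustration. The first function hands back third-party material, which is what Section 230 immunity covers; the second composes new text, which, on the analysis above, the operator itself creates.

```python
# A minimal, purely hypothetical sketch of the distinction described above.
# The names, data and behavior are invented; neither function reflects any
# real search engine's or chatbot's actual architecture.

THIRD_PARTY_PAGES = {
    "example.com/post1": "A user-written post that might be defamatory.",
}

def search_style_result(query: str) -> str:
    # Links to and excerpts material someone else created; that
    # third-party material is what Section 230 immunity covers.
    url = "example.com/post1"  # pretend this page matched the query
    return f"{url} - {THIRD_PARTY_PAGES[url][:60]}"

def generative_style_result(query: str) -> str:
    # Composes new text rather than quoting a stored document, so the
    # operator itself "creates the material" and, on this analysis,
    # Section 230 immunity would not apply to the output.
    return f"Here is a newly composed answer about {query}."

print(search_style_result("post1"))
print(generative_style_result("Section 230"))
```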

Now the First Amendment is a different matter. There is a debate about whether AI output is protected by the First Amendment. So some people say, “Look, the First Amendment is for the speech of people, human beings.” It could also apply to the speech of, say, corporations like the New York Times Company, or some business corporation, but that’s because those corporations consist of people, and anything that’s said by a corporation is actually the speech of a person. So the argument would go, “AI output is not the speech of people, it’s speech created by software, and therefore there should be no First Amendment protections.”

I don’t think that’s quite right, for a few related reasons. One is that the Supreme Court has made clear that the First Amendment protects speech in part because of the interests not just of speakers, but of listeners, of readers, of viewers. So for example, if a foreign government sends material into the U.S., maybe that foreign government doesn’t have First Amendment rights, but Americans who want to read that material do have such rights, and as a result, the First Amendment protects it. Second, the First Amendment protects the rights of people to create their own speech and to gather information in order to create their own speech, and AI output is often a useful means of gathering information. So just as the First Amendment protects the right to photograph public officials, like police officers in public places, as many lower courts have held, it would protect the right to use generative AI in order to create material that you yourself could then edit and distribute. And third, I do think that AI companies exercise their First Amendment rights by crafting their software in a way that outputs certain kinds of messages and not others. We know, and it’s quite controversial, that many AI companies do indeed deliberately try to block the creation and distribution of certain material that they find offensive, and to deliberately create material that they think is good. So they may very well have their own First Amendment rights.

So I do think that even though there’s no Section 230 protection, there is First Amendment protection for AI companies. But there are exceptions to the First Amendment, like for defamation, such as libel. So while Section 230, where it applies, would preclude liability for defamation, it doesn’t apply here, and the First Amendment does not preclude liability for defamation. The First Amendment coexists with defamation law, at least to a certain degree, so lawsuits against AI companies for defamation would not be blocked either by Section 230 or by the First Amendment, assuming the elements of defamation are properly shown.

OpenAI and ChatGPT logos are seen in this illustration taken, February 3, 2023. (Reuters/Dado Ruvic)

FAW: What about inputs that are entered into chatbots by the users themselves? What if a user asks the chatbot a leading question that defames someone?

EV: So let’s say I pose a leading question to the chatbot, one that falsely asserts that someone is guilty of some crime. That itself might not be defamation, because defamation is generally communication to a person of false and reputation-damaging information. So if I just communicate something to a computer, that’s not defamation. On the other hand, if I know that the computer is using those questions as part of its training data — I know in some situations AI companies use user queries as part of training data, and sometimes don’t — then maybe there might be liability for defamation for me simply posing that kind of question. But setting that aside, my merely submitting this information to, say, ChatGPT isn’t defamation, unless, again, it’s using it as training data. But let’s say it outputs something saying, “Yes, yes. What you’re saying is indeed right, and let me elaborate on it,” and then develops more and more of this falsehood. Well, then, yes, if I then distribute that falsehood to the public, I would be guilty of defamation. I would be guilty of libel, assuming that the statement is false and reputation-damaging and I know that it’s false; the required mental state depends on the particular kind of statement. So in principle, yes, there could be such a lawsuit against me, but my guess is that that’s a lot less likely to happen than the kinds of lawsuits we’re already seeing, where the lawsuit is against the AI company and the AI company is distributing material that it itself created.

FAW: If the companies are protected under the First Amendment based on the speech relayed by their chatbots, could that pose any threats to our information environment?

EV: Well, it depends on what you mean by threats to the information environment, right? If the supposed threat to the information environment is that the companies are, say, distributing libelous material, well, libel law can step in, impose liability for that and deter such material. Now, if the threat to the information environment is that the companies are spreading material that is false but not about a particular person, let’s say falsehoods about the government, then they indeed would not be liable, I think, given First Amendment protection, because the First Amendment protects falsehoods about the government, just as you or I wouldn’t be liable under the First Amendment, just as newspapers wouldn’t be liable under the First Amendment. One might ask, “Well, isn’t the New York Times a threat to the information environment because it can publish all sorts of falsehoods about the government without the risk of liability?” And the answer is, if it is such a threat, it’s a threat that’s protected by the First Amendment, because the Supreme Court has concluded that allowing liability, whether criminal or civil, for falsehoods about the government (so-called seditious libel) is more dangerous than allowing such speech. So likewise, if the threat to the information environment is that they spread ideas that some people think are wrong, for example by expressing views that some people disapprove of about race or sexual orientation or gender identity or whatever else, well, that’s a constitutionally protected threat to the information environment, if threat to such environment it is. The government generally can’t say these ideas are threatening and therefore should be restricted. And I don’t think it can say that with regard to AI companies any more than it could say that with regard to media companies.

A screenshot of Google’s search recommendations when a user types “What is Section 230?”

FAW: There has been some controversy over whether computer code, such as what is written to design a website, is protected expression under the First Amendment. Do you believe this debate is important to the question of whether Section 230 protects the “speech” of generative AI?

EV: I don’t think so. There may be some indirect connections in certain ways, but I do think they’re basically two separate debates. The output of generative AI is text of a sort that, to the extent it causes harm, causes harm precisely because of what it communicates. In this respect, it’s very much like something that’s published by you or by me or by The New York Times. Now, when it comes to the debate about code being speech, one of the important differences is that when the government says, “We want to restrict distributing certain code,” it’s not because of what the code communicates, it’s because of what the code does: when executed, the code causes computers to do certain things. That’s a very different kind of question than a concern that the output of AI companies, when read, persuades humans to do or think certain things.

FAW: The Supreme Court has been inundated with questions about Section 230 and social media. Do you expect questions related to Section 230’s application to generative AI to land on the docket in the near future? How difficult is it for the Supreme Court to grasp this technology and apply First Amendment and Section 230 principles to it? Technology changes so quickly. Are there risks involved in formulating First Amendment principles around something that is constantly evolving?

EV: There’s always a danger. There’s always a danger courts will get it wrong as to anything. But you know, if there’s a law, whether it’s libel law or Section 230 or the First Amendment, someone’s got to apply it. At some point, that’s going to be the judges. And if someone comes to court and says, “Oh, well, you know, I’m protected from liability by Section 230,” a judge can’t say, “Oh, you know, I’m not really an expert on computer technology, and it’s ever changing. So I’m just not going to apply Section 230 or I’m not going to interpret Section 230.” No, we have judges who, in our system, are generalist judges, and they’re supposed to make decisions. They’re supposed to make decisions about patent law, for example; sometimes even juries have to make decisions about patent law, when there are complicated technical questions having to do with biochemistry or with computer technology or whatever else. And we ask them to make these decisions, however imperfectly. So while, of course, it’s possible that in some future case involving Section 230 or AI or the First Amendment the judges just won’t understand the technology and will end up reaching the wrong result, their job is to try their best, and they have tools to help them better understand it. They will read the briefs. The parties will try to explain it to them. There could be friend-of-the-court briefs from experts. If it’s a factual question that’s resolved at trial, there could be testimony or affidavits by expert witnesses. Judges have to do the best they can.
