Eugene Volokh, founder of the legal blog The Volokh Conspiracy, a professor at the UCLA School of Law and a fellow at the Hoover Institution, a public policy think tank, is known, among other things, for his research on the First Amendment and its application to contemporary questions such as online speech and generative AI.
Volokh, who has taught at UCLA for more than 30 years and previously worked as a computer programmer, notably published “Large Libel Models? Liability for AI Output” last summer. The article explores whether the creators of AI programs can be held liable for defamation based on the programs’ output.
This question was briefly discussed by the Supreme Court in its recent decision in Moody v. NetChoice, in which the court declined to rule on two controversial social media laws in Florida and Texas but set out a framework treating content moderation by social media companies as akin to the editorial decisions made by newspapers.
In his concurrence, Justice Samuel Alito noted that platforms are “beginning to use AI algorithms to help them moderate content.” He then raised the question: “Are such decisions equally expressive as the decisions made by humans? Should we at least think about this?”
In an interview with First Amendment Watch before the NetChoice decision, Volokh discusses the importance of Section 230, whether generative AI outputs — the responses chatbots provide to user queries — are protected under the First Amendment, and whether AI companies can be held liable for defamation based on the information their chatbots relay.
Editor’s note: This interview has been edited and condensed for length and clarity.
FAW: Section 230 is often described as “The Twenty-Six Words That Created the Internet,” but its application in First Amendment law is often confusing. In layman’s terms, what is Section 230, and why is it important in a free-speech conversation?
EV: Section 230 is complicated, but let’s start with a big-picture summary. Let’s say that there is an online platform on which somebody is posting material. Facebook or Twitter or something like that. Section 230 provides that that platform is generally not liable in lawsuits, or, for that matter, in criminal prosecutions under state law, based on what its users post. Its users are liable for what they create, but the platform is not liable unless it itself creates the material. So for example, if I tweet something out, and there’s a lawsuit — it could be for libel, because the person about whom I’m tweeting is alleging that the statement is false and defamatory — he can sue me, but he can’t sue X. Likewise with Facebook.
So, another way of summarizing it is that internet companies are generally not liable for material that is posted or created by other people. So that’s true for platforms. It’s also true for other companies. For example, Google is generally not liable for linking to sites in its search results, or even for briefly excerpting sites in search results, even if those sites are defamatory. Without Section 230, it would have been a lot harder to have social media platforms, a lot harder to have search engines, because there might be too much of a risk of liability for them. That’s an oversimplification, but that’s kind of the big picture. There are some exceptions to Section 230. For example, there’s an exception for intellectual property law, like copyright and trademark law, so platforms are liable if material posted on them is copyright-infringing, although only once they’ve been put on notice of the infringement. There are also exceptions for certain kinds of federal criminal law enforcement. So that’s one portion of Section 230.
A second portion of Section 230 has to do with immunity for platforms, not for their decisions to keep things up (that’s what I’ve just been talking about), but for their decisions to take things down. So if a platform decides to block certain material or remove certain material, it’s also not liable. Now, that historically hasn’t been as important, because generally speaking, even without Section 230, platforms by and large wouldn’t be liable for blocking things or taking things down, because state law generally doesn’t prohibit such removal. But if a state does decide to prohibit platforms from blocking certain materials, Section 230 might — or might not, it’s complicated — provide immunity from those state laws.

So that’s what Section 230 does, and that’s why it’s important. It provides protection for platforms, gives them a lot of discretion about what to allow and what not to allow, and probably helps make modern social media and search engines possible. At the same time, it also means that a lot of damage to people’s reputations, people’s privacy and such is ultimately not remediable. If somebody is being libeled, say, on Facebook or on Twitter or some such, they often can’t sue the person who’s doing the libeling. That person may be in a different country, that person may be anonymous, that person may not have any money, so they can’t sue or threaten to sue in order to try to get the material removed. They can’t sue the social media platform because the social media platform has immunity. So Section 230 obviously has pluses and minuses, and that’s why there’s a lot of debate about it.
FAW: Are generative AI outputs entitled to First Amendment protections? How so? Does the person who receives the response from the chatbot need to publish it for these protections to apply?
EV: Let’s start with Section 230. Let’s say that ChatGPT outputs some material that damages someone’s reputation. There are two lawsuits pending right now against generative AI companies, one against OpenAI and one against Microsoft, for libel, claiming that in response to ordinary queries those companies were outputting libelous material. Section 230 means that online companies aren’t liable for material that is created by other people. But the way that generative AI works is it’s generative. It itself creates certain material. It isn’t just quoting material from other sites. It’s not just linking to it the way that Google Search might. Rather, it’s actually assembling this output and then showing it to the users. So the lawsuits against those companies are not over what other people have created; they’re over what those companies’ software creates itself, and therefore Section 230 immunity just doesn’t apply.
Now the First Amendment is a different matter. There is a debate about whether AI output is protected by the First Amendment. Some people say, “Look, the First Amendment is for the speech of people, human beings.” It could also apply to the speech of, say, corporations like the New York Times Company, or some business corporation, but that’s because those corporations consist of people, and anything that’s said by a corporation is actually the speech of a person. So the argument would go, “AI output is not the speech of people, it’s speech created by software, and therefore there should be no First Amendment protections.” I don’t think that’s quite right, for a few related reasons. One is that the Supreme Court has made clear that the First Amendment protects speech in part because of the interests not just of speakers, but of listeners, of readers, of viewers. So for example, if a foreign government sends material into the U.S., maybe that foreign government doesn’t have First Amendment rights, but Americans who want to read that material do have such rights, and as a result, the First Amendment protects it. A second is that the First Amendment protects the rights of people to create their own speech and to gather information in order to create their own speech, and AI output is often a useful means of gathering information. So just as the First Amendment protects the right to photograph public officials, like police officers in public places, as many lower courts have held, it would protect the right to use generative AI in order to create material that you yourself could then edit and distribute. And third, I do think that AI companies exercise their own First Amendment rights by crafting their software in a way that outputs certain kinds of messages and not others. We know, and it’s quite controversial, that many AI companies do indeed deliberately try to block the creation and distribution of certain material that they find offensive, and to deliberately create material that they think is good. So they may very well have their own First Amendment rights.

So I do think that even though there’s no Section 230 protection, there is First Amendment protection for AI companies. But there are exceptions to the First Amendment, like for defamation, including libel. So while Section 230 would have precluded liability for defamation, Section 230 doesn’t apply here, and the First Amendment does not preclude liability for defamation. The First Amendment coexists with defamation law, at least to a certain degree, so lawsuits against AI companies for defamation would not be blocked either by Section 230 or by the First Amendment, assuming the elements of defamation are properly shown.