Vanderbilt’s Jacob Mchangama on the First Amendment Implications of Generative AI

Jacob Mchangama, the founder and executive director of The Future of Free Speech at Vanderbilt University. Photo courtesy of Jacob Mchangama

The rise of generative artificial intelligence has led to questions about its First Amendment implications — like its use by journalists or its application in defamation law — but it remains unclear how the nation’s courts will consider its potential impacts on the marketplace of ideas. 

The technology itself does not have rights, but humans use technologies such as radio, television and the internet for their own expression.

The Foundation for Individual Rights and Expression has stated that, because of these prior applications of First Amendment protection, “the use of artificial intelligence to create, disseminate, and receive information should be protected.”

“Any government restriction on the expressive use of AI needs to be narrowly tailored to serve a compelling governmental purpose, and the regulation must restrict as little expression as is necessary to achieve that purpose,” FIRE wrote.

First Amendment Watch spoke with Jacob Mchangama, the founder and executive director of The Future of Free Speech at Vanderbilt University, about artificial intelligence and his latest piece in TIME magazine, “The Future of Censorship Is AI-Generated.” Mchangama discussed how defamation cases may arise from generative AI, his concerns over government involvement in censoring certain AI-generated content, and the importance of skepticism when using AI and evaluating its capabilities.

Editor’s note: This interview has been edited and condensed for length and clarity.

FAW: Why do you believe that AI-generated content could lead to increased censorship of ideas? 

JM: Because what we see is that there’s enormous pressure to create guardrails to filter information. And as generative AI is being integrated into other products like search, email and word processing, those guardrails could potentially have a huge influence on what can be accessed, even filtering out things that we’re not aware of as human beings.

FAW: Do you think AI-generated content will lead to increased First Amendment scrutiny, or has it already?

JM: No doubt this will be a key issue for future First Amendment jurisprudence, and I’m sure there are cases already pending that will have a huge bearing. Of course, I think the easy case is where the government imposes an obligation on deployers or developers to filter specific information, because that interferes with a human user’s ability to access information, which is part of what the First Amendment protects. Obviously, an AI system does not have the same rights as a human being, but hindering access to information, I think, is a potentially fruitful avenue when it comes to the First Amendment. But then there are all kinds of other issues, such as Section 230, where I’m not an expert, but things like that will have to be determined going forward.

FAW: A Georgia talk-show radio host sued OpenAI, the company that owns ChatGPT, for libel in June after the artificial intelligence chatbot shared false information about the host with a journalist. Should AI-generated content have press protections? How would that play out in defamation cases?

JM: It’s a very interesting case when it comes to defamation, and I guess we’ll see some test cases coming out pretty soon. If these systems completely replaced search, and you search for a certain person and it provides defamatory content, I guess it depends on the specific nature of what is returned and what is written, but there might be some cases where it should be obvious to a reasonable reader that what it returns is not true. And also, is it consistently getting things wrong, or is it just a one-off hallucination? Because if I search my own name and GPT says I’m a mass murderer, but someone else does it and it doesn’t return that, well, then it might not be as much of a problem as if it did it consistently for anyone who tried to look me up. Those are some hard cases where I don’t think I have a great answer as to where the lines should be drawn and who should be liable. But I guess what I would say is that I think human beings should maintain a degree of skepticism, that we don’t treat AI systems as oracles and truth machines. They’re pieces of technology that are, in some ways, a work in progress. They can do incredible things, and they’re likely to get better, but if I prompt GPT about something and it returns a reply, I shouldn’t treat it as the capital “T” truth. It should augment, not replace, human reasoning. So it might help me in my research, but ultimately I think human beings should do more to vet various types of information, follow up with critical questions, where the system might itself admit, “Oh, yes, I actually missed something,” or use alternative sources.

OpenAI and ChatGPT logos are seen in this illustration taken February 3, 2023. (Reuters/Dado Ruvic)

FAW: Where do you think the guardrails should be? Should AI-generated content have the same protections as other types of expression under the First Amendment?

JM: Well, obviously the companies have the right to adopt whatever guardrails they want for themselves. It becomes a First Amendment problem if the government were to say, “You have to create guardrails that filter out criticism of the government or specific information that the government deems problematic.” So, that would be the First Amendment problem. I think a larger problem, which may not be a First Amendment issue as such, is whether the future is one where you have a number of dominant players in the generative AI field, their products have guardrails that filter inconvenient information, and that is increasingly being incorporated into all kinds of modes that mediate how human beings navigate the digital landscape. Again, from search to email, from word processing to just ordinary chatbot sessions. And I think it’s ironic that, in many ways, current chatbots seem to have guardrails that are more restrictive than those of social media platforms. Because if I interact with an AI system, it’s not public. So if the AI system were to generate content that is offensive or discriminatory, it would not be public until I post it somewhere, whether online or on social media; it’s just me interacting with an AI system. So in that sense, from a free speech and access to information point of view, I think the case is for fewer restrictions when it comes to generative AI than for social media platforms that are outward facing.

FAW: Do you think there is a point at which the government may get involved with AI creators to stifle certain viewpoints? Is this similar to issues arising currently with freedom of speech on social media platforms?

JM: To me, when it comes to the potential harms of generative AI, and AI in general, one thing is sort of the Chinese model of high-tech surveillance and censorship, matching AI with facial recognition, biometric data and so on. Then, of course, there’s the fear of AI systems becoming so powerful that they outstrip human intelligence and capabilities. But those, to me, seem very different from concerns about disinformation or hate speech and so on. And I tend to view generative AI as more general purpose. Some people worry a lot about these chatbots, for instance, creating disinformation, and that may or may not be a threat to democracy, but if we take that logic to its conclusion, you might also say, “Well, right now, I’m looking at my laptop, and on it I have Word and I have email clients and so on, and that allows me to write and disseminate false information and hate speech at a much quicker rate. I also have various programs that I’m not very proficient in using, but programs that could help me create compelling graphics, graphs and images and so on, that could help create and disseminate disinformation.” And so should that be a requirement? Should Microsoft potentially be liable, or have to include guardrails in Word or Outlook, or should Google create guardrails in Gmail or Google Docs that safeguard against disinformation and hate speech? I think most people would be creeped out if you said that Word would not allow you to write certain things, or that Outlook would not allow you to send an attachment with a Word file if it violated its content policies. I think a good question is to what degree generative AI is really different from these scenarios, especially as generative AI becomes integrated with these products.

Right now, if you ask Google, they’ll say that 99.9% of spam is filtered out of your Gmail inbox, and that’s through AI. So, presumably, Google could also use Gemini or other AI tools to say, “Well, emails containing certain information about elections or minorities or whatever should also be filtered out,” and that, to me, is a scary scenario. And if you said that generative AI should not be able to create disinformation, however you define that and whoever gets to define it, because that’s obviously another problem, it’s not clear to me whether that can also be extended as generative AI is incorporated and integrated into email, word processing and all kinds of other areas, which is where it will inevitably lead. And also, again, it’s not clear to me that we should expect anything different: just as you can use your laptop and its word processing to write literature that can win the Nobel Prize, you can also use it to write crude and primitive stuff. You can use generative AI to prompt it to write things that are brilliant and things that are crude and misleading. But should that ultimately be the responsibility of the company, or should we also have trust in human beings to basically be the ones responsible? We want AI systems to have guardrails that focus on narrower categories of real and serious harms, such as bioweapons, rather than, say, “misinformation.” But when it comes to very subjective categories like hate speech and misinformation, I think it becomes very difficult to have any meaningful guardrails, and they will almost inevitably err on the side of over-removal, and that’s what we tried to demonstrate in the TIME piece. Anyone who plays around with Gemini and GPT will see that there are a lot of things where it refuses your prompts based on very, very broad definitions of harm.

FAW: Some chatbots allow users to report the responses they generate. Do you think this could change the standard of what a “reasonable” person would find inappropriate?

JM: Certainly possible, especially if we just uncritically think of AI systems as truth machines, or if we get to a place where we become lazy and uncritical as human beings because of the ease provided by chatbots: “Oh, I want to know something about this topic. Can you please provide me information, or can you write this up for me?” I think we want to preserve a duty on human beings to maintain some skepticism and do some work for themselves in terms of vetting the information that they are confronted with.