In May, two journalism professors — Daxton Stewart of Texas Christian University, who’s also an attorney, and Jeremy Littau of Lehigh University — published “The Right to Lie with AI?” The article focuses on the First Amendment issues raised by state laws that seek to curb false political speech created with various forms of deepfakes.
Since 2019, there have been 22 laws enacted nationwide to regulate political deepfakes, according to Public Citizen, a nonprofit consumer advocacy organization that tracks such legislation.
In their exploration of deepfake legislation, Stewart and Littau found countervailing “strong First Amendment protection for false political speech that may thwart most legislative efforts beyond mandatory disclaimers.”
In an interview with First Amendment Watch, Stewart and Littau discuss the difficulty of enforcing this type of legislation and the need for the involvement of tech companies in regulation efforts. Both agree that mandating the labeling of AI-generated content is the most effective way to police false political speech without running afoul of the First Amendment.
Editor’s note: This interview has been edited and condensed for length and clarity.
FAW: Can you describe the intersection of the First Amendment protections of political speech and anonymous speech with the threats of political deepfakes?
Stewart: Some of the laws that were starting to come out around 2018, 2019, some of the proposals, they all have huge First Amendment problems. I think every First Amendment scholar who looks at these recognizes that. Efforts to legislate political speech are immediately going to set off red flags, because we generally resist the government interfering with speech at all, much less political speech, which is a core protection of the First Amendment. Particularly after U.S. v. Alvarez, where the Supreme Court refused to create a new category of unprotected speech for lies, even deliberate lies. And so when a new wave of these laws came out, I couldn’t help but think, looking at them, “Are these, if challenged in court, going to pass First Amendment scrutiny?” Nobody’s been prosecuted under the laws that have been passed, and as far as I could tell, nobody has challenged these on First Amendment grounds. Where deepfake laws have been challenged, it’s been more in the revenge porn area. Revenge porn is a lot more about privacy and a whole different body of law, and courts have generally been pretty accepting of that; they have found the First Amendment is not violated by those kinds of laws about your privacy, your personal integrity and that sort of thing. But political speech is a different thing. We’re talking about false speech in political ads and campaigns. That has really, really strong protection and has for decades. So I saw the intersection of these being problematic. If they ever were to be challenged — that’s what really kind of inspired this paper — what would courts likely say about it?
And then we had this wave of laws come up, really last year, and particularly this spring, when 20 or 30 different legislatures were considering them and about [15] passed them in various iterations. You could see the potential challenges coming, or the potential for people to use these laws in a live election this summer and this fall, with cases going to court pretty quickly because of the First Amendment issues embedded in them.
Littau: One thing I’d add to that is that this is happening against a backdrop of discussion around Section 230. This falls on tech companies, who are under increasing scrutiny from pretty much all sides of the political spectrum about their role in moderating content. I would use the word “moderation,” they use the word “censorship.” I think that’s not really what’s going on, but from a political rhetoric point of view, that is what they’re saying is happening. And so these tech companies have been loath to really engage in heavy moderation. You’ve seen a real pullback in the last 18 months after Elon Musk gutted his trust and safety team at Twitter and everybody decided that was a great idea. These companies have really pulled back on moderation. So if the government is not allowed to save us, and the tech companies are feeling restrained, you have to ask what happens then. Does this really fall on the average consumer to have to pick through these different types of content? So I think the legislatures may be feeling like they’re kind of pinched here because they don’t want to leave it to the tech companies, but I’m not entirely sure either one’s going to be allowed to act if legislators get their way.
FAW: The Supreme Court in U.S. v. Alvarez in 2012 issued a decision protecting false speech, and was wary of adding another category of unprotected speech under the First Amendment. The speech at issue, however, was shared by word-of-mouth. What do you think of the expanded application of Alvarez to deepfakes that can be circulated quickly and seen by millions of people?
Stewart: It’s interesting. One of the things that made this paper interesting to look into is that we’re at a time where the current Supreme Court does not seem to be real shy about ditching even recent precedents to change the law on us. And so I think everything’s on the table. And Alvarez was a pretty precarious decision as it was. It was a plurality, with a four-justice lead opinion. Only a few justices are still on the court from that decision, and I can see why it’s not super popular, even though the justices who are still there generally agreed with the majority. The thing to remember about Alvarez is it was not specifically a political speech case. [Xavier Alvarez] was talking at a public meeting. The Stolen Valor Act was a federal law protecting the integrity of medal winners and honors issued by the military. If it had been a much narrower law, one that said, for example, that you can’t lie about having earned military honors for commercial gain, it would have fallen under a different exception. I think that law probably passes scrutiny because it would otherwise be false advertising, which is not protected. But the court here just said, “Hey, we’re not creating a whole new, different category for lies,” even in public places, and the law was so broad it could reach private places as well. What I found interesting is the cases that came after. How did courts interpret Alvarez in these political false speech cases? [Multiple courts] struck down [false political speech] laws based on Alvarez, laws that had been good law for decades. They said, “Alvarez changed the game. We’re applying strict scrutiny to these laws, and even though there is a compelling government interest in ensuring accurate information in elections, these laws were way too broad as drafted, and would actually exacerbate the problem, or could cause more problems through political gamesmanship.” And so case after case has been pretty consistent on that.
And I imagine if somebody were to challenge any of these laws, that would be the tack they would take. They would say, “Look, Alvarez came down, and the Sixth Circuit, the Eighth Circuit and other courts have been uniform in saying you can’t ban false political speech, and the proper remedy is counterspeech.” And I think, if the justices hold to their past decisions, the court would probably agree with that. But I’m not going to make any predictions, because who knows what this court might decide. I think the first time somebody actually gets prosecuted under any of these state laws, or feels like their ability to campaign or to issue things has been restricted, somebody’s going to go to court over it.