Does the First Amendment Protect Political Deepfakes? Scholars Weigh In

AI (Artificial Intelligence) letters and a robot hand miniature are seen in this illustration taken June 23, 2023. (Reuters/Dado Ruvic/File Photo)

In May, two journalism professors — Daxton Stewart of Texas Christian University, who is also an attorney, and Jeremy Littau of Lehigh University — published “The Right to Lie with AI?” The article examines the First Amendment issues raised by state laws that seek to curb false political speech made with various forms of deepfakes.

Since 2019, there have been 22 laws enacted nationwide to regulate political deepfakes, according to Public Citizen, a nonprofit consumer advocacy organization that tracks such legislation.

In their exploration of deepfake legislation, Stewart and Littau found countervailing “strong First Amendment protection for false political speech that may thwart most legislative efforts beyond mandatory disclaimers.”

In an interview with First Amendment Watch, Stewart and Littau discuss the difficulty of enforcing this type of legislation and the need for the involvement of tech companies in regulation efforts. Both agree that mandating the labeling of AI-generated content is the most effective way to police false political speech without running afoul of the First Amendment.

Editor’s note: This interview has been edited and condensed for length and clarity.

FAW: Can you describe the intersection of the First Amendment protections of political speech and anonymous speech with the threats of political deepfakes?

Daxton Stewart

Stewart: Some of the laws that were starting to come out around 2018, 2019, some of the proposals, they all have huge First Amendment problems. I think every First Amendment scholar who looks at these recognizes that. Efforts to legislate political speech are immediately going to set off red flags, because again, we generally resist the government interfering with speech at all, much less political speech, which is a core protection of the First Amendment. Particularly in U.S. v. Alvarez, the Supreme Court refused to create a new category of unprotected speech for lies, even deliberate lies. And so when a new wave of these laws came out, I couldn’t help but think, looking at them, “Are these, if challenged in court, going to pass First Amendment scrutiny?” Nobody’s been prosecuted under the laws that have been passed, and as far as I could tell, nobody has challenged them on First Amendment grounds. Where similar laws have been challenged, it’s been more in the revenge porn area. Revenge porn is a lot more about privacy and a whole different body of law, and courts have generally been pretty accepting of that, and have found the First Amendment is not violated by those kinds of laws about your privacy, your personal integrity and that sort of thing. But political speech is a different thing. We’re talking about false speech in political ads and campaigns. That has really, really strong protection and has for decades. So I saw the intersection of these being problematic. If they ever were to be challenged — that’s what really kind of inspired this paper — what would courts likely say about it? And then we had this wave of laws come up, really last year, and particularly this spring, where 20 or 30 different legislatures were considering them, and about [15] passed them in various iterations. You could see the potential challenges coming, or the potential for people to use these laws in a live election this summer and this fall, and it going to court pretty quickly because of the First Amendment issues embedded in it.

Jeremy Littau

Littau: One thing I’d add is that this is happening against a backdrop of discussion around Section 230. This falls on tech companies, who are under increasing scrutiny from pretty much all sides of the political spectrum about their role in moderating content. I would use the word “moderation”; they use the word “censorship.” I don’t think that’s really what’s going on, but from a political rhetoric point of view, that is what they’re saying is happening. And so these tech companies have been loath to really engage in heavy moderation. You’ve seen a real pullback in the last 18 months, after Elon Musk gutted his trust and safety team at Twitter and everybody decided that was a great idea. These companies have really pulled back on moderation. So if the government is not allowed to save us, and the tech companies are feeling restrained, you have to envision a world where, what happens then? Does this really fall on the average consumer to have to pick through these different types of content? So I think the legislatures may be feeling like they’re kind of pinched here, because they don’t want to leave it to the tech companies, but I’m not entirely sure either one is going to be allowed to act if legislators get their way.

FAW: The Supreme Court in U.S. v. Alvarez in 2012 issued a decision protecting false speech, and was wary of adding another category of unprotected speech under the First Amendment. The speech at issue there, however, was shared by word of mouth. What do you think of the expanded application of Alvarez to deepfakes that can be circulated quickly and seen by millions of people?

Stewart: It’s interesting. One of the things that made this paper interesting to look into is that we’re at a time where the current Supreme Court does not seem to be real shy about ditching even recent precedents to change the law on us. And so I think everything’s on the table. And Alvarez was a pretty precarious decision as it was. It was a plurality, with a four-justice lead opinion. Only a few justices are still on the court from that decision, and I can see why it’s not super popular, even though the justices who are still there generally agreed with the majority. The thing to remember about Alvarez is it was not specifically a political speech case. [Xavier Alvarez] was talking at a public meeting. But really the Stolen Valor Act was a federal law protecting the integrity of medal winners and honors issued by the military. If that had been a much narrower law, one that said, for example, you can’t lie about having earned military honors for commercial gain, it would have fallen under a different exception. I think that law probably passes scrutiny because it would be false advertising otherwise, which is not protected. But the court here just said, “Hey, we’re not creating a whole new, different category for lies,” even in public places, and the law was so broad it could reach private places as well. What I found interesting is the cases that came after. How did courts interpret Alvarez on these political false speech cases? [Multiple courts] struck down [false political speech] laws based on Alvarez, laws that had been good law for decades. They said, “Alvarez changed the game. We’re applying strict scrutiny to these laws, and even though there is a compelling government interest in ensuring accurate information in elections, these laws were way too broad as drafted, and would actually exacerbate the problem, or could cause more problems through political gamesmanship.” And case after case has been pretty consistent on that. I imagine if somebody were to challenge any of these laws, that’s the tack they would take. They would say, “Look, Alvarez came down, and the Sixth Circuit, Eighth Circuit and other courts have been uniform in saying you can’t ban false political speech, and the proper remedy is counterspeech.” And I think most of the justices, if they hold to their past, would probably agree with that. And who knows? I’m not going to make any predictions, because who knows what this court might decide. I think the first time somebody actually gets prosecuted under any of these state laws, or feels like their ability to campaign or to issue things has been restricted, somebody’s going to go to court over it.

OpenAI and ChatGPT logos are seen in this illustration taken February 3, 2023. (Reuters/Dado Ruvic)

FAW: What about defamation lawsuits against companies that create AI-generated chatbots, such as the libel lawsuit filed in June 2023 against OpenAI, in which a Georgia talk-show radio host sued after ChatGPT shared false information about the host with a journalist?

Stewart: It’s interesting you mentioned defamation, because classically, that’s what would happen here. You have the political gamesmanship of a defamation lawsuit filed in the middle of a campaign. It’s definitely happened: “Somebody said something false about me in a campaign ad, and it hurt my reputation, so I am filing a lawsuit.” They file the lawsuit. They have a press conference about the lawsuit, and this is how they get attention and try to go debunk the statement about them. And generally, those lawsuits don’t amount to anything. They usually end up settled or dropped after the campaign. But they get that splash during the campaign, which is to go out and say this person has libeled me, and it’s so bad, I’m going to file a lawsuit.

Littau: And that example of the threatened defamation lawsuit that’s really for the purpose of publicity is probably the area where Daxton and I have disagreed a little bit as we’ve written this piece. I’m really not confident that our press system is well-resourced enough to be able to do it the way we’ve done it in the past. I mean, if the solution the justices come back with is that counterspeech is the remedy here, then we have to think about the information environment in which the average citizen lives. You’ve got the loss of 2,500 newspapers since 2005 in the United States, the death of local news, where I think probably the most damage can be done in an election, in these small races that depend on a local media outlet to debunk claims and to have the resources to investigate the provenance of AI-generated content. So how does counterspeech really make its way into the ecosystem when you’ve got, as we have seen, coordinated efforts by people who are producing AI content and then using bot activity and bad actors to amplify it so that it overwhelms social networks? And so what you encounter in the trending topics becomes the truth for a lot of folks, because they don’t have somebody there to point to the counterspeech and say something. I think this is as much a media problem as it is a speech problem. We’ve got a really weakened immune system in our press, so that even if you had the ability to investigate a particular piece of content and point the way to say, “This is how it’s created, this is how we know it’s fake,” how does that information get out to the public, unless the tech companies are playing ball here? So we come back to the real question: are they loath to be part of the solution, and do they want to be hands-off about this content? It’s not enough to add a community note to a tweet to debunk something. So how does that work exactly? The traditional remedy of “the solution to bad speech is good speech” is great, but does that work in an environment where local media is really suffering, a press environment where you’ve got almost 40% of Americans saying they don’t trust what’s coming out of national media on a daily basis? Does that counterspeech gain traction? How does it even get diffused in a way that’s useful for the public? And so that’s where I’m probably more pessimistic than Daxton on this one. Because as much as I believe in free speech, it’s a real thorny problem: OK, so what does the remedy look like if we’re going to make speech the solution? And that comes back to issues that have nothing to do with free expression. It has everything to do with the health of our media ecosystem. And the fact that social networks are increasingly replacing journalism as an information source for people means we have to reckon with that reality. We need to make sure these networks are compelled to moderate and fact-check, and to provide the kind of counterspeech that surfaces into people’s range of vision.

FAW: Would you consider the “community notes” option on X or flags of inaccurate information on Meta a form of “counterspeech”? What if a post is flagged as inaccurate or AI-generated after it’s gone viral? Is it too late?

Stewart: Those are challenges too. There’s the “who watches the watchmen?” problem: Who’s running the trust and safety teams? And if they’re pulling things down for political reasons or political considerations, that’s not just trying to put a finger on the scale for a political party, for example, but instead saying, “We want those regulators off our back,” or, “We want those senators off our back.” We’re seeing a lot of that right now with the shutdown of the Stanford Internet Observatory. The sorts of places that are doing this kind of work have been the subject of political attacks for doing it. I think the laws that exist don’t particularly have a great remedy for that.

FAW: If the First Amendment precludes serious limitations on deepfakes, or the creation of another category of unprotected speech, what tools are available to combat it that would be consistent with the First Amendment?

Stewart: I think mandatory labeling is probably the one most likely to pass constitutional scrutiny: if you’re using AI or any kind of deepfake, you have to label it as such, and a number of the laws incorporate that. That makes sense to me. I think that would pass constitutional scrutiny. It’s just like the labeling of any other political ad, like the sponsorship, where it comes from, that sort of thing; that’s consistent with Citizens United, that’s consistent with Buckley v. Valeo. So of the potential tools these state laws use, I think that’s the most likely to withstand court scrutiny.

Littau: What we have to get our heads around is a taxonomy of AI and how it works, what the outputs look like, because I think it’s going to be very difficult to slap any sort of regulation on any of this until we [understand AI]. ChatGPT is interesting because it’s generative, right? You can ask it to generate something in its voice, but what do you do about a company like Perplexity AI, which is basically summarizing the news? So what happens if it’s generative AI content spitting out nonsense but reporting it to be true? Then are you a platform? Are you a publisher? What are you in that sense? So separating out who it is who’s creating these things, and what they’re generating, and then what the context of that product is, I think, are all kinds of thorny issues you’re going to need to pull apart a little bit before you figure out who and what you’re going to regulate. It’s probably easier to do it at the creator platform level, treating the AI as having a consistent set of rules. Because algorithms are technically AI, right? I mean, are we going to get knee-deep into defining AI, such that we’re going to start moderating these algorithms? Or are we only going to look at generative output as something we’re going to take a swing at? I think that separating out those different facets of AI, and how they’re being used and what factors are involved, is going to be really important. Tort law probably offers a type of remedy here. But a tort lawsuit is not going to fix an election misinformation campaign, right? A successful tort lawsuit could stop an AI company from allowing that going forward, but the damage is already done at that point.

I think any sort of solution has to be forward-thinking. We have to see around corners, anticipate things that are potential problems and try to solve them before they surface, because otherwise, I think the damage to our institutions could be too great. One of the things I really worry about from a policy perspective is that “flooding the zone with sh*t” — to take Steve Bannon’s expression and twist it into an AI version — creates a type of cynicism within the information environment for the public. And so if we don’t try to anticipate some of these problems and come up with sensible regulations and labeling and some of the things that Daxton mentioned beforehand, we risk ruining the information environment such that nobody’s going to trust anything they see. And that is prime, fertile ground for misinformation actors. They will thrive in an environment where the public doesn’t trust anything they see. What do we do about the social problems around an information environment that looks like that? Congress and the tech companies have to be partners on this one. One entity or the other is not going to be able to fully solve this problem. And so if you’re thinking about this as a civic information problem, then you have to think about what some sensible regulations are, and how we can bring the tech companies into the fold to help us do this and make it a moral imperative on their end, something they have to do.

FAW: Could the pre-review of AI-generated content before it can be posted on social media platforms be considered a prior restraint?

Stewart: The parts of these laws that offer prior restraint as a remedy — where somebody has issued an AI-generated false statement or imitation of a political actor of some kind — all had injunctions as remedies; you could seek an injunction to have it pulled down. Getting a court order to unpublish or undo false political information is unlikely to square with the Constitution. Just very, very unlikely. So I’m not sure the courts are the answer there. I’m guessing it would be tech companies being able to review and pull things down. Again, if they have the inclination and the robust trust and safety teams to do it. There’s just so much content that doing that in a meaningful way would be hard.

Littau: I’m going to say the controversial thing: I actually think Section 230 is kind of part of the problem. These companies right now can do all those things Daxton said, right? They can moderate. They can take things down. They’re not doing it. It’s not illegal for them to do so, as much as Matt Gaetz wants to say otherwise, but they are not incentivized to do it.

Stewart: And there’s no harm to them if they don’t.

Littau: Right, they’re not responsible for it. So I’m probably in the minority among my own peers in saying that we need some pretty major reform of [Section] 230. The incentives are not there for companies, and I don’t think under current law the government could compel these companies to moderate. The only thing keeping them from having to moderate is a carve-out for them that treats them differently than companies that have to have editorial standards and are responsible for the things they publish. And so I think that if [Section] 230 were to be completely repealed, these companies all of a sudden would really, really start heavily policing political speech. They would probably make major investments in trust and safety. I don’t know if I’d want that world, necessarily. Completely gutting [Section] 230 is not the solution. But some sort of limited liability, in some cases, for certain types of things, or providing a very robust system for users to report and flag content that violates the company’s standards, such that they are obligated to take it down if it does violate their own standards, I think, is a very reasonable thing.

One of the things that we know, and this was happening before Elon Musk even bought Twitter, is that they weren’t enforcing those standards. They said this about Trump himself: they didn’t ban him from the platform until after Jan. 6, even though they knew internally he’d been violating the rules basically on the daily. So do we make the liability about not enforcing your own standards? You’ve got these rules and you don’t enforce them, so should you be liable then if it’s been flagged for your awareness? I think that’s probably where I would land on it. How far we would go in reforming [Section] 230 is probably an open question, but the people who say we shouldn’t touch it at all, I just have a hard time with them. It’s a scale problem, with billions of posts a day and the massive amount of information and misinformation … We’ve got to figure out some way to get a handle on it. This isn’t like the front page of The New York Times, where you can only put five or six stories. These companies have basically unleashed a firehose of content on us, and it really is in our public interest to figure out some way to force these companies to be more responsible with what they publish.

In this July 24, 2023, file photo, the logo for social media platform X, following the rebranding of Twitter, is seen covering the old logo in this illustration. (Reuters/Dado Ruvic)

FAW: How are these laws expected to be enforced? What about anonymous users posting deepfake content, since they can’t be identified and therefore would evade any monetary or civil penalties?

Stewart: It’s interesting you mentioned that, because the practicality side of enforcement is challenging here. I think where these laws come from is that legislators see these things happening, they say “We have to do something,” and they pass a bill. At the least, the laws may have deterrent value, so that people are on notice that if you do these things, there might be consequences to you later, up to and including civil penalties. They’re not huge civil penalties, like $500,000, and again, if the only penalty is a fine, then it’s only a crime for poor people. Political campaigns can [pay] those sorts of $500,000 fines pretty easily.

But even beyond that, I think the practicality issue is that you’re still dealing with due process. Somebody files a lawsuit, somebody files a claim, and it’s not like it’s resolved that day. A lot of the online chatter and questions I got about this were, well, this seems fine, right? This doesn’t violate the First Amendment because it’s false speech. I said, “Yeah, but how long is it going to take to litigate that?” These laws may have deterrent value, to show that you may suffer some penalty down the road, but practically they would be very hard to enforce.

And then you get to the anonymity part, which has long plagued these sorts of situations involving internet speech: What do you do about the speaker you can’t find? You can’t find your defendant, or your defendant’s insolvent and doesn’t have any money. Or your defendant’s overseas, outside your reach, outside your jurisdiction. So you have a law saying you can do something to those people, but you can’t find them. You can’t practically enforce it. So again, it’s a law that has some deterrent value that might restrain some potential bad actors within otherwise well-meaning campaigns. But I don’t think it’s going to deter the worst of the worst actors, because what happens when they don’t label something? You mentioned a very real problem here, which is that even if these laws work as intended, your anonymous speakers are also going to have some First Amendment protections, and even if you get through those, how do you find them? These laws make it look like legislatures are doing something, but practically, they have all kinds of problems.

Littau: We’re talking a lot about AI, but most people in the United States haven’t used it, and they don’t interact with it. There’s a real information gap in the American public, which is very understandable because it’s a new technology. And so offloading onto them the responsibility to sift through junk versus truth and figure that out for themselves is cowardly, to be honest.

FAW: The term “disinformation” is often loosely defined. In your piece, you mention political disinformation, but how would you define it? Who decides what is considered political disinformation?

Littau: The split is always between mis- and dis-, right? And so I would define disinformation in the realm of intent. Misinformation can very much be something that’s produced unintentionally. But disinformation is focused on trying to accomplish a particular task, and the person knows they’re lying. Some people would basically say that disinformation is propaganda, and I don’t necessarily disagree with those folks. I do think that sometimes the propagandists believe their own lies. There really is a separation between that and propaganda, I think, in some sense, because sometimes you’re so caught up in the cause that you’re producing content for a specific political purpose, trying to put the thumb on the scale, but you still believe it. Disinformation, I think, falls in the realm of people who are making things up. Back in 2016 we were talking about it in terms of non-AI stuff, with people wholesale making things up and producing websites and Facebook posts and pages that were aimed at misinforming, but it was somebody who was physically creating it. Disinformation would cover that, but I think in this case the question is who’s the disinformer when you’re using an image generator or an AI voice generator to create things. To me, it really does come down to what your purpose is, and whether you know it’s true or not. And if you’ve got somebody who’s creating for the purpose of persuading, I think that’s classic disinformation.

Stewart: I agree. And I think that illustrates one of the challenges of tort law, for example, in managing this — and I saw this embedded in a lot of these AI and deepfake and synthetic media laws about political speech — they would try to make people liable for false speech circulated about a political campaign, or about a candidate for office, or about a position a candidate took, or something like that. With disinformation, if you get to the courts, you’re talking about defamation or false light, but the speech needs to not only be false, it also needs to be defamatory. It needs to hurt your reputation somehow for you to have a claim. Well, not all disinformation is like that. What if your deliberate disinformation is generating false images of your candidate doing something great, of reaching out to minority candidates they usually wouldn’t be seen with, and you can’t find real pictures of it, so you say, “Here they are, within the rainbow spectrum of this coalition”? Well, that’s not defamatory to anybody. If anything, it’s false, it might be misinformation, right? But who’s going to have a claim? Who’s going to have standing to sue over that and say, “You generated a too-pretty picture of my opponent”? That’s a bad act, but there’s no tort to remedy that sort of thing, and nobody would have a claim under it. And so I think that’s some of what these laws might be trying to get at: false speech, whether it tears down one person falsely or bumps up somebody else falsely, and managing that is really, really hard within the bounds of the First Amendment.

Again, mandatory disclosure might be the most legally valid way, at least. But a lot of it really gets back to what Jeremy’s been saying, which is: how does the information ecosystem deal with that, not necessarily how do the law and the courts deal with it. Because I think the law and the courts, as much as we rely on them as a backbone of our institutions and our democratic system, are not particularly well suited to deal with these kinds of problems, where the First Amendment has said, “Go at it and let false and true speech compete in the marketplace of ideas,” and the government is staying out of it, right? And that’s the problem we find ourselves in: there are a lot of bad actors willing to take advantage of that.

Littau: Yeah, the marketplace of ideas was built for a time of scarcity: speech scarcity, a limited physical context where you could gather and argue, or a limited number of voices. And the social networks now give us the ability to have these kinds of conversations at a massive scale. So when created content enters the chat here, and it becomes something you can [use to] generate bad images or misleading images just by a text query, I don’t think the marketplace of ideas is really built to [withstand it].

Stewart: Then you’re going to have a state or elected elections administrator who gets to be the umpire determining what’s false and what’s true, which is problematic as well. Again, the flip side to what Jeremy said is that the marketplace of ideas may very well be problematic, but what’s also exceptionally problematic, with a rich history of abuse, is putting the government in charge of determining what’s true and what’s false.
