The Knight Institute’s Alex Abdo on ‘Jawboning’ Social Media Case Before the Supreme Court

Headshot of Alex Abdo. Courtesy of the Knight First Amendment Institute at Columbia University

Mis- and disinformation on major social media platforms spread rapidly during the 2016 presidential election and the COVID-19 pandemic, prompting the U.S. government to communicate its concerns about such speech to the platforms.

Years later, the Supreme Court is tasked with deciding whether the Biden administration’s efforts to curtail mis- and disinformation online violate the First Amendment.

The case, Murthy v. Missouri, was brought by the Republican attorneys general of Missouri and Louisiana, along with five social media users, including physicians, who claim their posts criticizing COVID-19 policies were unconstitutionally censored and that the government overstepped in discussing COVID-19 and election misinformation with social media platform executives.

A federal judge in Louisiana granted a preliminary injunction in July blocking the Biden administration and key government agencies from communicating with the platforms, describing the government’s efforts as a “far-reaching and widespread censorship campaign.” That order was then significantly narrowed by the U.S. Court of Appeals for the Fifth Circuit, but the appellate court said officials cannot attempt to “coerce or significantly encourage” changes in online content. In its decision to take on the case in October, the Supreme Court blocked the lower courts’ restrictions on the administration’s communications with social media companies for the duration of the litigation. Oral arguments are scheduled for March 18.

First Amendment Watch spoke with Alex Abdo, litigation director of the Knight First Amendment Institute at Columbia University, which filed an amicus brief on behalf of neither party in the case. Abdo discussed the importance of the social media cases before the court this term, the need for clarity on the constitutional line between persuasion and coercion, and his belief that the government should be able to express its own views to social media platforms that have the power to shape public opinion.

Editor’s note: This interview has been edited and condensed for length and clarity.

FAW: Can you explain what “jawboning” is and outline the First Amendment implications of it? Why is it pertinent to this case?

AA: Jawboning is the practice of governments trying to pressure what we call speech intermediaries, which are companies or organizations that host the speech of others, to silence viewpoints that the government does not like. And the First Amendment has long been interpreted to forbid the government from trying to coerce speech intermediaries into doing just that, silencing views the government disfavors. But the contours of that restriction on the government have never really been fully clarified by the Supreme Court. And that’s really what’s at issue in two cases that are now in front of the Supreme Court, Murthy v. Missouri and the National Rifle Association v. Vullo. Both cases raise the question of when government pressure on social media platforms, or in the NRA case, on insurance providers, to target disfavored viewpoints violates the First Amendment. And for a very long time, most people have understood the line to be between coercion, which the First Amendment forbids, and persuasion, which the First Amendment allows. But even if you accept that that’s the right line, the Supreme Court has never really told us where exactly the boundary is between unconstitutional coercion and permissible persuasion. It turns out that that really matters because there’s a spectrum of engagement between the government and private speech intermediaries, and we need some clarity on where the line is so we know what the government is allowed to do and what the public is entitled to expect the courts to prevent the government from doing.

FAW: Your organization filed an amicus brief in support of neither party in this case, and it references the Supreme Court’s 1963 decision in Bantam Books v. Sullivan and the subsequent legal test it produced. Can you tell me about this legal test and how that line was drawn between unconstitutional coercion and constitutional persuasion?

AA: The Bantam Books case involved a commission that the state of Rhode Island had formed on morality and youth, and the commission would send notices to book distributors about books that the commission deemed offensive or objectionable with the goal of pressuring those book distributors into pulling those books from their shelves. The way they accomplished that was by essentially tacitly threatening prosecution for violating the state’s obscenity laws. The Supreme Court said that amounts to an informal system of censorship, because the state is indirectly coercing censorship, and the First Amendment forbids the state from doing indirectly what it cannot do directly, which is a sensible constitutional line to draw. Not just sensible, but important: it prevents the government from making an end run around the bedrock First Amendment limitation on official censorship. That was the last significant thing the Supreme Court said about this distinction between coercion and persuasion, and it’s been over 60 years since. It turns out that it’s hard to know where persuasion turns into coercion because the two concepts exist along a spectrum. And there’s just a lack of clarity in the court system about when government efforts to pressure meet that threshold.


U.S. President Joe Biden delivers remarks on healthcare coverage and the economy, at the White House in Washington, July 7, 2023. (Reuters/Jonathan Ernst)

FAW: In this case, the focus is on the Biden administration’s alleged coercion of social media companies to moderate certain content. In your opinion, where would the line be drawn here, or if there is a line to be drawn between this type of alleged coercion or just recommendations made by the government?

AA: Well, I think it’s difficult to say where exactly the line should be drawn. I think that the Supreme Court was fundamentally correct when it said that the relevant line should be between persuasion and coercion. And one of the reasons that I think that line was correct is that I think it’s important that the government has a role in being able to attempt to shape public opinion by expressing the government’s own views about matters of public significance. So I think it was legitimate for President Biden to draw public attention to the role of platforms in moderating the spread of vaccine disinformation on their sites by publicly criticizing the platforms. I think it’s legitimate for the government to call public attention to even controversial questions like that, whether you agree or disagree with the government. What the government should not be able to do is make it seem as though the platforms will suffer concrete legal consequences for failing to do the government’s bidding. One of the reasons why it’s really challenging, though, to disentangle these two things is that the government interacts with platforms in many different ways, in many different contexts, and the platforms have a variety of interests when it comes to their interactions with the government. There are a lot of reasons why the platforms might want to please the government, because they’re under a lot of pressure right now. They’ve been called before Congress multiple times. They’re trying to stave off legislation that might threaten their bottom line. So even what we might ordinarily think of as persuasion might cause the platforms to behave as though they have been coerced in an effort to appease would-be regulators. And so the line gets muddied really, really quickly and it’s hard to know how to clarify the line in a way that’s especially predictable to apply.

And so what we argued in the brief that we submitted to the Supreme Court is that the court should apply this distinction with reference to three important constitutional values, the ones that we think are most at stake in jawboning cases. The first, which is the most obvious, is that the users of these platforms, and the platforms themselves, have a right to participate in the online communities of their choosing, without the government effectively setting the terms of the conversation that can take place on those platforms. This is essentially the coercion principle. Users of the platforms and the platforms have a right to editorial independence in these online spaces. The second principle is that the public has a right to know what its government thinks about the major problems of the day, and it has a right to elect a government that is empowered to express views on the major problems of the day, and that means creating some space for the government to comment on and attempt to persuade private actors to adopt its views. And then the final principle is really a principle of accountability. What makes jawboning so troubling when it happens is that it often takes place behind closed doors, in a manner that is essentially immune from political accountability. When state actors pressure private speech intermediaries in ways that the public cannot see and respond to, in ways that the courts never learn about, and so therefore cannot adjudicate, that makes the risk of unconstitutional coercion much more acute. And so the line between coercion and persuasion should be drawn with sensitivity to the need, or to the risk, of the government acting surreptitiously or informally. So those are the principles we argued in our brief that the Supreme Court should be especially attentive to.
I don’t think that embracing these principles would suddenly and completely clarify the line between coercion and persuasion, but I think it would make it easier for the courts that have to grapple with these questions to apply the line in a more consistent and sensible way.

FAW: Reporters Committee for Freedom of the Press (RCFP), NetChoice and the Electronic Frontier Foundation (EFF) also filed amicus briefs in support of neither party. RCFP expressed concern about the creation of a “too-sensitive test for coercion”; EFF asked for clarification on the Bantam Books test’s applicability; and NetChoice argued for a “clear rule” to prevent government coercion of social media platforms. Are your organization’s arguments in line with theirs?

AA: I think they are mostly consistent. One of the very practical considerations that animated our brief, and that I think also animated those briefs, is that we can all think of circumstances where we think it would be perfectly legitimate for the government to try to convince a platform or a newspaper or even a bookseller not to publish certain information. Let me give you a very concrete example. It’s one that we gave in our brief. In 2004, The New York Times learned about the Bush administration’s warrantless wiretapping program, and it was planning on publishing a story revealing the warrantless wiretapping program for the first time. And members of the Bush administration, from the White House, met, I think repeatedly, with the editors of The New York Times, to try to convince them that publishing the story would endanger national security, would make it harder for the Bush administration to surveil terrorists. And The New York Times responded by holding on to this story for I think a full year before they published it. Now, I think The New York Times was wrong to hold on to the story for as long as it held on to it. But it seems to me that it was perfectly legitimate for The New York Times to be able to hear from the government about the risks of publishing what the government argued at the time was highly classified information about ongoing intelligence gathering activities. I think the decision about whether to publish that information has to ultimately be up to The New York Times. But so long as the government didn’t threaten the Times with prosecution, imply that it might prosecute The New York Times or threaten some other kind of regulatory retaliation, it seems appropriate and desirable even for us to have a First Amendment that allows The New York Times and the government to talk in that case about potential risks that The New York Times might not be able to appreciate, given that it’s a news organization, not a member of the U.S. 
government’s intelligence apparatus. And there are plenty of other examples like that that are easy to imagine, where it seems perfectly appropriate for the government and private actors to talk about constitutionally protected speech. That’s in part what makes it so challenging. And I’ll say too that the example I give is, while I think it was appropriate, I also think it was in a different respect a little bit troubling, because it was a behind-the-scenes effort by the government to suppress speech. And those kinds of efforts, ones that are informal or surreptitious, create the greatest risk of unconstitutional coercion because they’re not accountable to the public, they’re not accountable to the courts. But even under those circumstances, I think it’s sometimes appropriate for the government to be able to have that flexibility.


The new logo of Twitter is seen in this illustration taken, July 24, 2023. (Reuters/Dado Ruvic)

FAW: When Elon Musk took over Twitter, or now X, we saw the disclosure of information described as the “Twitter Files.” The release consisted of internal emails and communications between top Twitter executives and government officials. Would you say the Twitter Files are relevant here?

AA: I don’t think there was a vast conspiracy to coerce the platforms into suppressing disinformation online. So I think most of the sensationalism around the Twitter Files is unwarranted. I do think there was at least one instance that was identified by the plaintiffs in the Missouri v. Biden case, now called Murthy v. Missouri at the Supreme Court, that troubles me, that I think probably crossed the constitutional line. It was an instance in which I think two White House officials berated one of the platforms for failing to take down, I think, vaccine disinformation or posts reflecting vaccine hesitancy, and then said in one of the later emails in the thread [something like], “We’re considering other options because you’ve delayed.” That, to my ear, sounds a bit like a threat. And the tone of the email is, at the very least, aggressive, if not threatening, and I think there’s a pretty good argument that that exchange reflected unconstitutional jawboning.

The vast majority of the other engagements that I’ve looked at, and I have not looked at all of them, look like permissible efforts by the government to communicate with the platforms or the public about their views on the risks of allowing disinformation about vaccines and the elections to flow freely on the platform services. Let me give you two examples. One has to do with the CDC [Centers for Disease Control and Prevention]. A lot of the platforms of their own volition during the pandemic adopted policies around medical misinformation, but it turns out that the platforms generally are not public health experts. Maybe some of them employ some, but they don’t have the public health expertise that the CDC has. So the platforms would regularly ask the CDC to weigh in on whether a particular post reflected accurate medical information or not. And I think it was perfectly legitimate for the platforms to ask the CDC to weigh in. And I think it was perfectly legitimate for the CDC to respond by saying “We think this is accurate” or “We think this is inaccurate.” If the CDC had gone further and said, “And if you don’t take it down, we’ll regulate,” or “And if you don’t take it down, you might not like what comes next,” that would very likely cross the constitutional line, but that’s not what the CDC did. The CDC responded with its views based on the public health expertise that is concentrated at that agency. And again, that seems to me like legitimate government participation in trying to engage with the public.

The second example has to do with President Biden publicly saying that the platforms are “killing people.” Whether you agree with that sentiment or not, it seems to me that the president of the country ought to be able to weigh in on what everybody is talking about. And let me give an example of that from the other side of the political spectrum. You might remember that during the Trump administration President Trump criticized the NFL for allowing Colin Kaepernick to take a knee in support of the movement for Black lives during the singing of the national anthem at NFL games, and I think it was a protected form of dissent for Colin Kaepernick to do so. And I think the NFL was entitled to take whatever position it wanted, but I think it was appropriate for the NFL to allow its players to do so. But I also think it was constitutionally OK for President Trump to criticize the NFL for doing so. I think it would have been unconstitutional for President Trump to have tried to coerce the NFL into firing Kaepernick or into changing the rules. But I don’t think it was unconstitutional for him to express his view, or the government’s view, on whether NFL players should be allowed to dissent during the singing of the national anthem.

FAW: Why would the Attorneys General of Missouri and Louisiana file this case claiming government officials unconstitutionally suppressed conservative viewpoints online, but then advocate in favor of Texas and Florida’s social media laws, which would allow state regulation of social media companies, in the NetChoice cases before the court? Is this all based on viewpoint discrimination? Are the stances contradictory to one another?

AA: I do think there’s a deep tension between the positions the states have staked out in these two cases. I think it is unconstitutional for the government to try to coerce the platforms into changing their content moderation policies, and I think it is unconstitutional for states not to coerce but to directly set the content moderation policies of the platforms through legislation like the laws passed in Florida and Texas. I don’t know how to reconcile those two positions. There’s probably an uncharitable political explanation for what’s going on that I’m not going to speculate on, but I do think there’s tension between the two. The First Amendment has to work for everyone. And that means the role of government that we set, and the boundaries that we draw, using the First Amendment have to be enforced consistently for everyone to have faith that these constitutional principles apply even-handedly. It would deeply undermine the purposes of the First Amendment if the Supreme Court were to answer those two questions in a politicized way, and I think that it’s unlikely that that’s how the court is going to resolve those two cases. I think it’s likely to say that Florida and Texas cannot force the platforms to carry speech that they don’t want to carry. I think it’s also likely to say that the First Amendment forbids government coercion. I’m not especially worried about that possibility, but we’ll see.

FAW: Do you think a decision in Murthy v. Missouri could affect the outcomes of the other social media cases before the court this term, or vice versa?

AA: I don’t know. Let me try to answer in this way. The question that I think has not been focused on enough, and may ultimately be the most significant, is one of the secondary questions in the NetChoice cases. So the NetChoice cases and the Murthy case deal with the authority of the government to set content moderation policy, either directly through state legislation or indirectly through government coercion. But the NetChoice cases also concern transparency provisions in Florida’s and Texas’ social media laws. And there is a big debate in the First Amendment community over the scope of the government’s authority to impose transparency requirements on social media platforms. And what I am most worried about this term is a ruling by the Supreme Court that would effectively foreclose even narrowly tailored and reasonably drafted transparency laws that would ultimately, I think, serve democracy by making it easier for the public to understand how these online forums for political discourse work. And we’re starting to see a lot of these laws crop up in the states, Congress has been debating transparency laws, and all of that debate could be ended if the Supreme Court ruled broadly in rejecting the transparency provisions at issue in the NetChoice cases. That’s my biggest concern.
