Attorney Jess Miers on the Supreme Court Decision in Moody v. NetChoice

A police officer walks up the steps of the Supreme Court in Washington March 2, 2015. The Supreme Court will hear “King v. Burwell” on March 4, a challenge on whether residents in at least 34 U.S. states are eligible for federal tax subsidies to help them buy health coverage under President Barack Obama’s Affordable Care Act. (Reuters/Joshua Roberts)


Earlier this month, the Supreme Court declined to rule on the constitutionality of two controversial social media laws from Florida and Texas, but it laid out a framework holding that content moderation by social media companies is akin to the editorial decisions made by newspapers.

The trade association NetChoice — which represents tech companies like Google, Meta and Snapchat — challenged both the Texas and Florida laws that aim to prevent social media companies from moderating content based on users’ viewpoints. The Supreme Court agreed in September to hear the challenges, and justices were tasked with determining whether the First Amendment prohibits such legislation.

The court heard nearly four hours of oral arguments in February, and the justices seemed skeptical of the laws. But instead of deciding on the laws’ constitutionality, the court vacated and remanded the cases back to the lower courts. It found that both the Eleventh Circuit, which heard the Florida case, and the Fifth Circuit, which heard the Texas case, erred in their analyses.

“First, the First Amendment offers protection when an entity engaged in compiling and curating others’ speech into an expressive product of its own is directed to accommodate messages it would prefer to exclude. Second, none of that changes just because a compiler includes most items and excludes just a few. Third, the government cannot get its way just by asserting an interest in better balancing the marketplace of ideas,” wrote Justice Elena Kagan in the 9-0 decision. “In case after case, the Court has barred the government from forcing a private speaker to present views it wished to spurn in order to rejigger the expressive realm.”

The court’s decision to keep the Texas and Florida laws on hold was both lauded and criticized by tech policy and free speech advocates.

Headshot of attorney Jess Miers.

In an interview with First Amendment Watch, Jess Miers, former senior counsel at Chamber of Progress with a focus on the intersection of law and the internet, discussed the court’s decision and its First Amendment implications. Miers described the decision as a positive one, agreed with the court’s comparison of content moderation by social media companies to journalistic editorial curation, and asserted that the court’s framework protects algorithmic curation under the First Amendment.

Editor’s note: This interview has been edited and condensed for length and clarity.

FAW: For readers who may be unfamiliar with the case, tell us about Moody v. NetChoice. Why is it important?

JM: It’s been a long journey. It started out as two cases, NetChoice and CCIA v. Moody, which is the Florida law case. And then you have NetChoice and CCIA v. Paxton, and that’s the Texas version of the case. Both cases involve internet laws passed in Florida and Texas that have to do with content regulation. For Florida, it was all about ensuring that social media platforms could not “deplatform” or “censor” political candidates. It was narrower than the Texas bill. The Texas bill went a little bit broader and addressed viewpoint discrimination overall. At the end of the day, both laws were about interfering with the editorial discretion of private social media platforms and the decisions they make via content moderation. Those cases had a long journey up to the Supreme Court. That brings us to July 1, when we got the Supreme Court opinion, in which the Supreme Court was asked to consider whether these laws are unconstitutional, and inherent in that question is whether social media services have First Amendment rights to engage in content moderation and editorial discretion. And with that in mind, do these laws interfere with those editorial rights? And that brought us to the opinion of the court, which decided to vacate the lower court rulings and remand these cases back to the lower courts in Texas and Florida. But they didn’t just do a remand. They included with it what I’ve been calling a First Amendment manual, to ensure that the lower courts reach the correct constitutional decision here. And you can really hear from the Supreme Court opinion that that decision should be one that is in line with the fact that social media platforms are private entities that operate like online publishers, and as online publishers, they are entitled to editorial discretion to the extent that the First Amendment provides it. So we will see what happens next, but that would be the summary of these two cases.

FAW: What did you think of the decision? Did any aspects jump out at you as flawed?

JM: I thought this was the best decision that we could have gotten in this case, given the makeup of the bench and the issues at hand. I think the court did a tremendous job in articulating that, at the very least, if you take anything away from this opinion, we can’t just apply a broad, blanket common carriage categorization to social media platforms. Even in some of the concurrences, like Justice Alito’s and Justice Thomas’, which read more like dissents, there was at least an underlying agreement that social media platforms and online services do operate in some aspects like online publishers, and when they use their moderation capabilities, that is an editorial decision that’s protected by the First Amendment. I was actually pretty impressed with the court’s overall opinion. I think there are some interesting notes, at least if you’re reading between the lines, for a lot of the state legislatures to take away, specifically with regard to algorithmic curation. I’ve seen some takes out there about how the court did not sanction algorithmic curation as a First Amendment-protected editorial decision. I’ve seen discussions about how the court was extremely vague about it. I had a different take on what was said. I think the court’s opinion, if you read between the lines, actually gets to the point of saying, ‘Look, curating content, whether it’s online or offline, is a protected First Amendment editorial decision.’ And then the court goes on to say, whether that’s in the online or the offline realm, it doesn’t matter. And so while the court has said in a footnote that they’re not really considering algorithmic curation, when you read the framework, it pretty much applies to algorithmic curation. It’s very difficult to come up with a scenario, even with AI, that would suggest that the service is not making an editorial decision in programming its algorithms to decide what content to display and what content not to display. And that’s going to be a big problem for states like New York, which just passed a state bill banning addictive algorithms.

The social media platforms Instagram, Facebook, Snapchat, Twitter and LinkedIn. Photo by Susanna Granieri.

FAW: The Miami Herald Publishing Co. v. Tornillo case protected the editorial decisions journalists at a major newspaper made in curating content. With social media platforms, much of the decision making about posts is done by algorithm. Why didn’t this seem to make a difference to the court? What are your thoughts on the Tornillo case and journalistic curation decisions versus algorithmic decision making by social media platforms?

JM: This is where I think the opinion is actually very interesting and deserves a little closer treatment. The court kind of says two things. You see it in Justice Barrett’s opinion specifically, and I think there’s a mention of it in the overall court opinion as well: this scenario where an algorithm relies on no decision, no guidance from the social media platform, and is just making these rogue, machine-driven decisions about what is displayed to a user. And what you hear from the court is, well, maybe in those scenarios, and Justice Barrett mentions AI, for example, maybe in those scenarios there aren’t actually any editorial decisions being made that would be protected by the First Amendment. When you read Justice Barrett’s note and Justice Jackson’s note without any of the other context of the opinion, I can see how folks are taking this to mean algorithmic curation may or may not be protected by the First Amendment. But when you read the totality of the opinion, it discusses things like curation, the way in which you display third-party content to a user, as protected by the First Amendment. And then you think about the practical, technological ways in which these algorithms are implemented by social media platforms, and what the court is actually doing is creating this sort of edge case that, in my opinion, doesn’t actually exist. Even for algorithms that serve content based, let’s say, entirely on user decisions, there’s no way to really disentangle the user decision from the trust and safety guidance that is still a core component of the algorithm when it’s making a decision as to what to prioritize and what to display. And even if it was only based on user choice, the choice to display content based on user likes or interests, or the choice to display content chronologically, is still an editorial choice that the platform is making. If a platform decides, ‘We’re going to be a platform that doesn’t use any user noise, we’re not going to give it any of our editorial guidelines, and we’re just going to be a chronologically ordered feed of third-party content,’ that in itself is an editorial decision. It’s not a very good one, and it might not be a very good social media platform, but that is a choice that the platform has made when it comes to the ordering and display of third-party content. So when you look at the opinion, which again goes back and says, in plain terms, that the decision as to how third-party content is displayed and ordered to a user is protected curation, you realize then that, okay, well, that’s the entire point of these algorithmic curation services. They are choices. They involve choices made by the platform to ultimately display and organize third-party content in a way that the platform finds to be the most logical for itself and for its audiences.

FAW: In Tim Wu’s op-ed in the New York Times following the decision, he wrote that the NetChoice decisions show that the “First Amendment is spinning out of control” and “is beginning to threaten many of the essential jobs of the state, such as protecting national security and the safety and privacy of citizens.” How do you respond to that?

JM: I think there’s a lot of crossed wires in that article. I think that first of all, the NetChoice cases have nothing to do with national security or privacy. If anything, they involve the personal safety and liberties of marginalized folks who are going to be on the worst part of the receiving end of these types of laws should they become enacted. So I think national security and privacy are red herrings that those who are willing to push a more pro-censorship agenda will utilize, and I think that’s what Tim is doing here. If we were talking about TikTok, I think it would be a slightly different conversation, but that wasn’t the situation here. And to the point about the First Amendment being out of control, the only thing that I have to say to that is that that’s the point. Specifically, the founding fathers did not want the government to be able to control our ability to speak. And I find the NetChoice opinion to be completely in line with the intention of the First Amendment.

FAW: Florida Attorney General Ashley Moody posted on X on July 1: “BREAKING NEWS: SCOTUS Unanimously Sides with Florida in Social Media Case … We are pleased that SCOTUS agreed with Florida and rejected the lower court’s flawed reasoning — invalidating our social media law.” The post received added context from users who thought it was misleading. What do you think of it?

JM: I saw it, I quote-tweeted it, and I replied to it. AG Moody has lied about every single step of this litigation on social media and otherwise, and her tweet did not surprise me in any way. I think it’s an extreme form of gaslighting coming from Florida. I actually did a tweet response that quoted several of the passages that very unambiguously read that Florida and Texas were in the wrong. I think the way that she’s able to get away with framing it the way she did is because, at least with Florida, the Eleventh Circuit, for the most part, reached the right decision when it came to the First Amendment. And so you see the court in this opinion say something to the effect of ‘We agree with the Eleventh Circuit,’ but they were agreeing with the Eleventh Circuit to the extent that it was saying these laws are likely in major violation of the First Amendment. That is not how AG Moody presented that discussion, though. As a lawyer, as somebody who has been following these cases for a long time, I think she very much understands the implications of the opinion. I think she understands the case quite well. And I think instead of playing lawyer, she’s playing politician and gaslighting her constituents in the process.

FAW: The court concluded that the Fifth Circuit erred in its First Amendment analysis of the Texas law. If that’s true, then why didn’t the Supreme Court make a decision instead of vacating and remanding the cases back down to lower courts?

JM: Honestly, I think you can ask the same question of most of the decisions that have come out of this court recently. I think the court has been hesitant, at least speaking just about tech policy. We saw the court punt the Gonzalez v. Google case last year. They did not want to opine on Section 230. I think we’re seeing the same thing here with the NetChoice opinions. This court knows that these cases are controversial. If I were a fly on the wall listening to those conversations, I think remanding the cases for further discussion allows the states to reach the First Amendment conclusion without the Supreme Court having to reach it, which would make this an even more controversial decision. And so I think it’s like what we’ve been seeing with some of the other cases. If the court can punt via procedural issues instead of having to make decisions on the merits, then I think that is always going to be the route that this court specifically takes. But again, I think it’s important to recognize that, unlike in the Gonzalez case, where the court just sent it back down and said, ‘We’re not going to make a decision on this at all,’ the court could have gone that way here. They could have just said, ‘This is vacated and remanded. Try again and make sure that NetChoice is very clear with regards to their standing and the harms and redressability, etc.’ They could have just left it at that, but they didn’t. They vacated and remanded and then included a several-page opinion as to how the courts should probably come out based on the First Amendment considerations of these private companies. And so you kind of have the court saying where this should have gone, and where these courts erred, without having to make that final decision. It’s a really tight line for them to walk to keep themselves out of the controversial aspects of this case.
