NetChoice’s Chris Marchese on Fighting Against Social Media Restrictions

Headshot of Chris Marchese. Courtesy of NetChoice

The trade association NetChoice — which represents tech companies like Meta, Google, TikTok and Snapchat — has been at the forefront of the emerging battle between attempts to regulate online speech and social media access and First Amendment protections for freedom of speech.

NetChoice challenged social media laws in Texas and Florida, both of which aim to prevent social media companies from moderating content based on users’ viewpoints. The Supreme Court agreed in September to hear the challenges, and the justices will be tasked with determining whether the First Amendment prohibits such legislation.

The court heard nearly four hours of oral arguments on Monday, and the justices seemed skeptical of the Texas and Florida laws. Justice Samuel Alito expressed concern over social platforms’ content moderation, questioning if it’s “anything more than a euphemism for censorship.” But Justice Brett Kavanaugh disagreed with the use of the term censorship as applied to the actions of private companies.

“When I think of Orwellian, I think of the state, not private individuals,” Justice Kavanaugh said.

Chris Marchese, director of NetChoice’s Litigation Center, said in a press release following the arguments that the organization is “confident the Supreme Court will agree” with its First Amendment arguments.

“The First Amendment not only protects free speech, free expression, and free thought from government interference. It does so unapologetically,” he said. “Just as the government couldn’t force Benjamin Franklin to publish its preferred messages in his newspapers, Florida and Texas can’t force websites to curate, display, and spread their preferred content.”

NetChoice has also successfully challenged online safety laws aimed at protecting minors in Arkansas, California and Ohio. Three federal judges preliminarily blocked the laws in the last six months, raising constitutional concerns about government regulation of online speech curated by private companies.

First Amendment Watch spoke with Marchese last month about the group’s efforts to stave off attempts by the government to censor and limit access to certain speech and expression online. Marchese discussed the First Amendment rights of private social platforms to curate content, the comparison between social platforms’ and traditional media companies’ right to publish information, and the constitutional issues with online safety laws aimed at protecting minors.

Editor’s note: This interview has been edited and condensed for length and clarity.

FAW: How would you describe the Texas and Florida cases to someone who has never heard of them? What is NetChoice? Why did the social media giants team up on this effort? What are you fighting against?

CM: NetChoice is a trade association based in Washington, D.C., that fights for free expression and free enterprise online. Our members include technology businesses of all sizes, so that includes names that most people recognize — Amazon, Google, Meta, etc. — as well as companies that people use in more niche areas, like eBay and Pinterest. We represent an industry, not necessarily individual businesses, and that is essential to understand because we approach our litigation the same way: we fight on behalf of free enterprise and free expression. We decided to litigate against Florida and Texas because both of those laws violated our principles, and they violated the fundamental aspects that make the internet the internet.

After Jan. 6, 2021, a number of social media and other media applications decided to remove President [Donald] Trump from their services. At the time, there was a big fear that President Trump might post inflammatory content that would later be blamed on the websites that hosted it. On Jan. 6 itself, there were a lot of calls for President Trump to address the country on Twitter and to tell people to be peaceful and to go home, and it wasn’t clear in those hours whether or not that message was going to be received, and so out of an abundance of caution, a lot of private companies removed his account.

Well, the states of Florida and Texas responded by passing laws that basically told private businesses like Meta, Google and YouTube, but also a bunch of other companies, including Etsy, that they were not allowed to make their own decisions about how they were going to host, post, promote, disseminate and curate content. So this was a huge effort to have the government sort of control the online experience. Both states said that all they were trying to do was promote free speech by prohibiting private companies from removing speech arbitrarily, but in reality, what both states were doing was infringing on the First Amendment rights of private businesses to set their own editorial rules, as well as the rights of users, because the government was now deciding how those users were going to engage with social media and so forth. So we decided to sue because these laws violated the Constitution many times over.

FAW: After signing SB 7072 into law, Gov. Ron DeSantis said in a statement that the goal of the law was to curb censorship, which he believed targeted views that diverge from the “Silicon Valley ideology.” How do you respond to that?

CM: As a general matter, I think the best way for the government to protect the First Amendment is by not violating the First Amendment. And so while I agree with Gov. DeSantis that we should have a marketplace of ideas, I disagree with him that the government can impose its own version of a market and then run that market on private businesses’ own property. I personally am right of center. I probably would have voted for Gov. DeSantis if he were the nominee for president. But ultimately, I part ways with him because I see an inconsistency not only with his stated policies of respecting limited government, private property and all the rest, but also because I think it sets a very dangerous precedent for all lawmakers if they can suddenly start dictating the rules that will govern how ideas are shared, and created, and engaged with.

It is one thing for the state of Florida to create its own social media website. If the state did create its own social media, it would be bound by the First Amendment, meaning the government could have its own version of what it wants in the marketplace. And if it turns out that that is, in fact, the preferred version that people want, I guarantee you that competitive pressure will result in others moving in that direction. But ultimately, the government can’t just expropriate private property and say, “Because you have so many users, we want you to be run the way that we would run you.” The governor is not the board of directors. He is not the CEO.

Something to keep in mind, too, is that this was back in 2021, fresh off the 2020 election, when many Republicans, including Gov. DeSantis, felt that social media companies made the wrong call by initially suppressing some of the links to the New York Post story about Hunter Biden’s laptop. I think what DeSantis was trying to say when he signed the bill into law was, “Enough is enough. I have disagreed with you. This is getting out of control. We’re now going to step in, and even though that means it’s going to be the government telling private businesses how to exercise their own First Amendment right, it is warranted because of X, Y and Z reasons.” Obviously, we disagreed, but that was in 2021.

And something that I noticed in 2023 is that when he was a candidate for president, Gov. DeSantis was very forceful in pushing back against Gov. [Nikki] Haley’s proposal to have identification for social media, the idea that you would have no anonymous speech on social media, and Haley received a lot of criticism for that proposal. Some of the strongest criticism came from Gov. DeSantis, who pointed out that the Founding Fathers themselves were huge proponents of anonymous speech. They wrote the Federalist Papers and the Anti-Federalist Papers under pseudonyms. All of which is to say, I think that maybe, as these issues are now being discussed and debated more broadly and, of course, are being litigated in the courts, lawmakers are understanding that what they think they are signing into law is not necessarily what they’re actually putting into law.

The social media platforms Instagram, Facebook, Snapchat, Twitter and LinkedIn. Photo by Susanna Granieri.

FAW: Do you think social media platforms should be compared to traditional media companies? Would you describe social media platforms themselves as editorial curators?

CM: Absolutely. For legal purposes, there is no distinction between what we call the traditional media and social media. Obviously, in everyday parlance, we think of them as being similar but different, and that’s fine. It makes sense to distinguish newspapers, from news stations, from online news. Ultimately, though, when it comes to legal protection under the Constitution, your constitutional rights do not hinge on whether the American public or American politicians think that you yourself are “the media.” Instead, the First Amendment is much wiser than that and protects activities, not entities.

What I mean by that is, obviously everybody knows that it protects free speech, but it also has the free press clause, and a lot of people think that means it protects the press as an entity, but in reality, it actually protects the activity of publishing. Back when the First Amendment was adopted, the term “press” had to do with publishing information. And so when you consider that it protects speaking and disseminating information, you can then understand that whether you are The New York Times, which, by the way, is a for-profit corporation, or The Wall Street Journal, which is owned by News Corp, a massive for-profit corporation, or you are Meta, or X, or even eBay or Pinterest, ultimately, you have your own First Amendment right. And that includes the First Amendment right to editorial control or editorial discretion.

It goes by various names, but ultimately what it comes down to is that if you are publishing, you have the right to decide what goes into it, what stays out of it, how it’s presented, to whom it is presented, and when it is presented. And so whether it’s a newspaper deciding what content goes on the cover, what stories get buried at the end, or what doesn’t even make it into the paper, that is the same First Amendment right as social media determining which content goes beyond their standards and should be removed, or which content falls short of their ideal but perhaps isn’t that problematic and should instead just be restricted to a certain segment of users.

Ultimately, these are decisions that private businesses have to make because it really does come down to them exercising their own First Amendment right. They also have concurrent business concerns, right? Ultimately, you don’t really want to be associated with a ton of violent content. It was a big deal a few years ago when videos surfaced of truly horrific content, and I do mean astoundingly difficult to watch, of Americans being tortured or beheaded in the Middle East. And that content was quickly removed from the social media platforms because they exercised their First Amendment right to remove that incredibly harmful content. But that content remained up on the parts of the internet that do not moderate content.

Some people would say that it’s more free speech-oriented to allow all content, and unfortunately, believe it or not, super violent content actually is constitutionally protected speech. That’s another thing that I think lawmakers are starting to realize: when they wrote these laws, they wanted to protect First Amendment speech. They wanted to make sure that private businesses couldn’t remove speech that was constitutionally protected. So for Republicans, what they probably had in mind is that Twitter would no longer be able to remove posts that misgendered somebody, or to punish a user who refused to use another user’s preferred pronoun. In reality, the First Amendment protects that, as well as really awful speech. You can lie about having won or earned military honors. You can picket outside a fallen soldier’s funeral. You can burn the American flag. The First Amendment also protects the dissemination of animal crush videos, a weird, horrific genre in which women were paid to kill animals with their heels. And it turns out that you can criminalize the killing of animals, but you cannot punish the dissemination of a video depicting the killing of animals, no matter how horrific it is.

So in other words, the First Amendment protects a lot of truly awful speech, and as a result, if these laws were allowed to take effect, you would have a lot of truly awful speech on the internet. And I think what perhaps lawmakers didn’t appreciate is that the more social media websites are flooded with awful content, and the more they become a cesspool, the less users will want to go on in the first place, meaning you end up getting less speech overall. You would be turning over forums for discussion to the trolls, and all the, for lack of a better term, “normal” users would flee, because who wants to log on and, instead of seeing content that is relevant to you, see only the truly awful content that your fellow humans create?

FAW: People often question why a tweet or a post is removed and say it’s a violation of their First Amendment rights. How would you explain how moderation policies work to those users? Are their concerns reasonable?

CM: Under the law, our rights are protected against the government. In other words, the government cannot infringe on our First Amendment rights. That is different from when private individuals or private businesses decline to carry our speech or, in our opinion, violate our free speech rights. We do not have any constitutional right to post on Facebook, or to tweet on X, just as we do not have a First Amendment right to have our letter to the editor published in The New York Times, or a First Amendment right to appear on local news to sound the alarm over something that we think is very concerning to the community.

Ultimately, we exercise our own rights by, for example, determining which products and services we use. A lot of users were not comfortable with the decisions that private businesses made, and they sought out alternatives. And believe it or not, alternatives exist; I say believe it or not because politicians are constantly saying that there are no alternatives, but that is nonsense. President Trump had Truth Social, which he expected to become a massive rival to Meta, except it turns out that Truth Social, which was founded and created for the explicit purpose of being more oriented toward free speech, needed content moderation. And if you look at the rules on Truth Social, they are much more similar to traditional social media rules than people might expect. But it turns out that a lot of people do not actually want to go on websites that have little content moderation.

Now, where this gets dicey is that, ultimately, what we sort of have to do as users is be comfortable with the idea that, depending on the topic, or depending on the view that we want to express on the topic, we might need to use different platforms, right? We might not be able to use just one platform for everything that we want to create and share with the world. And that’s OK, because ultimately, we users benefit from having a diversity of content moderation rules operating in the marketplace of online speech, right? Whether or not you agree with Elon Musk’s changes to how X moderates content, it is ultimately a good thing in the marketplace of online speech that there is experimentation and diversity in how the various services operate, and how they conduct review of content.

FAW: NetChoice has been involved in successful cases in Arkansas, California, and most recently in Ohio, challenging laws that aim to protect minors online. What is the crux of the trade group’s arguments? How does the action of “age-gating” violate the First Amendment?

CM: It’s worth noting that although these laws have been billed as so-called child safety, child online safety or minor privacy protection laws, in reality, what they actually amount to is regulation of online speech, and not just online speech that minors would potentially access, but speech that adults would also be accessing. What these laws all have as their basic premise is that there is content online that is harmful to minors and, as a result, the government needs to step in and limit how minors access that harmful content.

Different states have taken different approaches, but broadly speaking, one approach is outright age verification, where, when you go to log in to a social media website, you have to authenticate and prove your age. And when you’re doing that, you are also inadvertently authenticating who you are, because in order to prove your age, you have to upload, for example, your driver’s license, or you have to put in the last four digits of your Social Security number and some other variable that proves that you are who you say you are. The reason why this is a very problematic approach, and ultimately an unconstitutional one, is that it applies to every user, right? The social media website has no idea who you are when you’re just trying to log on. As a result, everyone has to prove their age, and that also means that everyone has to use some type of age verification software.

In the state of Arkansas, for example, the law required the use of third-party service providers. So you would be giving your very private, sensitive information to a company that you probably have never heard of, and that company would then tell the social media website whether or not you are of a certain age.

Then you have California’s approach, which is called the Age-Appropriate Design Code. In California, instead of doing outright age verification like Arkansas, they took a more indirect approach where they said that covered websites, which in the case of California included, I would say, like 95% of the internet, would have to sort of reasonably ascertain the age of users. So it doesn’t have to be perfect, but you have to do enough digging on your users to figure out roughly how old they are, and then use that information to tailor what content that user sees when they access your website.

For example, The New York Times actually filed an amicus brief in support of our First Amendment argument because, as The New York Times pointed out, it would be covered by this law. It would have to figure out the age of everyone visiting its website, and it would then have to tailor how the website promotes content, including which stories are shown to minors, because some of the content in the news stories themselves would, under California’s law, be considered harmful to minors. For example, if there are stories about Hamas’ attack on Israel, and those stories included vivid details about the savagery going on over there, that would probably be considered harmful to minors in California, and as a result, The New York Times would not be allowed to show that news story to minors.

And so, again, you can see why this would implicate the First Amendment. Even though it’s being billed as “We’re just trying to protect children’s privacy so that websites can’t collect their data and serve them content by algorithm,” in reality, what they’re actually doing is saying, “We want you to collect even more data on your users, use that data to determine how your algorithms serve content, and then bucket that content as either appropriate for minors or not appropriate for minors.” And websites will respond predictably: they don’t want to get sued, they don’t want to be on the hook for liability, so they’re going to just remove all content that could even remotely be considered harmful to minors, sanitizing the entire internet for all users, adults and minors alike. Everyone would have the same bottom-line, sanitized internet: the same internet experience that five-year-olds currently have. That is horrendously unconstitutional.

FAW: Do you feel as though the attempts by Florida and Texas to push for more free speech contradict later attempts to limit the types of speech accessible to minors?

CM: There’s an inherent sort of contradiction between Florida and Texas’ 2021 laws that we’re currently litigating at the Supreme Court, which would mandate keeping up content even if that content is in fact harmful to minors, and 2023 Texas and Florida, where both states considered bills along these lines and Texas actually passed one. In fact, Florida right now is considering Senate Bill 1, which would also be sort of an anti-access bill.

And it’s just interesting to think that in 2021, we consistently pointed out in our public statements, in our testimony, and then of course in our lawsuit itself, that this was going to mean more content online that is absolutely filthy, violent, harmful and just outright disgraceful. And yet lawmakers said, “No, no, no, free speech means free speech.” And now we are hearing from lawmakers that they believe there is so much harmful content online that teenagers should not even be allowed to access it without going through a ton of hurdles. And keep in mind, we’re not just talking about pornographic websites. Everybody agrees that you should have to be at least 18 to access Pornhub or something like that. We’re talking about going on Reddit, even Wikipedia, because lawmakers draft ridiculously broad definitions.

My point is that whether you should keep up all content, which is what the states said in 2021, or limit access to content, if not remove that content outright, as they’re saying in 2023 and now 2024, ultimately just goes to show that this is unworkable. The government cannot be involved in content moderation. These have to be private decisions made by private businesses, because ultimately, no one should be in the business of deciding what every single private company should be doing in terms of content moderation, because we already see that society itself is highly divided. And you can imagine that what California considers harmful to minors is probably different from what Texas and Florida consider harmful to minors. You can imagine that Florida and Texas would probably say you need to limit, if not remove, access to transgender-related advocacy content, whereas California would say that if you remove that content, you are actually harming minors. So it just becomes a weapon for the government to use in its own culture war.

Something else to add: an under-appreciated point is that if Florida and Texas, or California, got their way, before we know it, every single state will have its own regulatory regime for online speech. Lawmakers love to call these bills online privacy protections or something like that, but in actuality, every single one of them requires far more invasive data collection, and in some cases actually requires that that data be shared with third parties that you don’t know. It really is harmful to have the government making these decisions about the internet because, as everybody I think sort of recognizes, more and more of life is becoming digitized and moving online. And if it is, in fact, going to be the case that most content creation and dissemination happens online, at least over the next decade, then the last thing we want is the government deciding how we engage with our content.
