EFF’s Mario Trujillo on Age-Verification Laws and the First Amendment

Person typing on a computer. Photo by Glenn Carstens-Peters via Unsplash

Several states have passed or are considering legislation aimed at verifying users’ ages before they can access social media or other online content.

These laws, passed largely in Republican-led states and enforced with steep monetary fines and the threat of litigation, have been introduced in the name of shielding minors from harmful online content. But they raise several First Amendment concerns: what counts as “harmful” content, whether age-verification could sweep up adult users and restrict their speech, and whether the verification process infringes on users’ First Amendment right to remain anonymous online.

Since 2022, 17 states have enacted age-verification laws — South Carolina, Oklahoma, Kansas, Alabama, Georgia, Nebraska, Kentucky, Florida, Idaho, Texas, Arkansas, Indiana, Louisiana, Mississippi, Montana, Utah and Virginia — and proposals have been introduced by lawmakers in more than 20 other states, according to the bill-tracking service Plural.

At least five of these laws — in Arkansas, Utah, Louisiana, Montana and Texas — have been challenged on First Amendment and privacy grounds.

Most recently, on April 30, the Supreme Court, in a one-sentence order, declined to block a Texas law that restricts minors from accessing pornographic websites.

Headshot of Mario Trujillo. Photo courtesy of the Electronic Frontier Foundation

First Amendment Watch spoke with Mario Trujillo, a staff attorney with expertise in privacy law at the Electronic Frontier Foundation, a nonprofit that defends civil liberties online. Trujillo discussed age-verification laws, the First Amendment issues they raise, and the potential dangers of passing such legislation.

Editor’s note: This interview has been edited and condensed for length and clarity.

FAW: What are age-verification laws? 

MT: Basic age-verification, at least online, is a method where a website screens users and lets them in or out depending on how old they are. The method can be pretty invasive, going up to age-estimation through biometric collection or requiring a form of government ID. It can also be barely invasive at all, like a checkbox that asks whether you’re over or under 18. The more invasive the method, the more privacy and speech problems you have. And if it’s not invasive, there are a lot of accuracy problems, because anyone can click that over-under box. So in the last couple of years, a lot of states have been testing age-verification laws to do a particular thing, and it’s usually to restrict minors from vaguely defined harmful content. I think we have problems with the age-verification scheme in general because it burdens everyone’s speech and privacy. But then also, the purpose of that age-verification is usually to restrict harmful content, and harmful content is usually not defined; it’s pretty vague, so it’s really in the eye of the beholder.

FAW: More than a dozen of these age-verification laws have been signed and dozens more introduced nationwide over the past two years. Why are we seeing them now?

MT: I think there’s a real recognition that there needs to be some kind of regulation online. There was a big push a couple of years ago, at least at the federal level, to pass comprehensive data privacy legislation. That stalled out. It’s a hard thing to navigate at the federal level, with preemption and a private right of action. So in the last few years, the federal government and state governments have been more focused on children. I think that’s lower-hanging fruit; it’s just politically easier to pass. But these child protection laws pose their own problems, because they’re not well-defined. And by only giving protections to children — or sometimes no protections at all, just blocking content from children — these laws burden everyone. So I think it’s a correct recognition that regulation needs to happen online. The first and best solution is comprehensive data privacy legislation, but everyone recognizes that’s hard, and so lawmakers have been grasping for other things. I think that’s one of the main reasons.

FAW: Where do these laws potentially run afoul of the First Amendment?

MT: It’s with both the method of age-verification and the purpose of age-verification. With the method, there are speech concerns because it prevents people from browsing the web anonymously. If you have an age-verification scheme that requires a government ID, you’re no longer anonymous when you’re browsing the internet, because your browsing is now tied to the government ID that you were required to submit to access the platform. I think that’s one of the main issues; there’s a lot of case law about the right to anonymous speech. The second issue is that it chills everyone’s speech by dissuading people from joining these platforms. If I have to submit my government ID, or submit to a biometric face scan, to access a social media site, then even if I’m an adult, and even if I can get in after I do that, I might not want to, because I don’t want to give up my data. I don’t want to give up my privacy to access a social media account. So there’s a chilling effect just from implementing the age-verification scheme, even if an adult can pass through it. And finally, as I mentioned earlier, the purpose of these verification schemes is usually to restrict minors from harmful content. Minors, as well as adults, have First Amendment protections. What’s obscene to a minor is a lower standard, but there are still First Amendment protections for minors. And to the extent you’re talking about harmful content, that word “harmful” is not defined with any real specificity, which creates vagueness problems. What’s harmful to a 13-year-old might not be harmful to a 17-year-old, but “minors” in these laws are usually treated as a monolith: just anyone under the age of 18 or 17. When you apply that standard to the different age groups within that category, you start running into vagueness problems.

FAW: Do you think that narrower legislation would pass constitutional muster? How would it be implemented to ensure the laws don’t sweep up anonymous speech or the speech of internet users who aren’t minors?

MT: Coming from my perspective, a privacy perspective and a free speech perspective, and putting aside the First Amendment issues for a second, I think this is just bad policy. If you want to create regulations online, and yes, I believe we should, then comprehensive data privacy legislation would be the right way to go. With a comprehensive data privacy scheme, you’re protecting the data of minors and the data of adults. There’s no requirement for an age-verification scheme, because children and adults are protected together. So from a policy level, that’s the route I would go. But from a First Amendment perspective, to go back to your specific question: with age-verification, if you’re trying to protect minors from certain content, or to restrict minors from certain content, you apply strict scrutiny under the First Amendment. The state needs a compelling interest, and the restriction itself has to be narrowly tailored, and the method has to be the least restrictive. So if a challenger can point to a less restrictive way to implement those protections, the law probably wouldn’t pass muster. I think there’s probably some debate about whether allowing people to voluntarily declare that they’re minors would pass muster. If, instead of collecting a government ID or doing a biometric scan, there were just a voluntary opt-in where a child can say, “Yes, I am a child,” that would probably pass constitutional muster. But because it wouldn’t be that restrictive and adults wouldn’t have to do it, the problem is that it would be highly inaccurate and probably ineffective, because most children wouldn’t do that.

FAW: What do you mean by “biometric scan”?

MT: So imagine a world where these laws go into effect and social media platforms have to implement them. A kid with a phone tries to download an app and access the social media platform. Before they can, there’s probably a little pop-up that says, “Hey, we need to verify that you’re over 17” — or whatever the age limit is — “Can we take a picture of your face?” They would hold the camera to their face, and a picture would be taken. From that picture, measurements would be taken: between the nose and the eyes, the eyes and the ears, and so on. The social media company, likely using a third-party tool that claims to accurately distinguish children from adults, would use that pattern. Based on those face measurements, the result would be either “this person is over 17” or “this person is under 18.” There are a handful of service providers on the market right now that claim they can do this accurately. But in the same way that the checkbox is going to be inaccurate, I think age-estimation through biometrics is going to be inaccurate in general. A person who’s 15 might be identified as 19. It’s a new technology, and I don’t think it’s all that proven. I think there are also discrimination issues. Those systems have been shown to be less accurate with women and less accurate with people of color, and much of that has to do with the training data. If you train a biometric algorithm on all white male faces, it’s going to be less accurate with women and people of color. So if this did get implemented, there would be a lot of inaccuracy. You’d either be letting in a lot of people you shouldn’t, or, in a worst-case scenario, you would be blocking a lot of people you shouldn’t. There would be 18- and 19-year-olds with young-looking faces who wouldn’t be able to access social media.

FAW: In your blog post, you note that the “answer is to re-focus attention on comprehensive data privacy legislation.” What would that type of legislation look like? How would it differ from the age-verification approach?

MT: With the age-verification approach, like I said, most of these laws are not trying to verify a person’s age to make sure they’re a kid and then, once they’re identified as a kid, give them stronger privacy protections. There are some like that, but the majority are trying to determine who’s a minor in order to block content from minors. With comprehensive data privacy legislation, you’re not actually trying to block content. You’re trying to treat children and adults with dignity online, and you’re trying to limit the amount of data collection. I think there’s a universal understanding that more regulation needs to happen online. Then you ask: what’s the best kind of regulation, and where is the harm stemming from? At EFF, we believe the harm mostly stems from the invasive corporate surveillance happening online now: the overcollection of data, the misuse of data, the oversharing of data. And that all goes back to serving people ads. It’s a corporate system meant to serve people ads, to keep people online, to serve them more and more ads. If you reduce the amount of data that companies are allowed to collect, and reduce the amount of data they’re allowed to share, you disincentivize that online behavioral advertising model, and that’s where we think a lot of the harm online stems from, rather than from showing sexually suggestive content to minors. So what does data privacy look like? I think it’s a broad requirement of data minimization: a company can only collect, use and share data that is essentially necessary to run the business, and it can’t do other operations outside of that. There are [at least] 13 comprehensive state privacy laws on the books today. A place where a lot of them fall short is that they don’t have a private right of action. So if your data privacy rights are violated by one of these companies, you yourself can’t go and vindicate your rights. You can’t go to court and sue the company. You have to rely on a state attorney general to maybe bring a lawsuit. If you give people the right themselves, there’s going to be a lot more enforcement.

FAW: How would comprehensive data privacy legislation protect First Amendment rights?

MT: I think data privacy laws also implicate the First Amendment, but they do it in different ways, ways that don’t actually violate the First Amendment. So I’ll take each one. With comprehensive data privacy legislation, you’re not burdening the speech rights of the user; you’re actually encouraging them. If companies don’t collect as much data, people feel more comfortable sharing, being online, speaking anonymously. So the First Amendment implications of data privacy legislation for the user are not there; privacy legislation actually helps the speech rights of users. Where the First Amendment comes into play with data privacy legislation is with the companies, because companies claim a right to sell or share data. We represent a lot of coders, and EFF believes that data is speech. To the extent a company shares data with another company, it is in effect speaking. The question is: is that a speech event that overrides the speech and privacy rights of individuals? In this case, we think data privacy legislation comports with the First Amendment. That corporate data sharing usually involves the private data of users, and First Amendment protection isn’t as strong when you’re talking about private data as opposed to public data. These corporate data-sharing agreements also exist for commercial purposes, and commercial speech usually gets a lower tier of scrutiny. Age-verification laws are usually analyzed under strict scrutiny, the strictest form of scrutiny. Data privacy laws, and this is what courts have said as well, are analyzed under intermediate scrutiny, which asks whether they are narrowly tailored to an important government interest. Data privacy legislation serves important, even compelling, interests: the government interest in privacy, in free speech, in the security of data, and in non-discrimination. We think, and most courts have held, that narrowly tailored, well-written privacy laws pass First Amendment scrutiny. There’s a long list — biometric data, health data, credit reports, broadband usage data, phone call records, purely private conversations — and data privacy laws that protect that kind of information have all been upheld as constitutional.

FAW: In what ways, if at all, do laws limiting minors’ access to pornography differ from other age-gating laws that aim to limit minors’ social media use? 

MT: I think there’s always going to be the problem that, to the extent the age-gate is trying to let minors in or keep them out, you’re also going to have to age-gate adults. That’s the big problem. You can never create an age-gate that only burdens the speech rights of minors. The age-gate itself is universal; it’s going to burden adults regardless of what content you’re trying to restrict, whether it’s pornography or social media. The initial age-gate always burdens adult speech, so that’s always going to be a problem with age-verification laws. If you get past that, I think the analysis is different for a social media site compared to a site that only serves pornography. There’s still a lot of content on those platforms, or there could be, that’s not sexual content. That would be legally protected speech for a minor, because it’s not harmful; it’s not obscene to a minor. But I agree that if there were some way to block only obscene content for minors, that analysis would be different. The problem is that the age-gate burdens adult speech no matter what.