Teen Girls Are Being Victimized by Deepfake Nudes. This Family Is Pushing for Protections

By The Associated Press

A mother and her 14-year-old daughter are advocating for better protections for victims after AI-generated nude images of the teen and other female classmates were circulated at a high school in New Jersey.

Meanwhile, on the other side of the country, officials are investigating an incident involving a teenage boy who allegedly used artificial intelligence to create and distribute similar images of other students — also teen girls — who attend a high school in suburban Seattle, Washington.

The disturbing cases have put a spotlight yet again on explicit AI-generated material that overwhelmingly harms women and children and is booming online at an unprecedented rate. According to an analysis by independent researcher Genevieve Oh that was shared with The Associated Press, more than 143,000 new deepfake videos were posted online this year, which surpasses every other year combined.

Desperate for solutions, affected families are pushing lawmakers to implement robust safeguards for victims whose images are manipulated using new AI models, or the plethora of apps and websites that openly advertise their services. Advocates and some legal experts are also calling for federal regulation that can provide uniform protections across the country and send a strong message to current and would-be perpetrators.

“We’re fighting for our children,” said Dorota Mani, whose daughter was one of the victims in Westfield, a New Jersey suburb outside of New York City. “They are not Republicans, and they are not Democrats. They don’t care. They just want to be loved, and they want to be safe.”

The problem with deepfakes isn’t new, but experts say it’s getting worse as the technology to produce them becomes more available and easier to use. Researchers have been sounding the alarm this year on the explosion of AI-generated child sexual abuse material using depictions of real victims or virtual characters. In June, the FBI warned it was continuing to receive reports from victims, both minors and adults, whose photos or videos were used to create explicit content that was shared online.

Several states have passed their own laws over the years to try to combat the problem, but they vary in scope. Texas, Minnesota and New York passed legislation this year criminalizing nonconsensual deepfake porn, joining Virginia, Georgia and Hawaii, which already had laws on the books. Some states, like California and Illinois, have only given victims the ability to sue perpetrators for damages in civil court, an option New York and Minnesota also allow.

A few other states are considering their own legislation, including New Jersey, where a bill is currently in the works to ban deepfake porn and impose penalties — either jail time, a fine or both — on those who spread it.

State Sen. Kristin Corrado, a Republican who introduced the legislation earlier this year, said she decided to get involved after reading an article about people trying to evade revenge porn laws by using their former partner’s image to generate deepfake porn.

“We just had a feeling that an incident was going to happen,” Corrado said.

The bill has languished for a few months, but there’s a good chance it might pass, she said, especially with the spotlight that’s been put on the issue because of Westfield.

The Westfield event took place this summer and was brought to the attention of the high school on Oct. 20, Westfield High School spokesperson Mary Ann McGann said in a statement. McGann did not provide details on how the AI-generated images were spread, but Mani, the mother of one of the girls, said she received a call from the school informing her nude pictures were created using the faces of some female students and then circulated among a group of friends on the social media app Snapchat.

The school hasn’t confirmed any disciplinary actions, citing confidentiality on matters involving students. Westfield police and the Union County Prosecutor’s office, which were both notified, did not reply to requests for comment.

Details haven’t emerged about the incident in Washington state, which happened in October and is under investigation by police. Paula Schwan, the chief of the Issaquah Police Department, said they have obtained multiple search warrants and noted the information they have might be “subject to change” as the probe continues. When reached for comment, the Issaquah School District said it could not discuss the specifics because of the investigation, but said any form of bullying, harassment, or mistreatment among students is “entirely unacceptable.”

If officials move to prosecute the incident in New Jersey, current state law prohibiting the sexual exploitation of minors might already apply, said Mary Anne Franks, a law professor at George Washington University who leads Cyber Civil Rights Initiative, an organization aiming to combat online abuses. But those protections don’t extend to adults who might find themselves in a similar scenario, she said.

The best fix, Franks said, would come from a federal law that can provide consistent protections nationwide and penalize dubious organizations profiting from products and apps that easily allow anyone to make deepfakes. She said that might also send a strong signal to minors who might create images of other kids impulsively.

President Joe Biden signed an executive order in October that, among other things, called for barring the use of generative AI to produce child sexual abuse material or non-consensual “intimate imagery of real individuals.” The order also directs the federal government to issue guidance to label and watermark AI-generated content to help differentiate between authentic material and material made by software.

Citing the Westfield incident, U.S. Rep. Tom Kean, Jr., a Republican who represents the town, introduced a bill on Monday that would require developers to put disclosures on AI-generated content. Another federal bill, introduced by U.S. Rep. Joe Morelle, a New York Democrat, would make it illegal to share deepfake porn images online, but it hasn’t advanced in months amid congressional gridlock.

Some argue for caution — including the American Civil Liberties Union, the Electronic Frontier Foundation and The Media Coalition, an organization that works for trade groups representing publishers, movie studios and others — saying that careful consideration is needed to avoid proposals that may run afoul of the First Amendment.

“Some concerns about abusive deepfakes can be addressed under existing cyber harassment” laws, said Joe Johnson, an attorney for ACLU of New Jersey. “Whether federal or state, there must be substantial conversation and stakeholder input to ensure any bill is not overbroad and addresses the stated problem.”

Mani said her daughter has created a website and set up a charity aiming to help AI victims. The two have also been in talks with state lawmakers pushing the New Jersey bill and are planning a trip to Washington to advocate for more protections.

“Not every child, boy or girl, will have the support system to deal with this issue,” Mani said. “And they might not see the light at the end of the tunnel.”