US Attorneys General call for national law to combat AI CSAM

The National Association of Attorneys General, the body through which all US states and territories collaborate on legal issues, has urged Congress to pass legislation prohibiting the use of AI to generate child sex abuse images.

In a letter [PDF] to the leaders of the Senate and the House of Representatives on Tuesday, the Attorneys General asked lawmakers to appoint an expert commission to study how generative machine-learning technology can be used to exploit children, with the goal of establishing new laws, rules, or regulations to protect against AI-generated child sexual abuse material (CSAM).

Advances in generative AI technology have made it easy to create realistic images that depict real people in highly compromising or disturbing made-up scenarios, or fake people in fictitious circumstances. Online safety groups and law enforcement agencies have noticed an increase in these so-called deepfakes, images or videos in which a real person’s face is pasted on someone else’s body to produce fake content. The same techniques can be applied to photos of children, or used to tweak existing CSAM to churn out more of that vile content.

Online text-to-image tools can also fabricate CSAM that looks realistic but does not depict an actual child. Although companies operating text-to-image services and software have strict policies and often block images containing nudity, users can sometimes find ways to bypass restrictions. Open-source models can also generate CSAM and are harder to police as they can be run locally.

Creating pornographic deepfakes depicting real people is illegal in at least some parts of the United States. Earlier this year, prosecutors in Long Island, New York, charged a man with creating and sharing sexually explicit deepfakes depicting “more than a dozen underage women,” using images he took from social media profiles. This machine-made material was shared on porn sites along with the victims’ personal information and calls for fellow perverts to harass them. The 22-year-old man was sentenced to six months in prison and given ten years’ probation with significant sex offender conditions.

However, no federal legislation prohibits making NSFW deepfakes without consent. The laws are murkier when it comes to completely fake AI-generated CSAM, in which the victims are not real people.

The National Association of Attorneys General argued that such material is not victimless, as the tools capable of generating these images were likely trained on actual CSAM, the creation of which harmed real children. Making more entirely synthetic CSAM could therefore fuel further child exploitation and spread more revolting and illegal content online.

The AGs therefore want laws or other tools to combat deepfake porn, whether it’s of real people manipulated into fake situations without permission, or totally fake stuff that was likely developed from actual illegal material.

“One day in the near future, a child molester will be able to use AI to generate a deepfake video of the child down the street performing a sex act of their choosing,” Ohio’s Attorney General Dave Yost said in a statement. “Graphic depiction of child sexual abuse only feeds evil desires. A society that fails to protect its children literally has no future,” he said.

The letter was spearheaded by South Carolina’s Attorney General Alan Wilson, according to the AP.

“First, Congress should establish an expert commission to study the means and methods of AI that can be used to exploit children specifically and to propose solutions to deter and address such exploitation,” the document states.

“Second, after considering the expert commission’s recommendations, Congress should act to deter and address child exploitation, such as by expanding existing restrictions on CSAM to explicitly cover AI-generated CSAM. This will ensure prosecutors have the tools they need to protect our children.”

Addressing fake CSAM is tricky. Typical techniques for detecting the illegal content rely on hashing known images that are circulating online. It is therefore difficult to identify new images, especially ones that have been doctored using software. The Attorneys General, however, believe lawmakers must act because the technology will continue to evolve.
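The limitation is easy to see in miniature. Below is a rough Python sketch of hash-based matching, not how any production scanner actually works: the filenames, the max_distance threshold, and the use of the open source imagehash library are illustrative assumptions. An exact digest changes if a single byte of the file changes; a perceptual hash survives resizing and light edits; but a freshly generated image matches nothing in the database either way.

```python
# Minimal sketch of hash-based matching against a set of known images.
# Filenames, the distance threshold, and the library choice are all
# assumptions for illustration; real systems match against vetted hash
# databases rather than local files.
import hashlib

from PIL import Image      # pip install pillow
import imagehash           # pip install imagehash


def exact_hash(path: str) -> str:
    """Cryptographic hash: changing a single byte changes the digest."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()


def perceptual_hash(path: str) -> imagehash.ImageHash:
    """Perceptual hash: tolerant of resizing, re-encoding, small edits."""
    return imagehash.phash(Image.open(path))


# Hypothetical database built from a known, already-circulating image.
known_exact = {exact_hash("known_image.jpg")}
known_perceptual = [perceptual_hash("known_image.jpg")]


def is_known(path: str, max_distance: int = 8) -> bool:
    """Flag a file if it matches the database exactly or perceptually."""
    if exact_hash(path) in known_exact:
        return True
    candidate = perceptual_hash(path)
    # Subtracting two ImageHash values gives the Hamming distance
    # between the 64-bit hashes; small distances mean similar images.
    return any(candidate - known <= max_distance for known in known_perceptual)
```

A wholly new AI-generated image has no counterpart in the database, so neither check fires, which is exactly the gap the AGs' letter raises.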

“We are engaged in a race against time to protect the children of our country from the dangers of AI. Indeed, the proverbial walls of the city have already been breached. Now is the time to act,” the letter concludes. ®
