A Wisconsin man has been charged with sharing AI-generated child sexual abuse material (CSAM) with a minor, in one of the first criminal cases involving AI-generated CSAM. The man, Steven Anderegg, allegedly solicited requests for sexually explicit images of young children on Instagram and used the AI image generator Stable Diffusion to create them. Prosecutors say Anderegg had more than 100 images of suspected AI-generated CSAM in his possession, two of which he shared with a 15-year-old he met on Instagram.

Anderegg admitted to using Stable Diffusion to create the images in chat transcripts that Instagram’s parent company, Meta, flagged to authorities. He was arrested and charged with exposing a child to harmful material and sexual contact with a child under age 13; he pleaded not guilty and was released on a $50,000 bond. Stability AI, the company behind Stable Diffusion, said the images Anderegg created were likely made with version 1.5 of the software, which was developed and released by the AI startup Runway ML in October 2022.

Stable Diffusion 1.5’s developers have acknowledged that the model was trained on a dataset that included illegal child sexual abuse material, a finding that has alarmed experts. The National Center for Missing and Exploited Children (NCMEC) has reported a rise in AI-generated CSAM cases, with popular AI tools among those used to create the illegal content. Tech companies such as Stability AI say their products include built-in protections against misuse, but concerns remain about how effective those measures are.

Despite the increase in AI-generated CSAM cases, prosecuting people who create or share this material poses legal challenges. Possessing or generating explicit images of entirely fictional children, ones not based on real individuals, may fall into a legal gray area. Experts believe, however, that cases like the one in Wisconsin may provide a roadmap for prosecutors, allowing them to charge suspects with other crimes tied to the creation or distribution of AI-generated CSAM.

In response to the growing problem, NCMEC and tech companies are working to address the challenges posed by this new form of illegal content. Stability AI has not yet registered with NCMEC to report CSAM incidents, but the company says it is committed to engaging with the organization and to attending conferences on the topic. Experts also stress the need for proactive measures to prevent AI tools from being misused and for tech companies to stay vigilant about abuse of their platforms.

The Wisconsin case is part of a larger trend: several recent incidents have involved individuals using web-based AI tools to create illegal sexual abuse material. NCMEC has recorded thousands of reports of AI-generated CSAM, raising concerns that such reports will continue to climb. As the legal and ethical implications of AI-generated CSAM continue to be debated, experts emphasize the need for collaboration among law enforcement, tech companies, and advocacy groups to address this evolving issue effectively.
