The National Center for Missing and Exploited Children (NCMEC) has reported a significant increase in child sexual abuse material (CSAM) created with AI image generators. In 2023, NCMEC received 36.2 million reports to its CyberTipline, nearly 5,000 of which were attributed to generative AI. That figure is expected to keep growing as companies improve their ability to detect and report such content.

Reports of AI image generators being used to create illegal sexual abuse imagery have surged, including deepfake nude photos of students in schools across the United States. Some of the most popular new AI tools have been trained on data containing CSAM, and prosecutors have already charged individuals in criminal cases involving AI-generated CSAM, underscoring the need for stronger efforts to combat the spread of such content.

A small but growing number of generative AI companies have begun cooperating with NCMEC to track and flag potential CSAM. OpenAI, Anthropic, and Stability AI have joined the effort to identify and report illegal content. The collaboration is yielding insight into how AI-generated CSAM is produced and distributed, whether through text prompts or by manipulating existing images. Most of this content spreads through mainstream social media platforms, but it often originates from open source models or off-platform sources.

With AI technology advancing and proliferating rapidly, there is concern that the volume of AI-generated CSAM could escalate, overwhelming law enforcement agencies already struggling to keep up. Distinguishing AI-generated CSAM from real imagery is becoming increasingly difficult, complicating efforts to identify and remove illegal material. And while some major generative AI players have agreed to work with NCMEC, smaller platforms that also distribute such content have not yet joined the effort.

In the first quarter of 2024, NCMEC received approximately 450 reports per month of AI-generated CSAM, a sign that the trend is still escalating. The organization expects that number to grow further as the technology becomes more sophisticated and widespread. Coordinated efforts among tech companies, law enforcement, and organizations like NCMEC will be crucial to addressing this trend and protecting vulnerable individuals from exploitation.
