A July 2025 article in the New York Times reported that the National Center for Missing and Exploited Children (NCMEC) claimed to have received over 485,000 tips about AI-generated child pornography in the first six months of the year, a dramatic increase over 2024. Most of these came from Amazon AI Services. But as documented in a report from the Center for Internet and Society at Stanford Law School, there is no evidence that any of the images flagged by Amazon’s algorithms were actually AI-generated, and many were not child pornography at all. Critics contend that NCMEC has long justified its vast federal funding (over $40 million a year) with misleading statistics that generate moral panic over pervasive child exploitation. Publicizing the raw number of “tips” it receives fails to distinguish new material from re-circulated old images, which account for the bulk of reports. Now that AI features prominently in the news as a general threat to human safety, it may have become the latest vehicle for stirring up public hysteria over sexual threats.
However, a study of over 700 adults who admitted sexual attraction to children found that most had never encountered AI-generated images of children; their most commonly used materials were instead sexualized cartoons or drawings of minors. The study found no evidence that engagement with sexual fantasy materials was associated with self-reported willingness to commit sexual offenses involving children. Rather, those who were more sexually satisfied reported lower willingness to engage in such behaviors, while those who held offense-supportive beliefs reported higher willingness. To the extent that fantasy materials contribute to an individual’s sexual satisfaction, they may reduce the risk of offending; to the extent that they foster beliefs that children enjoy sex with adults, they may increase it.