Generative artificial intelligence (GAI) has revolutionized how we interact with technology, offering innovative tools and solutions. However, this technology also introduces significant risks, particularly to children. At the National Center for Missing & Exploited Children (NCMEC), we are deeply concerned about the potential for GAI to be used in ways that sexually exploit and harm children.
Over the past two years, NCMEC’s CyberTipline has received more than 7,000 reports related to GAI-generated child exploitation. These reports represent only the cases we are aware of; there are likely many more unreported or unidentified instances. As this technology becomes more pervasive and public awareness grows, we expect these numbers to rise.
GAI technology enables the creation of fake imagery, including synthetic media, digital forgeries, and nude images of children, through tools like “nudify” apps. These fabricated images can cause tremendous harm to children, including harassment, future exploitation, fear, shame, and emotional distress. Even when exploitative images are entirely fabricated, the harm to children and their families is very real.
The risks don’t stop at images. GAI is being used to manipulate children in other ways, such as generating realistic text that predators use to groom or exploit victims. Offenders have even leveraged GAI in sextortion cases, using explicit AI-generated imagery to coerce children into providing additional content or money. Financial sextortion is a growing threat, and GAI tools make it easier for offenders to target children.