A report by Stanford researchers cautions that the National Center for Missing and Exploited Children doesn’t have the resources to help fight the new epidemic.
A new flood of child sexual abuse material created by artificial intelligence is threatening to overwhelm authorities who are already held back by antiquated technology and laws, according to a report released Monday by Stanford University’s Internet Observatory.
Over the past year, new A.I. technologies have made it easier for criminals to create explicit images of children. Now, Stanford researchers are cautioning that the National Center for Missing and Exploited Children, a nonprofit that acts as a central coordinating agency and receives a majority of its funding from the federal government, doesn’t have the resources to fight the rising threat.
The organization’s CyberTipline, created in 1998, is the federal clearinghouse for all reports of child sexual abuse material, or CSAM, online, and is used by law enforcement to investigate crimes. But many of the tips it receives are incomplete or riddled with inaccuracies, and its small staff has struggled to keep up with the volume.