The Parents Television and Media Council (PTC) is calling on the U.S. Department of Education to reconsider its plan to promote the use of artificial intelligence (AI) in school classrooms.
In a public comment filed with the department, PTC Vice President Melissa Henson wrote:
“While we appreciate the Department of Education’s interest in preparing students for the future, we need only look at recent history to see how foolhardy it would be to rush the deployment of such a new technology into classrooms serving the most vulnerable and impressionable segment of our population. Over the last decade, schools have embraced new technology—often at the urging of tech companies—without sufficient evidence of educational benefit, without adequate guardrails, and without fully understanding the risks.
“The results have been deeply troubling.
“Laptops, tablets, and other digital tools were sold to parents and educators as the keys to personalized learning and higher engagement. Instead, we’ve seen increased distraction, declining core skills, and rampant exposure to harmful content—even on school-issued devices. Common Sense Media reports that more than 40% of teens who viewed pornography at school saw it on school-issued devices. Students quickly learned to bypass filters, and schools have proven unable to fully protect them from explicit or dangerous material.
“These harms have been compounded by the mental health crisis linked to screen overuse. A study published in the Journal of the American Medical Association found that children showing signs of screen addiction are at significantly greater risk for suicide. Social psychologists such as Jonathan Haidt have documented the doubling and tripling of depression, anxiety, and self-harm among teens—especially girls—following the rise of smartphones and social media. We are still grappling with the fallout of that uncontrolled experiment.
“Now, we are poised to make the same mistake with AI. Artificial intelligence is far more powerful, less predictable, and potentially more invasive than earlier educational technologies. We already have disturbing evidence of harm:
- AI-powered chatbots that engage in sexually explicit conversations with self-identified minors.
- AI-driven deepfake pornography targeting teen girls.
- AI algorithms that can amplify harmful content as effectively as social media feeds, or even more so.
“The idea of embedding AI into the daily lives of children without first establishing robust, enforceable safeguards is reckless. The federal government should not be encouraging early adoption of AI in classrooms until we can guarantee:
- Demonstrable educational benefit, supported by peer-reviewed research, not industry marketing.
- Stringent privacy protections, ensuring student data is never harvested, sold, or repurposed.
- Content safety controls that cannot be easily bypassed.
- Age-appropriate design standards that protect against grooming, explicit content, and exploitation.
- Ongoing independent oversight with authority to halt use if harms emerge.
“We have been here before. The promises of ‘ed-tech’ have too often come from Silicon Valley marketing teams rather than from solid pedagogy, evidence-based research, or child development needs. This time, the stakes are even higher. AI can process, mimic, and manipulate human interaction at a scale and speed no prior technology could.
“We must resist the temptation to roll this out first and regulate later. If we fail to learn from the past decade’s mistakes, we risk creating another lost generation—children whose cognitive, emotional, and social development will be shaped, and potentially harmed, by untested AI tools.
“We urge the Department to prioritize rigorous, independent evaluation and child protection measures before promoting AI use in K–12 settings. Our children’s safety, privacy, and mental health must come before Big Tech’s market share.”