
Dawn Hawkins: AI Chatbots Are Grooming, Radicalizing, and Harming Children—Congress Must Act


September 16, 2025.

Today, I submitted written testimony before the U.S. Senate Judiciary Subcommittee on Crime and Counterterrorism for the hearing on the harms of AI chatbots. The committee asked me to focus my remarks on one central thesis: online abuse inflicts real-world harm, and our laws must recognize it as such.

In preparing this testimony, I dug deep into the research literature—some of it brand new, much of it under the radar. The footnotes in my written testimony are filled with studies that deserve wider attention. I encourage colleagues and allies to take a look. In the weeks ahead, I’ll highlight some of the most important findings so we can ground our advocacy in the strongest evidence possible.

Here are some of the points I shared:

  • Digital abuse is not “just pictures” or “just a game.” As I told the committee: “They froze, their hearts raced, they felt violated. They couldn’t sleep. These are the same trauma responses as in-person assault. Online abuse is abuse.” Victims of CSAM, sextortion, forged or deepfake abuse images, or even virtual reality assaults describe the trauma as indistinguishable from in-person violation.

  • The pathways from online harm to offline destruction are clear. Sextortion alone has been linked to more than 50 teen suicides in the U.S. since 2021. Families describe living in fear, changing schools, and uprooting their lives. As I testified: “Digital victimization sets off the same destructive trajectories as offline abuse: withdrawal, health decline, family conflict, and in too many cases, escalation to self-harm or in-person exploitation.”
  • Chatbots multiply these risks. They simulate intimacy without empathy or safeguards, making them especially dangerous for adolescents wired for belonging. Reports and lawsuits already show chatbots encouraging self-harm, sexual roleplay with minors, and even violence toward parents. Extremists are also manipulating bots to normalize hate and radical ideologies. As I warned: “What looks like ‘just words on a screen’ is actually a steady drip of persuasion conditioning the next generation of violence. It is already happening.”
  • Adolescents are uniquely vulnerable. With immature impulse control and a heightened need for social approval, teens are easy targets for chatbot “friendship” that turns manipulative or abusive. Some even say they prefer chatbots to people, a substitution that can distort healthy development.
  • Congress has urgent options. I recommended the following immediate steps, though I acknowledge they are not comprehensive:

>>> Pass the App Store Accountability Act (H.R. 3149) to require accurate ratings, truthful descriptions, and parental consent before minors download apps.

>>> Pass the Kids Online Safety Act (KOSA) to impose a duty of care on companies to design for child safety.

>>> Establish whistleblower protections so employees inside AI companies can safely expose harms.

Read More Here.
