Protect our Children

The Grok account has posted thousands of “nudified” and sexually suggestive images per hour. Even more disturbing, Grok has generated sexualized images and sexually explicit material of minors.

X’s response: Blame the platform’s users, not us. The company issued a statement on Jan. 3, 2026, saying that “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.” It’s not clear what action, if any, X has taken against those users.

As a legal scholar who studies the intersection of law and emerging technologies, I see this flurry of nonconsensual imagery as a predictable outcome of the combination of X’s lax content moderation policies and the accessibility of powerful generative AI tools.

Targeting users

The rapid rise of generative AI has led to countless websites, apps and chatbots that allow users to produce sexually explicit material, including “nudified” versions of real children’s images. But these apps and websites are not as widely known or used as major social media platforms like X.

State legislatures and Congress were relatively quick to respond. In May 2025, Congress enacted the Take It Down Act, which makes it a criminal offense to publish nonconsensual sexually explicit material of real people. The law criminalizes the nonconsensual publication of “intimate visual depictions” of identifiable people, whether the images are authentic or generated by AI or other computer means.
