Grok’s Growing Problem: Why AI’s Descent into Explicit Imagery Is Sparking Global Backlash

An investigation reveals Grok AI is generating graphic sexual content beyond X, sparking global backlash, legal scrutiny, and urgent calls for stronger AI safeguards.

By: Serena Zehlius, Editor

This month’s explosive WIRED investigation has lifted the veil on a troubling new chapter in artificial intelligence: Grok, Elon Musk’s AI chatbot developed by xAI, is not just being used to generate risky images on social media — it’s creating extremely graphic sexual imagery on its own website and apps that are far worse than what’s appearing on X (formerly Twitter).

The controversy surrounding Grok is raising questions about corporate responsibility, the limits of generative AI, and the need for modern laws that protect people from digital exploitation and harassment.

At the heart of the controversy is Grok Imagine, the AI’s image and video generation tool. Unlike the main chatbot interaction on X — where sexually suggestive images and nonconsensual digital “undressing” of photos have already alarmed researchers — the Grok Imagine model on Grok’s official site produces explicit videos and images that include graphic sexual content and, in some cases, sexual acts between adults and minors.

These creations go well beyond simple nudification, entering territory that experts and legal authorities in many countries consider illegal: Grok is being used to produce child sexual abuse material.

A Deep Dive into the Findings

A team at AI Forensics reviewed a sample of over 1,200 archived Grok Imagine links that had been either indexed by search engines or shared on deepfake forums. Their findings were alarming:

  • Roughly 800 of the archived links contained sexual imagery — many in video form — that included full nudity and explicit content of adults.
  • Some of the materials involved photorealistic scenes with explicit acts.
  • The researchers estimated that about 10% of the collected content may qualify as child sexual abuse material (CSAM) under the laws of many countries.

These are not abstract hypotheticals. Among the archived outputs were depictions of fully naked individuals engaged in sexual activity, imaginary portrayals of public figures, and violent imagery framed as entertainment. These images challenge the very definition of “acceptable use” in AI and trigger legal flags across jurisdictions where CSAM is strictly criminalized. 

CSAM

To put CSAM detection into perspective: Resist Hate uses a service that scans images posted in our online communities for CSAM.

If an image in our community is flagged, we are notified so it can be removed immediately and reported through a portal operated by the anti-CSAM organization we partner with. We volunteered for this program to help make the internet a safer place for children and teens.

AI on Social Platforms: A Parallel Crisis

Grok’s problems are not limited to its standalone Imagine platform. On X, users have been exploiting the bot to undress photos of real people — including private citizens and, in some cases, minors — generating sexually suggestive or explicit deepfakes with alarming ease.

A Trinity College Dublin study found that nearly 75% of sampled posts where Grok was used had users prompting the system to alter images of women into more revealing or sexualized portrayals. 

Even worse, prompting techniques and “jailbreak” methods are circulating in online communities, teaching others how to evade content filters and extract inappropriate results from the tool. Critically, these exchanges are happening in public posts, which continue to be visible to wide audiences on X. 

A Regulatory Earthquake

The public outrage is now a policy earthquake. Governments and watchdogs from Europe to Asia are pushing back:

  • The European Commission has ordered X to preserve all data related to Grok’s outputs through 2026, anticipating future inspections under the EU’s Digital Services Act.  
  • In India, the Ministry of Electronics and Information Technology blasted X’s response as “inadequate” and demanded a detailed plan for content moderation, especially around AI-generated imagery violating women’s privacy.  
  • French government ministers have reported Grok’s sexual content to prosecutors, asserting that it includes illegal material that violates national laws.  

These moves reflect a growing global consensus that existing laws and platform policies are outpaced by reality. AI’s rapid integration into everyday online tools has revealed a gap that regulators are now racing to close.

The Harm Is Real

Experts emphasize the real-world impact of unchecked AI content generation. Law professor Clare McGlynn, an authority on image-based sexual abuse, notes that allowing free-for-all AI pornography and sexually explicit deepfakes normalizes harmful content and can deepen societal harm — especially to women and children. 


This isn’t just about digital imagery; it’s about consent and safety. Deepfakes and sexually manipulated images have been weaponized before — including an explosive controversy involving AI-generated images of a global pop star that once spread across social platforms before removal.

Those earlier incidents sparked calls for change, and Grok is now a painful reminder that technology is evolving far faster than our legal frameworks. 

Corporate Responsibility and the Path Forward

xAI and Elon Musk’s companies have responded in fragments — promising moderation, pointing to policies against CSAM, and stating that illegal content will be handled like other unlawful material. But critics say that words are not enough without robust, transparent enforcement. The fact that Apple and Google continue to host the Grok app in their stores without stronger protections has amplified calls for accountability from platform owners as well. 

This moment is a crossroads for AI. The tools that once promised creativity and convenience are now being harnessed in ways that jeopardize people’s privacy and violate societal norms. The backlash against Grok isn’t about censorship — it’s about demanding ethical guardrails that protect people from exploitation and abuse.

As the global regulatory chorus grows louder, one thing is clear: we need laws, enforcement, and ethical standards that rise as fast as the technology they govern. For individuals and societies alike, this fight isn’t theoretical; it’s about reclaiming safety and consent in the digital age.

Serena Zehlius is a passionate writer and Certified Human Rights Consultant with a knack for blending humor and satire into her insights on news, politics, and social issues. Her love for animals is matched only by her commitment to human rights and progressive values. When she’s not writing about politics, you’ll find her advocating for a better world for both people and animals.