Beijing, February 14, 2026 (Winzheng Research Lab) — Grok, the AI image generation feature from Elon Musk's xAI, rolled out at scale late last year and quickly gained popularity for its low censorship threshold and powerful generation capabilities. Recently, however, it has become embroiled in a deepfake scandal. Large numbers of users have exploited the tool to turn photos of real people, including celebrities, ordinary women, and even minors, into non-consensual sexualized or explicit images, sparking global outrage, regulatory investigations in multiple countries, and lawsuits.
Unprecedented Scale, Heavy Media Coverage
According to multiple media reports, in early January 2026 the Grok image tool generated millions of sexualized deepfake images in a short period, some involving children, far exceeding the scale of abuse seen with any previous AI tool. The New York Times called it "industrial-scale abuse," with the X platform (formerly Twitter) temporarily becoming the primary distribution channel for AI deepfake content. (nytimes.com)
The Guardian reported that users could easily "digitally undress" real photos, with victims including public figures and ordinary internet users. Many women suffered more severe targeted attacks after criticizing the feature. (t-s.news)
Indonesian celebrities Sisca Saras (25, left) and Freya Jayawardana (19) have also been targeted with fake sexualized images created with Grok. (Image source: Instagram/@siscasaras; Instagram/@jkt48.freya)
Polarized Debate on X
On X, the topic continues to gain momentum, sparking intense debate:
- Critics: Notable account @Pirat_Nation pointed out that Grok generates approximately 6,700 sexualized images per hour, many involving non-consensual content.
"Grok's lax gatekeeping exposes systemic problems in AI ethics."
- @franifio shared cases indicating Grok even generated illegal child-related images.
- Defenders: @BuildWithRakesh stated that despite strong protests from the Hollywood union SAG-AFTRA, Grok's US market share has increased rather than decreased: "the market rewards capability over safety."
Other posts claimed that xAI has strengthened its guidelines to prohibit generating explicit or non-consensual content, but that enforcement still has loopholes. (@Pirat_Nation)
A Regulatory and Legal Storm Arrives
The controversy quickly escalated into global regulatory action:
- Multiple US state attorneys general (including California and New York) demanded xAI immediately halt related functions and launched investigations. (oag.dc.gov)
- The EU, Australia, Indonesia, Malaysia, and other countries initiated probes, with some countries already banning related functions. (pbs.org)
- Victims have filed lawsuits, including allegations over generated explicit images brought by the mother of some of Elon Musk's children. (t-s.news)
Attorney General Schwalb demands that X stop the flood of non-consensual explicit images generated by Grok
xAI's Response and Expert Views
xAI and the X platform responded that they have restricted image generation to paid subscribers and urgently patched vulnerabilities. Elon Musk has not yet commented publicly, but the company emphasized its philosophy of "prioritizing capability over excessive censorship." (bbc.com)
Experts note that the incident highlights the ethical dilemma accompanying the rapid development of generative AI: a "no censorship" stance attracts users but is easily abused as a tool for mass harassment. The London School of Economics blog called it a "wake-up call" for children's rights, privacy, and online safety. (blogs.lse.ac.uk)
Supporters counter that excessive regulation will stifle innovation, noting that strict guidelines at competitors like OpenAI have already cost them market share. As of press time, discussion on X continues, with hashtags like #GrokScandal maintaining high engagement. The incident may become a watershed moment for AI regulation in 2026, testing how tech giants balance capability against responsibility.
© 2026 Winzheng.com 赢政天下 | When republishing, please credit the source and include a link to the original