xAI’s AI chatbot Grok is facing growing scrutiny after users on X prompted it to create sexualized images of real people, including minors. The images appeared publicly on the platform and, in some cases, depicted minors in minimal clothing. Some of the posts were later removed, but only after they had circulated. The images appeared to violate Grok’s own use rules, which ban sexual content involving children.
Grok is developed by xAI and is integrated directly into X, where users can tag the chatbot and request a text or image response. Unlike many AI tools that operate in private chats, Grok posts its responses publicly. As a result, harmful content can spread quickly if controls fail. In response to the incident, Grok posted that it had found gaps in its safety systems and said fixes were being made.
From Product Scandal to Regulatory Risk
After the images surfaced, the French government said Grok had produced clearly illegal content without consent and warned that the case may breach the European Union’s Digital Services Act, which applies to large online platforms. Under those rules, companies must reduce the risk of illegal content spreading, not just remove it after the fact. France has referred the matter to its public prosecutor.
This point matters for investors because the law focuses on systems and safeguards, not intent. Even if content is taken down later, weak controls can still lead to penalties. Because Grok posts directly to X, the line between AI tool and social platform blurs, a setup that may carry more regulatory risk than private, chat-based AI tools.
xAI has said Grok is meant to be more open than other AI models, and last year it added a Spicy Mode that allows some adult content. Even so, Grok’s rules ban sexual content involving minors and the use of real people’s likenesses in such material. The recent images suggest those limits did not always hold in practice.
What It Means for AI Investors
More broadly, the case highlights a shift in how AI risks are viewed. Safety failures are now treated less as design flaws and more as business risks. Regulators in Europe and Asia are moving faster, and child safety is a bright line. As AI tools scale, the cost of maintaining strong controls may rise with them.
For investors, the takeaway is simple. AI growth is no longer judged only by speed or reach. It is also judged by how well companies can deploy AI without triggering legal action or forced changes. As AI becomes more public and more powerful, the downside risk from weak safeguards is becoming harder to ignore.
We used TipRanks’ Comparison Tool to compare notable publicly traded companies behind chatbots such as Grok, ChatGPT, Claude, and Gemini. It’s an excellent tool for gaining a broader perspective on each stock and on the AI industry as a whole.


