A recent risk assessment by Common Sense Media has determined that the AI chatbot Grok, developed by xAI and integrated into the social platform X (formerly Twitter), is not safe for users under 18. The findings underscore growing concerns about the unchecked exposure of young people to potentially harmful content and interactions within AI-powered environments.
The Problem With Grok: A Perfect Storm of Risks
Grok differs from other AI tools not because it’s uniquely dangerous, but because it combines multiple high-risk features in one accessible package. Researchers from Common Sense Media conducted tests using simulated teen profiles across various settings, including the platform’s default mode, a so-called “Kids Mode,” and alternative behavior settings.
The assessment revealed critical failures in age verification: Grok does not reliably identify or restrict access for underage users. This means teens can engage with adult-oriented features, including erotic roleplay, sexually explicit conversations, and the generation of disturbing content. Even when “Kids Mode” is activated, the chatbot has been found to produce inappropriate and harmful material, such as sexually violent language, biased responses, and detailed instructions for dangerous activities.
Deepfakes and Viral Harm
The risks extend beyond direct interaction. Because Grok operates as an account within X, AI-generated content, including images and responses, can be shared publicly with ease. This ease of sharing has already enabled nonconsensual deepfake images, including some depicting minors, to be generated and distributed through the platform.
Compounding the issue is xAI's response: rather than removing abusive features, the company simply placed some of them behind a paywall, suggesting that monetization takes priority over safety.
Why This Matters: The Pace of AI Outstrips Oversight
The Grok case highlights a broader trend: AI development is outpacing ethical and regulatory safeguards. While many platforms struggle with content moderation, Grok’s integrated nature within a major social network dramatically escalates the potential for harm. The study raises urgent questions about how to protect young users in an environment where AI tools are designed to be persuasive, adaptive, and easily shareable.
“Grok’s ineffective Kids Mode, permissive content generation, and instant public sharing create a perfect storm for harm when teens are involved,” says Robbie Torney, Head of AI and Digital Assessments at Common Sense Media.
What Parents Need to Know
Common Sense Media advises families not to allow minors to use Grok at all. Parents should engage in open conversations with their children about social media and AI usage, providing clear boundaries and education. For now, the responsibility falls on caregivers, because many AI companies have yet to prioritize child safety.
