Google’s Gemini AI Labeled ‘High Risk’ for Kids in New Safety Review

A new report is sounding the alarm about Google's Gemini AI products and how they interact with younger users. Common Sense Media, a nonprofit that rates technology and media for kids, has labeled Gemini as "high risk" for children and teens, despite Google's efforts to add safety features.

The report comes at a time when AI safety for minors is under the microscope. Recent lawsuits have linked AI chatbots to teen suicides, and Apple is reportedly considering Gemini as the engine for its next-gen Siri, which could put it in millions of homes.

Gemini’s kid tiers are just adult tiers with filters

According to Common Sense Media, Google’s “Under 13” and “Teen Experience” tiers for Gemini are not child-first products. They’re just the adult version of Gemini with some safety filters added on. That’s not enough, the group says.

Robbie Torney, the nonprofit’s senior director of AI programs, said kids at different stages of development need tailored guidance. A one-size-fits-all chatbot, he said, puts kids at risk of getting content they’re not ready for.

Risks include unsafe content and mental health advice

Gemini could still expose kids to inappropriate or unsafe information, including content about sex, drugs, alcohol, and sensitive mental health topics. For parents, that last category is especially alarming.

The link between AI chatbots and teen well-being isn't theoretical. This summer, OpenAI was sued after a 16-year-old boy spent months discussing his suicide plans with ChatGPT before taking his own life. Character.AI, another chatbot service, has faced similar lawsuits. Against that backdrop, Common Sense's "high risk" label carries real weight.

The Apple connection makes it even more serious

The report comes as Apple is reportedly planning to use Gemini as the large language model for its upgraded Siri in 2026. If that happens, teens will be interacting with Gemini more often, whether for homework, health, or personal struggles. Without additional safeguards, experts warn, risks will multiply.

Common Sense says kids and teens need guidance that grows with them. What's safe for a 16-year-old might be totally inappropriate for an 11-year-old, but Gemini doesn't yet reflect those differences in a meaningful way.

Google responds but concedes gaps

Google responded to the report by highlighting its safety features. The company said it has strict policies for users under 18 and works with outside experts to test Gemini continually. It also pointed out that Gemini doesn’t present itself as a “friend” or form pseudo-relationships with users, an issue with some other chatbots.

But Google admitted that some responses had slipped through its filters. In its statement, the company said it had already added new protections to close those gaps. It also noted that some of the examples cited by Common Sense appeared to reference features not available to users under 18, but said it could not confirm this without access to the test data.

How Gemini stacks up against rivals

Common Sense has been evaluating AI platforms across the industry, and Gemini isn't the only one drawing concern. Meta AI and Character.AI were labeled "unacceptable," an even more severe rating than "high risk." Perplexity received the same "high risk" designation as Gemini, while ChatGPT landed at "moderate risk." Anthropic's Claude, which is designed for adults 18 and older, was rated "minimal risk."

This broader comparison underscores that while Gemini isn’t the worst offender, it also hasn’t hit the bar for child safety that many parents and educators are hoping to see.
