
In this week's Plugged In by Wired Parents, AI is reshaping childhood in ways most parents haven't noticed yet. YouTube Kids is drowning in AI-generated videos with robotic voices, cheap animation, and potentially harmful content that somehow slips past filters.
New research reveals AI chatbots are failing children spectacularly. SproutBench, a new comprehensive test of AI safety for kids, found a disturbing pattern: the more engaging and human-like a chatbot becomes, the more dangerous it proves for young users. The most interactive systems consistently failed age-appropriate safety checks.
Meanwhile, your child's face is becoming their digital passport. AI age verification technology can now estimate a teenager's age from a selfie to within about two years, potentially solving online safety enforcement—while creating massive new privacy concerns about collecting children's biometric data.
TL;DR: Dodgy AI kids’ videos, unsafe chatbots & face-scan age checks
First time reading? Join other parents looking to keep themselves informed. Sign up here.
Need To Know
🎞️ AI-Generated Kids’ Videos Flood YouTube
AI-generated kids’ videos are flooding YouTube, raising concerns over quality, safety and media literacy for families.
AI-made videos with robotic narration and low-quality content are spreading fast on YouTube.
Many mislead children with poor facts, unsafe storylines, or jarring content.
Copyright misuse and lack of oversight are growing issues.
Experts recommend co-watching, screen time limits, and media literacy lessons.
The trend highlights an urgent need for stricter platform policies.
Parents browsing YouTube Kids may notice an unsettling trend: an explosion of AI-generated children’s videos. These clips often use text-to-speech voices, cheaply produced animation, and recycled scripts. While appearing harmless at first, they can contain misinformation, confusing storylines, or inappropriate themes that slip past filters.
Child development experts are warning that such content may disrupt children’s learning and expectations of what’s “real.” Unlike traditional children’s media, these videos are not vetted by educators, psychologists, or producers trained in child safety. There’s also the issue of copyright abuse, with AI churning out thinly disguised versions of popular shows or characters.
While YouTube has pledged to strengthen moderation, AI tools are outpacing detection. Parents are advised to use parental controls, set time limits, and most importantly, watch alongside their children to spot red flags. Teaching children to question what they see online is also becoming a critical skill.
The rise of AI-generated children’s content is part of a wider challenge: how do platforms prevent their algorithms from pushing harmful, addictive, or misleading media to young audiences?
🔗 Source: Parents
🧒 Putting Children at the Centre of AI Ethics
New research puts children at the centre of AI ethics, with a benchmark showing how even the most advanced chatbots can fail younger users.
SproutBench: a new benchmark that tests AI safety specifically for children.
Covers three age groups (0–6, 7–12, 13–18) with 1,283 prompts about privacy, emotional dependency, and harmful behaviours.
Tested 47 major language models, finding serious safety gaps—particularly in more interactive systems.
Highlights an inverse safety trend: the more engaging a model is, the less age-appropriate it often becomes.
Seen as a critical tool to guide safer, child-focused AI design.
For years, AI safety has been tested with adults in mind, leaving children’s needs on the sidelines. That may now be changing. Researchers have released SproutBench, a benchmark designed to evaluate how large language models (LLMs) respond to children across different stages of development.
The benchmark applies over 1,200 carefully designed prompts, ranging from requests for comfort to questions about privacy, to assess whether AI responses are developmentally appropriate. Early findings suggest a troubling pattern: the more interactive and human-like a model is, the less safe it often proves to be for children.
SproutBench is part of a growing push—including tools like Safe-Child-LLM and MinorBench—to build AI systems that are not only powerful but also responsible and age-aware. For parents, the research underscores both the promise and the risks of AI becoming a fixture in children’s lives.
Are AI Companies Actually Using SproutBench?
As of now, there's no public indication that AI companies are integrating SproutBench into their internal evaluation workflows. SproutBench is a recent academic benchmark (published in August 2025) crafted to evaluate large language models across age-specific vulnerabilities—such as emotional dependency, privacy risks, and imitation of harmful behavior—with 1,283 targeted prompts across three age groups.
That said, SproutBench exists alongside similar child-focused tools—like Safe-Child-LLM (June 2025) and MinorBench (March 2025)—which also highlight how current models often fail to adequately refuse unsafe prompts from young users.
These frameworks signal a growing push for child-aware AI design, but so far, they are confined to academic and research settings. Industry adoption has yet to be publicly confirmed.
🔗 Source: arXiv
🔞 AI Age Verification Tech Advances Amid Global Child Protection Push
AI age verification technology is moving from experimental to deployment-ready, potentially solving the enforcement challenge that has plagued digital child protection efforts.
Facial recognition AI can estimate the ages of 13- to 24-year-olds to within about two years
Technology trained on millions of faces to verify online age requirements
Represents shift from honour-system age verification to technical enforcement
Addresses growing global regulatory demand for child protection online
Raises privacy concerns about biometric data collection from minors
The global movement to protect children online has helped create significant advances in AI-powered age verification technology. Companies like Yoti are deploying facial recognition systems that can estimate the age of users between 13 and 24 to within a two-year margin of error. This represents a substantial leap from the current honour-system approach that relies on users self-reporting their age.
The technology works by capturing a selfie and using AI algorithms trained on millions of faces to analyse facial features indicative of age. While a two-year accuracy range might seem broad, it's sufficient for most regulatory purposes: distinguishing between a 12-year-old and a 14-year-old, for instance, even with the margin of error.
This technological development arrives at a crucial moment as governments worldwide implement increasingly strict requirements for age verification on digital platforms. However, the solution introduces new considerations around biometric privacy, data storage, and the potential for discrimination in algorithmic assessment. Parents will need to weigh the child protection benefits against concerns about their children's biometric data being processed and stored by third-party verification services.
How do you feel about your child's face becoming their password?
In The Know
Japanese town puts a two-hour daily limit on smartphones
Basic Phones: A Guide for Parents - not all phones are available in every country, so check compatibility where you live
Spotify is adding DMs
AI usage plummets in the summer when school’s out
…and, not surprisingly, AI tools may weaken critical thinking skills by encouraging cognitive offloading, a study suggests
For more articles from the week, head over to Wired-Parents.com
Apps You Need To Know About
⚠️ The Apps Flying Under Your Parental Radar

Image credit: KidsOnlineWorld
While you're monitoring Instagram and TikTok, your children are three platforms ahead on apps you've never heard of.
NGL, recently fined $4.5 million by the FTC, generates fake anonymous messages to keep children engaged—while facilitating cyberbullying. Secret Calculator apps, used by 42% of teens, look like innocent math tools but hide photos, videos and communications in digital vaults.
A UK resource called Kids Online World has a comprehensive list of apps actually popular with children today. While some are "incredibly popular", others are "incredibly dangerous", and those categories often overlap. The site tells you whether to treat each app with caution or avoid it at all costs (spoiler: none are totally safe).
The real problem isn't just staying current with new apps. It's that children don't always abandon platforms; they layer them. Instagram for public image. Snapchat for close friends. Discord for gaming. NGL for anonymous feedback. Each serves a specific social function most parents don't understand.
We've identified three categories of apps flying under parental radar: anonymous interaction platforms, stealth apps disguised as calculators or utilities, and niche community spaces where kids connect around specific interests. Each category carries distinct risks and serves distinct needs.
Rather than playing digital whack-a-mole, there's a better approach to understanding what drives kids toward these hidden platforms and how to address those needs more safely.
🆘 Wired Extra: This Week's Quick Hits
✅ Word Your Kid Probably Knows: “Mid”
Something that’s mediocre or not worth the hype.
🔧 Tech Tip for Tired Parents: Disable Autoplay
Turn off autoplay on YouTube and Netflix to reduce passive watching.
📴 Offline Challenge of the Week: Time Capsule
Create a ‘2025 Memory Box’ together filled with physical items. Here are some ideas.
Et cetera

Photo by Steve Adams on Unsplash
🥱 You need to be bored. Here’s why.
📦 Time capsule sealed by Princess Diana in London opened up after 34 years
🎁 The adults addicted to blind box toys
💁♀️ 100 tricks to sort out your life
PUZZLES & TIPS
Answers To Last Week’s Brain Teasers
Guess the next three letters in the series GTNTL.
I, T, S. The complete sequence is the first letter of every word in the sentence.
What tastes better than it smells?
A tongue.
Congrats to Colleen from New York for the correct answers!
This Week’s Brain Teasers
No matter how little or how much you use me, you change me every month. What am I?
What runs all around a backyard yet never moves?
Answers next week!
SHARING IS CARING
No Parent Left Behind
If you think another parent could benefit from staying informed about global digital parenting developments, please forward this email so they can subscribe.
Every parent deserves access to balanced information to make confident decisions for their family.
Or copy and paste this link to others:
Get Plugged In with Wired Parents.
We track digital parenting developments globally so you don't have to.
Every Thursday, we deliver what matters most from dozens of sources across multiple countries. We present all perspectives so you can confidently choose your approach - delay, manage, allow, or refuse.
Your Digital Parenting News Filter.
Were you forwarded this email? Sign up