
TL;DR: Spotting Fake News; YouTube Shorts Guide; Roblox Requires Face Scans

To our American readers, Happy Thanksgiving! 🦃

As families gather around tables this week, conversations will inevitably turn to current events and politics. But can any of us (teenagers or adults) actually tell truth from fiction online? Research suggests neither generation has this figured out. When the debates start this Thursday, perhaps the real question isn't who's right; it's how any of us know what's true from what we read on the internet.

In this week's Plugged In, we also tackle YouTube Shorts, the feature hiding inside the YouTube app that many parents don't fully understand until they've watched their child lose an hour to it. Our complete guide explains what it is, what the research shows and your three options.

Also this week: Roblox just announced that starting January, users will need to scan their faces to access chat features. The move comes amid 35+ lawsuits over child safety—but it's raising new questions about biometric data collection from children.

And if relatives are asking what to buy your child this holiday season, you might want to steer them away from AI-powered toys. More than 150 child development experts just issued a warning about chatbot-enabled toys marketed to children as young as two.

Here's what parents need to know.

Enjoying Plugged In? Forward this to a parent friend who's also trying to navigate digital parenting decisions. They can subscribe here.

NEED TO KNOW

AI Toys Warning: What Parents Need to Know This Holiday Season

Earlier this week, OpenAI suspended a toymaker after its Kumma bear (an AI-powered teddy that talks to children) provided inappropriate content to kids, including instructions on finding knives and lighting matches.

The incident prompted more than 150 child development experts to issue a warning on 20 November about AI toys marketed to children as young as two. These aren't toys with pre-recorded phrases. They're embedded with chatbot technology, the same AI that interacts like ChatGPT, but marketed as your child's "best friend."

Examples include Miko (£300-400/$400-500), Loona Petbot (£300/$400), and Smart Teddy (£40-70/$50-90). They maintain constant internet connections, use speech recognition, and collect extensive data about your child and family.

MIT professor Sherry Turkle puts it starkly: "There is only harm when a child has an AI 'friend.' The threat is existential."

Pediatric surgeon Dana Suskind explains that traditional play requires children to invent both sides of pretend conversations—practising creativity and problem-solving. "An AI toy collapses that work. We don't yet know the developmental consequences."

The University of Cambridge is conducting the UK's first systematic study on how children interact with these toys, with results expected early 2026.

Until we know for certain, here are the five main concerns experts have raised:

  1. They use technology that has harmed teenagers – the same AI linked to obsessive use and explicit conversations

  2. They exploit children's developmental trust – young brains can't distinguish AI companions from real ones

  3. They may undermine imaginative play – outsourcing creativity to AI rather than developing it

  4. They collect extensive data – audio recording, speech-to-text, sometimes video and facial recognition

  5. They replace human interaction – displacing the relationships children need to thrive

The full analysis includes conversation scripts for redirecting gift-giving grandparents, industry response, and what to consider when making toy decisions.

Read more: Wired Parents

After Babel, the Substack run by Jonathan Haidt (author of The Anxious Generation), has published a couple of articles on the subject.

Roblox Now Requires Face Scans for Chat

Starting January, Roblox users will need to verify their age through facial scanning or ID upload to access chat features. No verification? No chat, but games still work fine.

What's Happening:

Users scan their face for a few seconds and then AI estimates their age and sorts them into brackets: under 9, 9-12, 13-15, 16-17, 18-20, and 21+. You can only chat with your age group unless you add someone as a "trusted connection." For kids under 9, chat is switched off by default.

Roblox says facial images are deleted immediately after processing. The rollout started this week on a voluntary basis, becomes mandatory in Australia, New Zealand, and the Netherlands in December, and goes global in January 2026.

Why Now?

Roblox is facing 35+ lawsuits from families alleging the platform enabled predators to groom children. The platform reported more than 13,000 incidents of child exploitation in 2023. Countries including the Netherlands, Iraq, and Malaysia have launched investigations or are weighing bans.

The Concerns:

The AI isn't perfect. Adults are being classified as preteens, teenagers as under-9. Roblox admits the system is only accurate "within one or two years", meaning a 12-year-old could easily land in the wrong bracket.

Privacy advocates note that faces can't be changed like passwords. Whilst Roblox promises immediate deletion, asking children to scan their faces into an AI system will make quite a few parents uncomfortable.

Siblings in different age groups can't chat whilst playing together, but Roblox says it is working on fixes for families, expected early next year.

The Trade-Off:

With 13,000 exploitation reports, the safety problem is real and genuinely alarming. But the solution requires biometric data from children. You've got until January to decide: verify and keep chat, or skip it entirely.

Make Every Platform Work for Your Ads

Marketers waste millions on bad creatives.
You don’t have to.

Neurons AI predicts effectiveness in seconds.
Not days. Not weeks.

Test for recall, attention, impact, and more, before a dollar gets spent.

Brands like Google, Facebook, and Coca-Cola already trust it. Neurons clients saw results like +73% CTR, 2x CVR, and +20% brand awareness.

YOUTUBE SHORTS

YouTube Shorts: What Parents Need to Know About the Hidden Feature Hijacking Your Child's Attention

You handed your child your phone for "just a few minutes" whilst you finished grocery shopping. When you looked up, 45 minutes had passed. They were still swiping, eyes glazed, watching an endless stream of content you didn't recognise.

"Just one more," they said when you tried to take the phone back.

This wasn't YouTube as you remember it. No 10-minute educational videos about dinosaurs or baking tutorials. Just rapid-fire, 15-60 second clips. Swipe, watch, swipe, watch, swipe.

This week on the blog, we break down what YouTube Shorts actually is, why it's different from regular YouTube and how to approach it in your family.

YouTube Shorts Isn't Regular YouTube

YouTube Shorts launched in 2021 as YouTube's answer to TikTok. It's a feature within the regular YouTube app that serves vertical, short-form videos in an endless, swipeable feed.

Traditional YouTube has natural stopping points: you search for content, videos have defined lengths, and you decide whether to watch another. Shorts operates differently: an algorithmic feed automatically serves you the next video. There's no searching, no choosing, no stopping points.

The critical problem for parents: content flies by so quickly that you can't track what they're consuming. Ten minutes of Shorts might mean 20-40 different videos. And unlike TikTok, which parents often monitor closely or restrict entirely, YouTube Shorts lives inside the YouTube app, the same app you might have already approved for educational content.

What the Research Shows:

A 2024 study of 2,300 children found that those using short-form video 60+ minutes daily showed measurable decreases in classroom attention, declining reading comprehension, and increased homework completion time. The average session length for YouTube Shorts among 8-14 year olds was 47 minutes.

Children could recall specific details from only 12% of Shorts watched, compared to 76% recall for traditional YouTube videos. Children using short-form video more than one hour daily were 3.2 times more likely to report "extreme boredom" during unstructured activities, because they had lost tolerance for slower-paced, unstimulated time.

Some Shorts are educational. But a 2024 content analysis found that even starting from educational content, the algorithm typically led to entertainment or sensational content within 10-15 videos. The format undermines learning because effective education requires focused attention, time for information to consolidate and reflection time. Shorts provides none of these.

Your Three Options:

Unlike WhatsApp or gaming voice chat where configuration creates a middle ground, YouTube Shorts has a simpler decision tree: disable it entirely, allow it with very strict time limits, or replace YouTube with curated alternatives.

Most child development experts recommend disabling Shorts entirely for under-13s. For the regular YouTube app (13+), there's no built-in way to disable Shorts completely. You need third-party parental control software to block YouTube Shorts specifically whilst allowing regular YouTube.

Some families allow Shorts with very strict boundaries: maximum 15-20 minutes per day, only during specific windows, with active supervision. This requires consistent enforcement through iPhone Screen Time or Android Digital Wellbeing.

If You've Already Allowed Shorts:

Many parents discover these issues after their child has been using Shorts for months. It's not too late, but expect resistance. The full guide includes specific conversation scripts, how to implement changes together, and what pushback to expect.

Unlike traditional YouTube, which can genuinely enrich your child's education, Shorts operates on TikTok's playbook: endless, algorithmic, optimised for addiction. These boundaries aren't about control; they're about protecting their developing brain's ability to focus, learn and tolerate the quiet moments where real thinking happens.

You shouldn’t have to decide in the dark

THE DEBATE

Can Young People Tell Truth From Fiction Online?

Teenagers spend an average of 127 minutes daily on TikTok. For many, it's now their primary news source. Yet only 2% of British children and young people have the skills to spot fake news. As families gather over the holiday period, conversations inevitably turn to current affairs and world events. Research suggests that neither teenagers nor their elders can reliably evaluate what's actually true.

The Overconfidence Problem:

One in three children aged 8 to 17 think all or most information on social media is true. Yet although young people often express high confidence in their judgements, they struggle to detect false information and respond passively, often ignoring it rather than actively checking.

Adults aren't much better. At Stanford University, more than half of undergraduates concluded that an article from a hate group was "more reliable" than one from the mainstream American Academy of Pediatrics. Only 45% of UK adults are confident they can judge whether sources of information are trustworthy.

How They Actually Decide What's True:

Most young people don't fact-check by cross-referencing sources. Instead, they rely on gut instinct: "I trust her, so I can trust her post" or "the website looks professional, so it is trustworthy".

Students blindly trust search engines to put the most reliable results first, ignoring the sponsoring organisation and the article's author. Young people believe misinformation is only about world events and scams, making them vulnerable in other domains.

The Intergenerational Blind Spot:

Teens see grandparents as especially vulnerable to believing false information, but view their parents as more skilled than themselves. Both generations are vulnerable in different ways. Teenagers trust aesthetics and algorithms. Adults trust traditional-looking sources that may be misleading. Neither is as capable as they believe.

The Education Gap:

Ninety-four percent of US teens say schools should teach media literacy, yet only 39% received any instruction during 2023-24. In the UK, media literacy education is a "postcode lottery", often relegated to optional subjects or one-off assemblies.

Professional fact-checkers use "lateral reading" which means leaving the original site to search the organisation's name and determine where information comes from before reading it. But over a third of college students didn't use this technique even after instruction.

What Parents Can Do:

Rather than lecturing teenagers, try modelling behaviour: "Let me just check who runs this organisation..." Admit uncertainty: "I'm not sure about this source, let's look it up together." Question your own assumptions alongside theirs.

The Uncomfortable Truth:

Neither generation has this figured out. Digital nativity doesn't equal digital wisdom. Perhaps the real challenge isn't teaching young people to spot misinformation, it's admitting we're all vulnerable to it. The next time a family debate erupts over what's "really" happening, pause to ask: how do any of us actually know what's true?

Further Reading:

IN THE KNOW

For more articles from the week, head over to Wired-Parents.com

LOOKING AHEAD

IN TWO WEEKS, AUSTRALIA’S UNDER-16 SOCIAL MEDIA BAN TAKES EFFECT

  • It requires Facebook, Instagram, Snapchat, TikTok, X, YouTube, Reddit, Twitch, and Kick to prevent under-16s from holding accounts. Platforms face fines up to AU$50 million (£26m/$33m).

  • Meta will begin removing accounts from December 4. Snapchat started age verification this week using bank-linked ConnectID or facial scanning. However, significant questions remain: platforms haven't fully explained detection methods, gaming sites like Roblox remain exempt, and critics warn children will simply use VPNs or borrow family accounts.

  • This is the first real-world test of whether national age restrictions can work at scale. Malaysia announced plans to follow Australia's lead in 2026, whilst European countries are jointly testing age verification templates. The eSafety Commissioner will evaluate whether the ban achieves its goals—or creates unintended consequences.

THANKSGIVING TECH TRIVIA

  • TV Dinners Were Invented Because of Leftover Thanksgiving Turkey - In 1953, Swanson had 260 tons of unsold Thanksgiving turkeys. Inspired by airplane meal trays, salesman Gerry Thomas created the first TV dinner using the leftover turkey. They sold for 98 cents each and were an immediate hit. Thomas received a $1,000 bonus and a $300/month raise for the idea. (Source: Reader's Digest, The Pioneer Woman)

  • "Cyber Monday" Was Invented Because Home Internet Was Terrible - The term was coined in 2005 when retailers noticed people waited until they got back to work on Monday to shop online. Why? Because office internet was way faster than home broadband in the early 2000s. People would sneak shopping between spreadsheets because home connections were painfully slow! (Source: The Fact Site, Britannica)

  • Cyber Monday Now Beats Black Friday in Online Sales - In 2023, Cyber Monday hit a record $12.4 billion in US online sales, while Black Friday reached $9.12 billion online. The digital shopping day has officially overtaken its in-store predecessor. Globally, 73.1 million people shopped online on Cyber Monday. (Source: Hivenet, Skimlinks)

Know a parent who would find this useful?

Sharing is Caring

Or share this link: https://wired-parents.com


📚 NEW TO PLUGGED IN?

Get the free 103-page Age-by-Age Tech Guide: See what parents worldwide are deciding about phones, social media, screen time and gaming at ages 8-17.

Download Free Guide →

Get Plugged In with Wired Parents

We track what's happening with children and technology so you can make informed decisions for your family. Every Thursday: safety updates, new research, and what's happening worldwide.

What every parent in today's digital world needs to know.

Were you forwarded this email? Sign up here
