
Happy Thursday everyone, Happy Chinese New Year to those who celebrate, and welcome to this week's edition of Wired Parents.
This Week: UK Regulates AI; Spain Probes Platforms; Kids Befriend Bots
Europe escalated enforcement against AI-generated child abuse material this week. The UK brought AI chatbots under the Online Safety Act on Monday. Spain launched a criminal investigation against X, Meta, and TikTok on Tuesday. Meanwhile, new research shows 31 per cent of children aged 11-16 consider AI chatbots to be friends, with 86 per cent acting on bot advice.
This week is also the final part of our series Can Childhood Survive Social Media. Read it from the beginning →
There's a lot to unpack, so grab a coffee and settle in.
As always, get in touch if there's anything you'd like us to cover. Till next week,
—Heidi
Share Wired Parents with a parent friend trying to make sense of all this. Subscribe here
COUNTRY TRACKER
Our weekly round-up of what's happening around the world. More countries move on children's social media restrictions.

Two more European countries moved forward on social media restrictions for children this week.
🇵🇹 PORTUGAL — FIRST READING PASSED, 12 FEBRUARY
Portugal's parliament approved a bill on its first reading on 12 February, requiring explicit parental consent for children aged 13 to 16 to access social media. The bill sets 16 as the minimum age for autonomous access to platforms including Instagram, TikTok, and Facebook, with 13-to-16-year-olds permitted access only with express and verified parental consent. Consent will be managed through a national system called the Digital Mobile Key (DMK), which will also help enforce the existing ban on under-13s accessing social media, video- and image-sharing platforms, and online betting sites.
The bill passed with 148 votes in favour, 69 against, and 13 abstentions, and can still be amended before the final vote. Fines for non-compliant platforms can reach two million euros or two per cent of annual global turnover.
Portugal joins France (under-15 ban approved by the lower house in January), Australia (under-16 ban in force since December), and a growing list of countries that have moved from debate to legislation.
🇫🇮 FINLAND — PM BACKS UNDER-15 BAN
Finland's Prime Minister Petteri Orpo said in January that he supported banning social media use for children under 15, citing excessive screen time as one of the biggest obstacles to getting children to be more physically active. Two-thirds of Finns surveyed back a ban or restriction on social media for under-15s, a near ten-percentage-point increase compared to a similar survey conducted last summer. Orpo's office is preparing a preliminary report on the matter, with the government aiming to proceed as quickly as possible before the legislative term ends in spring 2027.
Find out more over at Wired-Parents.com
First a word from our sponsor:
Smart starts here.
You don't have to read everything — just the right thing. 1440's daily newsletter distills the day's biggest stories from 100+ sources into one quick, 5-minute read. It's the fastest way to stay sharp, sound informed, and actually understand what's happening in the world. Join 4.5 million readers who start their day the smart way.
NEED TO KNOW
Europe Moves From Regulation To Criminal Enforcement

UK closes the loopholes
Prime Minister Keir Starmer announced Monday that AI chatbot providers including ChatGPT, Google Gemini, and Microsoft Copilot will be brought under the Online Safety Act, required to comply with illegal content duties or face fines and potential blocking. The regulation extends to setting minimum age limits for social media platforms, restricting features such as infinite scrolling, and limiting children's use of AI chatbots and access to VPNs.
The announcement follows weeks of Grok generating sexualised images of women and children on X, prompting Ofcom to launch a formal investigation. "The action we took on Grok sent a clear message that no platform gets a free pass," Starmer said. "Today we are closing loopholes that put children at risk, and laying the groundwork for further action."
Technology Secretary Liz Kendall told LBC she wanted to make an announcement on an under-16 ban by summer, with the government working to act "within months, not years" following a three-month public consultation beginning in March. The new measures will be introduced as an amendment to existing crime and child-protection legislation being considered by Parliament.
Spain launches criminal probe
Twenty-four hours later, Spain ordered prosecutors to investigate X, Meta and TikTok for allegedly spreading AI-generated child sexual abuse material. Prime Minister Pedro Sánchez wrote on X Tuesday: "These platforms are undermining the mental health, dignity, and rights of our children. The state cannot allow this. The impunity of these giants must end."
The investigation follows a technical report from three ministries. One in five young people in Spain—mostly girls—say fake nude images of themselves were created by AI and shared online while they were minors, according to Save the Children.
Separately on Tuesday, Ireland's Data Protection Commission opened a formal investigation into X's xAI chatbot Grok over processing of personal data and its potential to generate harmful sexualised images and video, including of children.
The scale of the problem is accelerating. The Britain-based Internet Watch Foundation flagged 3,440 AI videos of child sexual abuse in 2025 compared with only 13 in 2024.
Why this week matters
Starmer said the previous government took eight years from the original idea to get the Online Safety Act passed, adding "we can't have eight years when it comes to social media". The UK isn't just targeting access—it's targeting the design features that make platforms addictive, the AI systems that blur reality, and the technical workarounds that undermine age restrictions.
Spain isn't waiting for regulatory fines. It's pursuing criminal investigation, which raises the stakes from corporate penalties to potential individual accountability.
This is the most comprehensive week of action any major countries have taken since Australia's ban passed in December.
NEED TO KNOW
Children Treating AI Chatbots As Friends

New research from Vodafone surveying 2,000 children aged 11-16 found that 31 per cent of those who have used an AI chatbot feel like it is one of their friends, with 37 per cent confiding in chatbots and one in three sharing something they wouldn't tell parents, teachers, or friends.
Eighty-one per cent of children aged 11-16 use AI chatbots, with 42 per cent using them daily. The reasons for use are expanding beyond homework. Twenty-three per cent seek advice on friendships, 16 per cent discuss mental health concerns, and 86 per cent have acted on advice given by a chatbot.
Features such as always being available (51 per cent) and having a consistently friendly tone (37 per cent) are ranked as the top reasons for engaging with chatbots, with 17 per cent saying speaking to technology feels safer than speaking to a person. Children spend an average of 42 minutes per day chatting with AI.
More than half—56 per cent—feel AI chatbot interactions can sometimes blur the line between what's real and what's not, with boys more likely than girls to view bots as friends (41 per cent versus 24 per cent).
Child psychologist Dr Elly Hanson, quoted in the Vodafone research, said the concern isn't that chatbots exist but that children are forming pseudo-relationships with systems designed to keep them engaged rather than challenge them. "They need real relationships involving give-and-take, shared experience, diverse perspectives, and actual feelings, not pseudo-relationships designed to keep them hooked for as long as possible."
The Vodafone findings connect directly to last month's Northwestern University research showing quality friendships protect teen mental health. The problem isn't just that platforms are harmful—it's that children are replacing real friendships with AI systems that mimic empathy without providing the developmental benefits of actual human connection.
Read more: Children Are Treating AI Chatbots As Friends →
MINI SERIES
PART FOUR: Why Every Approach Involves Tradeoffs - And What That Means For Your Decisions

Over four weeks, we're examining every major approach to protecting children from social media.
Week 1: Government bans like Australia's. What they protect. What they require.
Week 2: Parental opt-out. What it covers. What it doesn't.
Week 3: Platform regulation. What it can change. What it can't.
Week 4 (now live): Why no single approach solves everything and what your options are.
Over the past three weeks, this series has examined the three main approaches governments, parents, and advocates are pursuing.
Government bans — the approach Australia took in December, that Portugal has now moved towards, and that more than a dozen other countries are actively considering — protect all children, including those from families where parents are absent, disengaged, or unable to navigate the systems themselves. The cost is a surveillance and identity-verification infrastructure that will outlast any particular government, and that restricts teenagers who are using social media thoughtfully without obvious harm.
Parental opt-out — keeping children off platforms through household rules — preserves privacy completely, requires no government database, and can be calibrated to an individual child's maturity. The limitation is that it only protects your child. It does nothing for the children in your child's class whose parents won't or can't make the same choice, and whose online behaviour will shape your child's experience regardless.
Platform regulation — requiring changes to design, algorithms, and age-verification from the platforms themselves — addresses the structural features that make social media harmful rather than simply restricting access to something that might otherwise be beneficial. The difficulty, as this series has examined, is that compliance is expensive to enforce, platforms have strong incentives to find workarounds, and the fundamental business model of engagement-optimised content remains intact even after surface-level changes.
None of these approaches is wrong. None of them is sufficient. And the Spotify story (see below) is a useful reminder that whichever approach you're taking, the terrain keeps shifting. New social features appear inside apps that weren't previously social, platforms evolve faster than legislation, and the decision you made two years ago may need revisiting.
What this series has tried to do is give you the information to make those decisions with context rather than anxiety, and with an understanding of what each choice actually costs.
If you missed earlier parts:
Read Part One: When Governments Ban Social Media for Children →
Read Part Two: Why Saying No to Social Media Only Protects Your Child →
Read Part Three: Can We Regulate Social Media Platforms Into Being Safe For Children? →
WORTH KNOWING
👥 Quality friendships matter most for teen mental health
Northwestern University research tracking 388 adolescents over five years found that having just one or two close, supportive friendships significantly reduced symptoms of social anxiety and depression, while having many superficial friendships showed no protective effect. The research suggests parents should focus less on whether their teenager has many friends and more on whether they have meaningful connections characterised by emotional support and trust. The timing matters: with the Vodafone research showing 31 per cent of children treat AI chatbots as friends, this study underscores that what's actually protective isn't the number of connections but their depth and reciprocity.
🤖 AI companions increase loneliness
A four-week study found that individuals who reported heavy daily use of AI companion chatbots experienced increased loneliness, greater dependence on the technology, and reduced real-world socialising. Seventy-two per cent of children aged 13-17 have tried AI companions, with about one in three using them for social interaction, including friendship and romantic relationships. The research suggests the problem isn't occasional AI use for homework but sustained engagement with systems designed to simulate relationships without the give-and-take that makes human connection developmentally valuable.
🎶 Spotify quietly became a social platform
Spotify added two features on 7 January that show how far beyond music the app has shifted. "Listening activity" now lets friends see what you're streaming in real time, while "Request to Jam" creates shared live listening sessions. Nearly 40 million users have sent 340 million messages since the Messages feature launched in August 2025, with daily active users more than doubling year over year. Messages officially requires users to be 16 or over, but relies on self-reported birthdates most teens don't enter honestly, leaving 13-15-year-olds in a gap where age restrictions don't work. If Spotify is in use at your house, it's worth checking what your child is using beyond music and whether the privacy settings reflect conscious decisions rather than untouched defaults.
📩 From the inbox
"Our family rule is no social media until 15, but my 13-year-old spent the weekend at a friend's house and came back talking about TikTok videos they watched together. How do I handle this without making it weird with the other parents?"
You probably can't avoid some level of awkwardness, but it's worth it. The simplest approach is direct: "We're holding off on social media for now—would you mind if she doesn't use it when she's at your house?" Most parents will respect that, some will think you're overprotective, and a few will secretly be relieved someone else said it first. The harder truth is that unless you're willing to limit where your child visits, you can't fully control what happens at other people's houses. What you can control is the conversation afterwards, not as punishment, but as a chance to talk about what she saw and whether it changes anything about your family's approach.
Would be keen to know how other families manage this. Hit reply and let us know.
As always, please get in touch with us at [email protected] with any feedback, thoughts, suggestions.
Know a parent who would find this useful?
Forward this email or share this link: https://wired-parents.com
📚 NEW TO WIRED PARENTS?
Get the free 103-page Age-by-Age Tech Guide: See what parents worldwide are deciding about phones, social media, screen time and gaming at ages 8-17.
Download Free Guide →
Worth a Read: If you have an older child and are thinking about their future in an AI world, this is a newsletter definitely worth following.
Technology decisions shape childhoods. Make yours deliberately.
A weekly newsletter on smartphones, social media, gaming and screen time worldwide. Every Thursday.
Were you forwarded this email? Sign up here



