
This Week: UK Asks Parents; US Retreats; Chatbots Unchecked
Another Thursday, and a lot happening around the world. Here's your next newsletter.
This week, the UK formally opened its consultation on whether to ban social media for under-16s. For the first time, governments aren't just debating this question, they're asking parents directly. More below.
You'll also find out what the US Kids Off Social Media Act would actually mean for your child's school, why AI chatbots are the regulatory blind spot nobody has fixed yet, and why the US federal regulator just stepped back at exactly the moment the rest of the world is stepping forward.
Five minutes. Everything you need.
— Heidi
Share Wired Parents with a parent who'd want to know this. Subscribe here
COUNTRY TRACKER
What’s happening around the world this week.
More than a dozen countries have now passed, or formally announced, restrictions on children's access to social media. Several more are actively pursuing legislation. Here's the latest movement.
🇵🇱 POLAND — The government is preparing a bill that would ban children under 15 from using social media. Education Minister Barbara Nowacka cited deteriorating mental health and declining intellectual capacity among young people. Legislation is expected to come into force early next year, making Poland one of the largest European countries — 38 million people — to join the movement.
🇫🇷 FRANCE — The lower house of parliament passed a bill in late January banning social media for under-15s, also banning smartphones in high schools. It still needs to clear the Senate, with a vote expected in the coming weeks. President Macron has publicly backed the measure and requested it be fast-tracked.
🇪🇸 SPAIN — Announced ban for under-16s in February with criminal liability for platform executives who fail to remove illegal content. Implementation timeline not yet specified, with draft legislation expected later this year.
🇵🇹 PORTUGAL — Parliament approved a bill in February setting 16 as the minimum age for independent social media access. Children aged 13 to 15 can access platforms with verified parental consent.
🇧🇷 BRAZIL — A law requiring age verification and parental linking for under-16s takes effect this month. Platforms that fail to comply face fines of up to 50 million reais.
🇩🇰 DENMARK — Government has secured cross-party agreement to ban social media for under-15s, with the law potentially coming into force by mid-2026.
🇳🇴 NORWAY — Raising the minimum age for independent social media use from 13 to 15, with age verification requirements.
🇬🇷 GREECE — Close to announcing a ban for under-15s, with legislation expected shortly.
🇸🇮 SLOVENIA — Government drafting legislation to prohibit under-15s from accessing social media.
🇲🇾 MALAYSIA — Ban for under-16s in force since January, using national ID infrastructure for age verification — one of the most technically robust enforcement models implemented so far.
🇩🇪 GERMANY — Chancellor Merz's party backing a ban for under-16s, with a parliamentary committee due to report in autumn 2026. Coalition partners showing some hesitance about an outright ban.
Find out more over at Wired-Parents.com
First, a word from our sponsor:
Better results don't always require more supplements.
Most supplement routines are built on guesswork.
SuppCo fixes that. They help you audit product quality, optimize your budget, and track the exact nutrient density of your stack. You take the right dosages from trusted sources. Nothing more, nothing less.
Plus, their core features are free. Not a trial. Just free.
Download the SuppCo app to see what you're actually taking. Then start building a routine that makes sense for your body and your budget.
Your workouts aren't random. Your supplements shouldn't be either.
NEED TO KNOW
The UK Consultation Is Now Open. Here's What's Actually Being Decided

The UK government formally launched its public consultation on March 2, publishing "Growing Up in the Online World" — a document that sets out what it's considering and invites responses from parents, young people, schools, and civil society. The consultation runs until May 26, 2026. The government has committed to publishing its response in summer 2026 and has secured new legal powers to act on its findings within months, rather than waiting for new primary legislation.
That last point matters. Previous government commitments on children's online safety have moved slowly. This time, the legal framework to act quickly is already in place.
What's actually being proposed
The consultation covers several distinct questions: whether to introduce a minimum age for social media access, likely set at 16; whether to restrict addictive design features such as infinite scrolling regardless of whether an age ban is introduced; how to improve age verification; whether to limit children's access to VPNs used to bypass restrictions; and automatic data preservation orders when a child dies online — a measure long sought by bereaved families.
It also covers AI chatbots and gaming platforms — more on chatbots below.
The political backdrop
The House of Lords voted 261 to 150 last month to support an amendment to the Children's Wellbeing and Schools Bill that would ban under-16s from social media outright, backed across party lines. The government opposed it and lost. When the bill returns to the Commons, the government will try to overturn the amendment — but with significant numbers of Labour MPs in favour of a ban, and opposition leader Kemi Badenoch also publicly backing one, that may prove difficult.
The disagreement that matters
The most important tension isn't between those who want a ban and those who don't. It's between two groups who both want to protect children but disagree on how.
On one side: Esther Ghey, mother of Brianna Ghey, and the Lords majority, who argue the evidence of harm is clear enough to act now.
On the other: Ian Russell, father of Molly Russell, who took her own life after viewing harmful content on Instagram. He argues the government should enforce existing laws rather than introduce new ones. A coalition of 42 UK child protection charities — including the NSPCC and the Molly Rose Foundation — argues a blanket ban is a blunt response that risks pushing children into less regulated spaces.
The NSPCC's position is worth understanding carefully because it isn't a defence of platforms. Their argument is that a ban treats the symptom rather than the cause, and that the government should instead force platforms to keep under-13s off entirely, stop addictive design for teenagers, and use technology to prevent harmful content at source. If the government doesn't do those things, a ban would be better than nothing — but it shouldn't have to come to that.
What this means for parents
If you're in the UK, the consultation is open at gov.uk until May 26 and you can submit your views directly. The government is genuinely asking.
If you're outside the UK, the question this consultation is wrestling with is the same one parents everywhere are asking: is the problem access, or design? A ban addresses access. It doesn't change how platforms are built. The UK is finding, as Australia did, that the two questions can't really be separated.
Read the full background: UK Social Media Ban Consultation 2026 →
NEED TO KNOW
What The US Kids Off Social Media Act Would Actually Mean

The Kids Off Social Media Act is moving through Congress with bipartisan support — unusual in the current political environment and a signal of how broadly concern about youth social media use has spread. The Senate Commerce Committee approved it, the House has backed it, and 40 state attorneys general have written to Congress urging passage.
What it would do:
Under-13s would be prohibited from creating or maintaining social media accounts. Platforms that knowingly allow them to do so would face FTC enforcement and state attorney general action.
Under-17s would be protected from algorithmic feeds — platforms could not use personalised recommendation systems to push content to users under 17. Teenagers could still search for content and see posts in chronological order. What they couldn't receive is the algorithmically-targeted feed designed to maximise time on platform.
The school provision
Most US public schools receive federal funding called E-Rate, which subsidises their internet and technology costs. Under the bill, any school receiving that funding would be required to actively block social media on their networks and devices. For most schools, losing E-Rate funding isn't an option — which means compliance wouldn't be optional in practice.
Recent research found American teenagers averaging 70 minutes of phone use daily during school hours despite 99.7% of schools already having restriction policies. The question is whether a legal requirement tied to funding produces different results — or whether enforcement remains the unsolved problem it already is.
What hasn't passed yet
The bill has cleared committee and has House backing but has not been signed into law. Legal challenges are expected on First Amendment grounds, and the tech lobby remains significant. But the bill has stronger bipartisan momentum than any previous federal attempt.
What this means for parents
The algorithmic feed restriction for under-17s is arguably more significant than the age minimum. Removing the personalised recommendation system would change the fundamental experience of social media for teenagers — not the content they can access, but the mechanism that decides what they see. That's the change platforms have resisted most, because it's the mechanism most directly tied to engagement and revenue.
NEED TO KNOW
The AI Chatbot Loophole Nobody Has Fixed Yet

There is a gap in almost every children's online safety law currently in force, and it covers one of the fastest-growing ways children are spending time online.
AI chatbots are not covered by the UK's Online Safety Act. The reason is technical: the Act applies to user-to-user services and search services. A chatbot that interacts with one person at a time and doesn't share information with other users falls outside its scope entirely.
The UK government confirmed this week it will close that loophole through an amendment to the Crime and Policing Bill. The prompt was Grok — Elon Musk's AI chatbot — which researchers found generated approximately 3 million sexualised images in less than two weeks in January, including around 23,000 that appeared to depict children. Ofcom confirmed it was not investigating because chatbot activities fell outside its remit.
The loophole isn't unique to the UK. California became the first US state to regulate AI companion chatbots for minors when SB 243 came into force in January — requiring chatbots to disclose they aren't human, provide crisis resources if a child mentions self-harm, and include prompts to take breaks. At federal level, the SAFE BOTs Act has been proposed but not passed.
In the meantime, AI companion apps specifically marketed to children are operating with no equivalent of the safety obligations that apply to social media platforms in most of the world.
What this means for parents
The safety frameworks parents rely on — platform reporting, age verification, content moderation — don't apply to the AI chatbot your child may be using daily. If your child uses one, the questions worth asking are: does it know it's talking to a child? Does it have any obligation to respond differently if it does? Does it have crisis protocols if your child mentions self-harm? For most commercially available chatbots right now, the honest answer to all three is: not necessarily.
Read more: AI Chatbots and Children: What You Need to Know →
WORTH KNOWING
The US federal regulator just stepped back
While governments around the world are accelerating restrictions on children's social media use, the Trump administration's FTC is moving in the opposite direction. The agency has backed away from social media regulation for children despite its own data showing nearly 20% of American children are online for four or more hours daily, and despite internal findings that children's compulsive phone-checking mirrors the behavioural patterns of smokers. Federal action in the US is stalling at precisely the moment state and international action is accelerating. For parents watching Washington, the patchwork of state laws, with their inconsistent enforcement and ongoing legal challenges, remains the reality for the foreseeable future.
Instagram will now alert parents to suicide and self-harm searches
Meta announced last week that Instagram will notify parents when a teen repeatedly searches for terms related to suicide or self-harm within a short period. Alerts go via email, text or WhatsApp, but only to parents already connected through Instagram's supervision settings. The timing is hard to ignore: Meta's CEO was in a Los Angeles courtroom the same week, facing a trial over whether Instagram deliberately addicted young users. The feature has real limitations. It only reaches families already using supervision tools, which is a small minority. Meta hasn't specified what "repeatedly" means or how short "a short period" is. If your teen is on Instagram and you're not using supervision settings, this alert won't reach you.
Discord changes its defaults this month
Starting March 2026, Discord treats every account as a teenager by default unless the user actively verifies they are an adult. Sensitive content is automatically blurred for unverified accounts. For parents of children already on Discord, this is a meaningful change that happens without any action required on your part. Discord's Family Center has also been updated to provide weekly activity summaries showing who your teen has messaged, which servers they've joined, and who they've added as friends.
New data: the AI users parents may not know about
Two nationally representative surveys published this week give the clearest picture yet of how young people are using AI. Around 3 in 10 teenagers use AI daily — but 64% of teenagers say they've used it while only 51% of parents think they have. The teenagers most likely to be turning to AI for emotional support are those with the lowest levels of offline support: kids who have been bullied, face discrimination, or don't have trusted adults to turn to. The technology isn't creating the vulnerability. It's finding it. The Surgo Health data includes a short quiz to help identify which type of AI user your child might be at surgohealth.com.
Full analysis: The AI Users Parents May Not Know About →
📩 From the inbox
"We have a no-phones-at-dinner rule, but my husband checks his constantly and the kids have started calling him out on it. I agree with them, but I don't want to undermine him. How do other families handle this?"
— Sarah, mum of three, Ohio
You're not alone. In many homes, the tension isn't kids breaking tech rules — it's adults bending them.
Children care deeply about fairness. If a rule applies to them but not to us, they'll notice. And increasingly, they'll say so.
What actually works for other families?
Name the exception. If a parent genuinely needs to be reachable, say that upfront: "I may need to check this once." Transparency reduces resentment.
Revisit the rule privately. A quick reset between adults — "Are we modelling what we're asking?" — avoids public correction.
Make it tangible. Some families use a phone basket or set a short, clearly device-free dinner window.
The goal isn't perfection. It's consistency. When children see adults protecting shared time too, the rule feels fair, and fairness is what they're really testing.
Got something you've been wondering about? Hit reply.
As always, please get in touch with us at [email protected] with any feedback, thoughts, suggestions.
Know a parent who would find this useful?
Forward this email, or share this link: https://wiredparents.beehiiv.com/
📚 New here?
The Download is our free guide to the eight technology decisions parents ask most — from first phones to Instagram, gaming and AI.
Get The Download →
Worth a Read: Your guide to safer kids online.
Stay ahead. Every Thursday.
Were you forwarded this email? Sign up here



