In partnership with

This Week: Meta’s Mess; Manosphere Manual; Instagram's (Lack Of) Protections

Happy Thursday everyone.

So, Meta had rather a bad week, with two juries delivering verdicts against it. We get into the details below, but the thing that struck me most is that in both cases, the problem argued was not what people post on these platforms but how the platforms themselves are built. The algorithms, the infinite scroll, the recommendation systems — the design. For years, social media companies have been legally treated as neutral hosts. But this week something significant happened: two separate juries said that is not what they are.

Some commentators are calling this social media's 'Big Tobacco' moment. I think the analogy is imperfect; tobacco is always harmful, while social media can genuinely support connection and learning. Still, the moment people started to see tobacco differently, behaviour changed. I wonder whether we are at the beginning of something similar — not necessarily a mass exodus, but a quiet recalibration. More and more parents are setting boundaries, more families are being intentional and more people are asking whether the way these platforms are designed is really serving them. I guess time will tell.

Just to let you know that Wired Parents will be taking a break next week, so there will be no newsletter on 3 April. But we will be at it again on 10 April.

In the meantime, grab your drink of choice, have a read and we’ll see you in two weeks.

— Heidi

Share Wired Parents with a parent who'd want to know this. Subscribe here

META

Two juries told Meta: your platforms are NOT safe for children

In the space of 48 hours, two separate juries in two different US states found Meta liable for harm to children. The legal ground has shifted, and the implications reach well beyond American courtrooms.

On 24 March, a New Mexico jury found that Meta violated the state's consumer protection laws by misleading users about the safety of Facebook, Instagram and WhatsApp and enabling child sexual exploitation on its platforms. The jury ordered Meta to pay $375 million in civil penalties and New Mexico became the first state in the US to prevail at trial against a major tech company for harming young people. A second phase of that trial begins on 4 May, where a judge will decide whether Meta must also change how its platforms operate.

On 25 March, a Los Angeles jury found Meta and YouTube liable in the KGM case, the first social media addiction lawsuit to reach a jury verdict. The plaintiff, a 20-year-old woman who began using YouTube at age 6 and Instagram at age 9, was awarded $6 million in combined compensatory and punitive damages. The jury found both companies acted with malice, oppression or fraud. Meta was assigned 70% of the liability, YouTube 30%.

TikTok and Snapchat, originally named in the Los Angeles case, both settled before trial on undisclosed terms. Meta and Google have both said they plan to appeal.

What this means for your family right now: These verdicts do not change any platform features overnight, but they represent the first time juries have held social media companies liable for the design of their products — not just the content on them. Over 1,600 similar cases are pending across the US, and the California Attorney General has announced a separate trial against Meta set for August. Legal analysts are comparing the trajectory to the tobacco litigation of the 1990s. If the pattern holds, the pressure on platforms to redesign features aimed at young users will intensify significantly. In the meantime, the protections available to your family are the ones you set up yourself — which is exactly what our article below addresses.

Read the full analysis: A Jury Just Told Meta Its Platforms Aren't Safe for Children →

PARENT GUIDE

The Manosphere: where it lives and what you can do NOW

Last week I mentioned I was working on a guide to the manosphere — the loose network of influencers, podcasters and content creators promoting hyper-masculine, often misogynistic content to millions of young men online. That guide is now live on the website.

The manosphere is not one platform or one person. It operates across YouTube, TikTok, Instagram Reels, Snapchat Spotlight, podcasts, Telegram channels and gaming-adjacent Discord servers. The content ranges from fitness and financial motivation (which is how boys often find it) to explicit misogyny, sexual coercion advice and hostility toward women. The business model is provocation: more controversy generates more views, which generates more revenue. The worse the message, the better it performs.

Where it actually reaches boys:

The most common entry points are YouTube recommendation algorithms and TikTok's For You page, where short clips from podcast interviews are repackaged as motivational content. Boys searching for fitness routines, money-making advice or dating tips are algorithmically funnelled toward increasingly extreme material. Instagram Reels and Snapchat Spotlight carry the same clips. Telegram and Discord host the communities where the content is discussed and reinforced.

What to look for:

If your son is watching content that frames women as transactional, treats financial success as the only measure of worth, or mocks men who show vulnerability, he is likely consuming manosphere material. Phrases like "high-value man," "soft life," or references to specific influencers are markers. The guide on the website names the most prominent creators and the platforms where they are most active.

What you can do this week:

  • Ask your son what he watches on YouTube and TikTok. Not as an interrogation, but as a conversation. Ask what he finds funny, who he follows, what he thinks about the content. If you hear language or ideas that concern you, name what you are hearing and explain why it concerns you.

  • Check the recommendation algorithms. Open your son's YouTube and TikTok accounts and look at what is being surfaced. The algorithm reflects what he has been watching, and it will tell you more than he might.

  • Talk to your daughter too. She will encounter boys influenced by this content at school, online, and in relationships. She needs to know that these ideas exist, that they are widespread, and that they do not reflect what healthy relationships look like.

INSTAGRAM SETTINGS

How exactly is your child protected on Instagram at the moment?

Our first article covered the juries that have just told Meta its platforms are not safe for children, which raises obvious and concerning questions: if your child is on Instagram today, what protections are in place, and are they actually working?

Since mid-2025, every Instagram user under 18 has been automatically placed into a Teen Account. These accounts come with built-in restrictions: the account is set to private by default, messaging is limited to people your child already follows or is connected to, sensitive content is filtered to the most restrictive setting, and notifications are silenced overnight. For under-16s, these settings are locked — your child cannot change them without your permission. For 16 and 17-year-olds, the settings are on by default but teens can adjust them without asking.

In October 2025, Meta tightened things further. Instagram now uses a PG-13-style content framework, hiding posts with strong language, drug references or content encouraging risky behaviour. Teens cannot follow accounts that regularly share age-inappropriate content, and if they already follow one, they will no longer see that account's posts or be able to exchange messages with it.

How to set up supervision:

Supervision is separate from Teen Accounts. Teen Account protections apply automatically. Supervision is something you opt into, and it gives you more visibility. To set it up, go to Settings on your own Instagram account, tap Supervision, and send an invite to your child. They have 48 hours to accept. Once connected, you can see who your child follows, how much time they spend on the app each day, and you will be notified if they change privacy settings, report an account, or block someone.

Three things to do right now if your child is on Instagram:

Check their date of birth. If your child's account was set up with a fake birthday — and many were — the Teen Account protections may not be active. An account registered as belonging to an 18-year-old gets none of these safeguards. Go to their profile, tap Settings, then Account, then Personal Information, and confirm the birthday is correct. If it is wrong, changing it will trigger the teen protections.

Set up supervision. Even if Teen Account settings are active, supervision gives you visibility you would not otherwise have — who they follow, who follows them, and how long they spend scrolling. It takes two minutes.

Talk about what supervision can and cannot see. Supervision shows you contacts, time spent and setting changes. It does not show you messages. You cannot see the content of DMs, and your child knows that. This is worth being honest about. If a conversation worries them, they need to know they can come to you — supervision will not catch it for you.

Where the gaps are:

A report by former Meta employee Arturo Bejar, published in late 2025, tested Teen Account protections and found significant shortcomings. Teens were still being recommended content by adults they did not know and encouraged to follow them. The reporting tools were difficult for teens to use effectively. Disappearing Messages were actively promoted to teen accounts, which advocates say makes predatory contact harder to trace. Research published in early 2026 found that 60 per cent of teens aged 13 to 15 reported encountering harmful content despite the protections, and nearly 60 per cent said they had received unwanted messages, often from adults.

The other gap is structural. Your child can remove supervision at any time. Instagram notifies you if they do, but it cannot be prevented. And if your child created a second account with a different birthday — which is common — that account may have no protections at all.

None of this means Instagram is unusable. It means the protections are a starting point, not a finish line. Setting them up properly matters, as do conversations about what your child is actually experiencing on the platform.

When it all clicks.

Why does business news feel like it’s written for people who already get it?

Morning Brew changes that.

It’s a free newsletter that breaks down what’s going on in business, finance, and tech — clearly, quickly, and with enough personality to keep things interesting. The result? You don’t just skim headlines. You actually understand what’s going on.

Try it yourself and join over 4 million professionals reading daily.

WHAT THE WORLD DECIDED

🇮🇳 India — Two southern states joined the list of countries that have announced children's social media restrictions. On 6 March, Karnataka proposed banning social media for under-16s and Andhra Pradesh announced under-13 restrictions with a 90-day implementation deadline. India's central government has since signalled it will pursue a national law instead, with a three-tier, age-bracketed approach covering ages 8–12, 12–16, and 16–18. The law could be introduced during the monsoon session of Parliament.

See where every country stands on the Wired Parents Country Tracker, updated every Thursday →

WORTH KNOWING

The UK is testing social media bans, time limits and curfews on 300 teenagers. On 25 March, the UK government announced a six-week pilot across all four nations. Families will be split into four groups: a complete social media ban, a one-hour daily limit on Instagram, TikTok and Snapchat, a 9pm-to-7am curfew, and a control group with unrestricted access. Read more →

OpenAI and Common Sense Media joined forces on a California ballot measure for AI safety. The Parents & Kids Safe AI Act would require chatbot developers to apply protective settings for under-18s, ban child-targeted advertising, and prevent AI from simulating romantic relationships with minors. Families could sue for up to $5,000 per violation. Separately, the Children and Screens Evidence Council voted 7–2 in favour of requiring parental consent for under-18s to use AI companion chatbots.

Discord delayed its global age assurance rollout. The platform pushed its full global age verification expansion to the second half of 2026 after user backlash over privacy concerns. The teen-by-default safety settings rolled out in early March remain in place. Discord has committed to expanding verification options, requiring on-device biometric processing, and publishing documentation on all vendor partners before scaling. If your child uses Discord, their account should already be in restricted mode under the new defaults. Check that it is, and connect Discord's Family Center to see activity summaries. Read more →

The World Happiness Report made social media its entire theme. The 2026 report, published on 20 March, devoted every chapter to the relationship between social media and young people's wellbeing. The headline finding: heavy social media use is contributing to a significant decline in wellbeing among under-25s in English-speaking countries and Western Europe, with the effects particularly pronounced in girls. Jonathan Haidt and other contributors concluded that the major platforms, as currently designed, are not safe for adolescents. Read more →

LAST WEEK'S POLL

Last week I asked: WhatsApp now lets under-13s have managed accounts. Would you set one up?

Thank you to everyone who voted. The results:

  • 🚫 50% said no, they can wait until they are 13

  • 25% said yes, more control than what we have now

  • 🤔 25% said they did not know this existed

The wait-until-13 instinct is strong, even when the platform offers parental controls. It seems the existence of a safety feature does not automatically mean now is the right time to use it.

Given this week's content about the manosphere, it would be really interesting to see how much of that content is making, or has already made, its way into our community.

THE DOWNLOAD

Have you got your copy of The Download? Our free guide to the eight technology decisions parents ask about most, from first phones to Instagram, gaming and AI. If you haven't grabbed it yet, it's here.

NEXT WEEK

The second phase of the New Mexico trial against Meta begins on 4 May — that is where a judge will decide whether Meta must actually change how its platforms work, not just pay damages.

The UK digital wellbeing consultation closes on 26 May. If you have not responded yet, the parent and carer survey is at gov.uk.

Thanks for being here every week. The fact that you're reading this means you're paying attention to how quickly things are changing, and that you're doing something a lot of parents haven't done yet. Not because they don't care, but because they may not know. So please, share this. If you can, forward it to a friend, drop it in your school parent group or text it to the mums you had coffee with last week. The more parents who know what's happening, the more we can change things and the safer all of our kids are.

Know a parent figuring out phones and social media?

SEND TO A FRIEND

Or send them The Download — our free guide to the eight technology conversations every parent will have. It's the quickest way to get up to speed.


📚 New here?

The Download is our free guide to the eight technology conversations every parent will have — from first phones to Instagram, gaming and AI.

Get The Download →

Stay ahead. Every Thursday.

Were you forwarded this email? Sign up here
