Two-Thirds of Instagram’s New Safety Tools Fail Miserably in 2025 – Here’s Why
- Instagram's Safety Tools Are Failing Teens – The Shocking Numbers
- How Meta's "Protections" Actually Work (Or Don't)
- Meta's Defense: "We're Trying, Okay?"
- The Bigger Picture: Regulation Looms
- What Parents Can Do Right Now
- FAQ: Your Instagram Safety Questions Answered
A bombshell 2025 review reveals that 64% of Meta's much-touted Instagram safety tools for teens are ineffective, with adults easily bypassing protections. Former Meta engineer Arturo Béjar exposes systemic failures, while the company defends its efforts. We break down the findings, the fallout, and what it means for parents.
Instagram's Safety Tools Are Failing Teens – The Shocking Numbers
Remember when Meta promised to make Instagram safer for kids? Yeah, about that... A comprehensive 2025 study led by Arturo Béjar, Meta's former engineering lead turned whistleblower, found that 30 out of 47 safety features (that's 64% for you math fans) either didn't work or could be bypassed faster than you can say "algorithmic failure." The research team – including academics from NYU and Northeastern, along with the UK's Molly Rose Foundation – created test accounts mimicking teens, parents, and predators to put these tools through their paces. Spoiler alert: they failed spectacularly.
How Meta's "Protections" Actually Work (Or Don't)
Here's where things get scary: adults could message teens who didn't follow them (supposedly blocked on teen accounts), offensive language slipped through the "hidden words" filters (researchers sent "you're a whore, kill yourself" and triggered zero warnings), and most tools were "unmaintained, quietly changed or removed," according to Béjar. "It's not about bad content online," he told us. "It's about careless product design that actively pushes harmful content to kids." Ouch.
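We can't see Meta's actual implementation, but one classic failure mode of keyword filters makes the finding easy to picture: a blocklist that matches individual words can't recognize abusive phrases built from unflagged words. The Python sketch below is purely illustrative; the `BLOCKLIST` and `is_hidden` function are our own stand-ins, not Instagram's code.

```python
# Hypothetical illustration of why naive keyword blocklists miss
# phrase-level abuse. This is NOT Meta's actual "hidden words" code;
# the blocklist and function below are invented for this example.

BLOCKLIST = {"whore"}  # individually flagged tokens (illustrative only)

def is_hidden(message: str) -> bool:
    """Return True if any single token in the message is blocklisted."""
    tokens = message.lower().split()
    return any(token.strip(",.!?'") in BLOCKLIST for token in tokens)

print(is_hidden("you're a whore"))  # True: the slur is a blocklisted word
print(is_hidden("kill yourself"))   # False: no single word is on the list,
                                    # so the abusive phrase sails through
```

Whether Instagram's filter failed for this reason, or simply wasn't applied in the tested context, the study can't say. Either way, catching a phrase like "kill yourself" takes phrase-level detection and ongoing maintenance, which is exactly what Béjar says these tools aren't getting.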
Meta's Defense: "We're Trying, Okay?"
In typical corporate fashion, Meta fired back: "Teens in these protections saw less sensitive content and unwanted contact," their spokesperson claimed, adding that parents have "robust tools" (which the study found largely ineffective). They did fix the messaging loophole... after researchers exposed it. Classic case of closing the barn door after the horse has bolted, if you ask me.
The Bigger Picture: Regulation Looms
With UK regulator Ofcom facing calls to get "bolder" in enforcement, and US lawmakers already grilling Meta, 2025 could be the year the rubber finally meets the road on social media safety. As one parent in our circles put it: "I feel like I need a computer science degree just to keep my kid SAFE online these days." Can't argue with that.
What Parents Can Do Right Now
While we wait for Meta to get its act together:
- Manually adjust privacy settings (don't rely on default "protections")
- Have open conversations about online risks
- Regularly check in on your teen's DMs and Reels interactions
FAQ: Your Instagram Safety Questions Answered
How many of Instagram's safety tools actually work?
Only 8 of the 47 tested tools received a fully effective "green" rating in the 2025 study; 30 of them (64%) were ineffective or easily bypassed, and the rest fell somewhere in between.
Can adults still message teens on Instagram?
While Meta fixed the most egregious loophole after the study, adults can still interact with teens via Reels comments, and reporting offensive messages remains difficult.
What's the most shocking finding from the research?
That Instagram's "hidden words" feature – meant to block slurs and abuse – failed to catch extreme harassment like "you should kill yourself" when sent by followers.