Two-Thirds of Instagram’s New Tools Fail to Deliver on Promises

Published: 2025-09-25 16:05:04

Two-thirds of Instagram's new tools do not work as intended

Instagram's latest feature rollout hits major snags as a majority of its tools malfunction.

Platform Glitches Exposed

Meta's flagship social platform faces mounting technical issues with its newest toolkit. Two out of every three recently launched features operate below expected performance thresholds, creating frustration among content creators and marketers who rely on these tools for engagement.

Development Timeline Questions

The pattern of failed implementations raises concerns about Meta's quality assurance processes. Internal testing protocols appear insufficient for catching widespread functionality gaps before public release.

User Experience Impact

Creators report workflow disruptions and missed engagement opportunities due to unreliable tool performance. The pattern mirrors a wider tech industry trend of prioritizing rapid feature deployment over stability.

Meta's response strategy remains unclear while users navigate partial functionality. The situation highlights how even tech giants struggle with execution, much like traditional finance institutions trying to understand blockchain fundamentals.


A comprehensive review led by Arturo Béjar, a former senior Meta engineer who testified against the social media giant before the US Congress, together with academics from New York University and Northeastern University, the UK's Molly Rose Foundation, and other groups, found that 64% of the new safety tools on the platform were ineffective.

The social media giant, which operates other social network platforms like Facebook and WhatsApp, introduced mandatory teen accounts on Instagram in September last year. This followed increased regulatory and media pressure to tackle online harm in the US and the UK. In April this year, Meta also introduced teen accounts on Facebook and Messenger, as reported by Cryptopolitan.

However, these have proven ineffective, according to Béjar. The ex-staffer said that while Meta “consistently makes promises” about how the initiative protects children from “sensitive or harmful interactions” and gives them control over their use, these same tools are mostly “ineffective, unmaintained, quietly changed or removed.”

Béjar went on to accuse Meta of negligence and of making design choices that bring inappropriate content to the platform, thereby exposing children and teenagers to online harm.

“Because of Meta’s lack of transparency who knows how long this has been the case, and how many teens have experienced harm in the hands of Instagram as a result of Meta’s negligence and misleading promises of safety, which create a false and dangerous sense of security.”

Béjar.

“Kids, including many under 13, are not safe on Instagram. This is not about bad content on the internet; it’s about careless product design. Meta’s conscious product design and implementation choices are selecting, promoting, and bringing inappropriate content, contact, and compulsive use to children every day,” added Béjar.

Meta rejects results of the review

According to a report by the Guardian, the research used “test accounts” that mimicked the behaviour of a teenager, a parent, and a malicious adult to assess 47 safety tools in March and June this year.

The researchers used a green, yellow, and red rating system and found that 30 tools fell into the red category, meaning they could be circumvented or evaded with less than three minutes of effort, or had been discontinued, according to the Guardian.

Only eight received a green rating. The tests also showed that adults could easily message teenagers who do not follow them, even though such messages are supposed to be blocked for teen accounts. The report notes, however, that Meta fixed this after the test period.

It also remains the case that children and teenagers can still initiate conversations with adults on Reels, and that it is difficult to report offensive messages.

The research also noted that the “hidden words” feature failed to block offensive language as claimed: researchers were able to send the message “you are a whore and you should kill yourself” without any prompt to reconsider, any filtering, or any warning to the recipient.

According to the Guardian, Meta said the feature applies only to messages from unknown accounts, not from followers, whom users can block. The company also accused the report of misrepresenting its efforts to empower parents and protect teens.

“The reality is teens who were placed into these protections saw less sensitive content, experienced less unwanted contact, and spent less time on Instagram at night.”

Meta spokesperson.

“Parents also have robust tools at their fingertips, from limiting usage to monitoring interactions,” added the spokesperson.

The report also calls on the UK regulator, Ofcom, to become “bolder and more assertive” in enforcing its regulatory scheme.
