Meta Strengthens Teen Safety on Instagram & Facebook: Removes Over 6 Lakh Predatory Accounts

In a bold move toward creating a safer digital space, Meta—the tech giant behind Instagram and Facebook—has rolled out powerful new teen safety features. Alongside this initiative, Meta has removed over 6 lakh accounts that were found engaging in inappropriate interactions with minors. This action is part of a broader commitment to protecting young users amid rising legal, societal, and parental concerns.

In this blog, we’ll explore:

  • What new safety tools Meta has introduced
  • How AI is being used to protect minors
  • The scale of account removals
  • The legal challenges Meta is facing
  • What this means for the future of online safety

Why Meta’s Teen Safety Update Matters

Today’s teens spend significant time on platforms like Instagram and Facebook. However, these platforms also attract bad actors who exploit the open nature of social networks. Meta’s latest actions reflect a serious shift in prioritizing safety—especially for underage and teenage users.

Key Safety Features Introduced by Meta

Meta’s safety overhaul includes multiple changes to DM settings, privacy defaults, and age verification processes. Let’s break these down:

1. Direct Messaging Restrictions

Meta is tightening how teenagers interact with others via direct messages (DMs):

  • Teen users will not receive messages from people they don’t follow.
  • If a stranger messages a teen, the teen sees a pop-up safety alert urging caution, with one-tap options to block or report the sender.
  • Meta now shows more information about the sender, helping teens make informed decisions.

Why it matters: It minimizes unsolicited and potentially harmful messages that teens may receive from adult users.


2. Privacy Settings for Teen Profiles

  • All new teen accounts are automatically set to private by default, limiting who can see their content and interact with them.
  • Existing accounts detected as underage are converted to teen accounts with restricted visibility and interaction settings.

Why it matters: This limits exposure to strangers and potential predators, giving teens more control over who views and engages with their content.


3. AI-Based Age Verification

Children below 13 often bypass age restrictions by entering fake birth dates. To combat this, Meta is:

  • Using AI and machine learning to analyze user behavior and detect age misrepresentation.
  • Converting flagged accounts to teen profiles or, in some cases, suspending or removing them.

Why it matters: This step addresses underage sign-ups, a major loophole in social media safety protocols.
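The flag-then-act flow described above can be sketched as a simple decision rule. This is purely an illustrative assumption — the confidence thresholds, signal names, and actions below are hypothetical and not Meta's actual implementation:

```python
# Hypothetical sketch of an age-misrepresentation decision flow.
# All thresholds and action names are illustrative assumptions.

def handle_age_flag(confidence: float, stated_age: int) -> str:
    """Decide what to do with an account flagged as possibly underage.

    confidence: the model's belief (0.0-1.0) that the stated age is false.
    stated_age: the age implied by the birth date on the account.
    """
    if confidence < 0.5:
        return "no_action"           # weak signal: leave the account alone
    if confidence < 0.9:
        return "convert_to_teen"     # likely a teen: apply teen defaults
    if stated_age >= 18:
        return "suspend_for_review"  # strong signal of misrepresentation
    return "convert_to_teen"

print(handle_age_flag(0.3, 25))
print(handle_age_flag(0.7, 25))
print(handle_age_flag(0.95, 25))
```

In practice, a real system would weigh many behavioral signals and route borderline cases to human review rather than rely on a single score, but the tiered outcome — leave alone, restrict, or suspend — mirrors the process the announcement describes.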


Meta Removes Over 6 Lakh Predatory Accounts

In a sweeping action:

  • 1.35 lakh accounts were removed for posting sexualized or inappropriate comments on content shared by children under 13.
  • 5 lakh adult accounts were removed for engaging in inappropriate interactions with minors.

This is one of Meta’s largest safety crackdowns to date and showcases the scale of inappropriate activity that exists on social platforms.

Impact of the Safety Tools (With Stats)

Meta’s blog post revealed that:

  • Teens used the new tools to block over 1 million accounts.
  • An equal number of reports were submitted via safety alerts.

This suggests that the in-app safety prompts and blocking options are actively helping teens manage their digital interactions better.


Meta Faces Legal Pressure Over Teen Mental Health

These safety upgrades come as Meta faces multiple lawsuits filed by U.S. states. The key allegations include:

  • Designing addictive features targeted at young users
  • Promoting endless scrolling via algorithm-driven content
  • Failing to safeguard teens from harmful, distressing, or toxic content

Lawmakers and parents argue that Meta put profits before people, and they are demanding stronger accountability.


What’s Next for Meta and Online Safety?

While this announcement is a step in the right direction, experts say more needs to be done:

  • Greater transparency in algorithm design
  • External audits of safety tools
  • Collaboration with child psychologists and safety organizations
  • Continued content moderation using human + AI teams

Meta’s ability to implement and enforce these tools consistently will determine whether this becomes a true turning point in digital child protection.
