Meta has announced updates to enhance protections for teenagers and child safety features on its platforms. The company is focused on preventing both direct and indirect harm to young users through Teen Accounts, which are designed to offer age-appropriate experiences and prevent unwanted contact.
The latest updates add new safety features to direct messages (DMs) in Teen Accounts. Teens now see options to view safety tips, block an account, and check the month and year an account joined Instagram. A new combined block-and-report option has also been introduced to streamline the reporting of potentially violating accounts.
Meta reports that in June alone, teens blocked accounts 1 million times and reported another 1 million after seeing a Safety Notice. Teens and young adults also viewed the Location Notice feature 1 million times during the same period.
The nudity protection feature, which reduces exposure to unwanted nudity by blurring potentially explicit images received in DMs, has rolled out globally, and 99% of users have kept it enabled.
Meta is extending some teen protections to adult-managed accounts that primarily feature children. These accounts will be placed into strict message settings automatically, have Hidden Words activated to filter offensive comments, and receive notifications about updated safety settings.
On enforcement, Meta's specialist teams removed nearly 135,000 Instagram accounts earlier this year for inappropriately interacting with content featuring children, along with an additional 500,000 linked Facebook and Instagram accounts.
The company also shared information about these accounts with other tech companies through the Tech Coalition's Lantern program, part of a broader industry effort to combat exploitation across platforms.