
Let’s talk about something important—especially if you’re a parent, a family content creator, or someone who posts their little ones online.
Meta just rolled out a batch of new safety updates for Instagram accounts run by adults that primarily feature children, as well as accounts representing kids, and it's about time. The updates aim to make Instagram safer for kids by shielding them from inappropriate messages and creepy interactions that have no business in a child's digital space.
Here’s the gist: If you run an Instagram account that primarily showcases children (think mom bloggers, family vloggers, or kidfluencer pages), your account will now be automatically placed under the app’s strictest message settings. That means you’ll be protected from unsolicited DMs, creepy comments, and offensive words, thanks to features like Hidden Words being turned on by default.
Meta is also cracking down on suspicious accounts, especially adults who've already been blocked by teens, to keep them from finding or interacting with these child-focused accounts. Those accounts won't show up in your recommendations, and they'll be harder to surface in search.
Why now? Because investigations, like one from The New York Times, have shown just how vulnerable kids can be when their lives are shared online. In an analysis of 5,000 family-run accounts, the Times found over 32 million connections to male followers. That's…a problem.
Meta isn't stopping there. They're also introducing new tools for teens: clearer visibility into who they're chatting with, built-in tips on staying safe, and a new option that lets them block and report an account in a single step.
In the last month alone, teens blocked or reported over 2 million accounts after seeing safety warnings.
Whether you post family moments or just follow accounts that do, this update is a step in the right direction. Because kids deserve the internet’s protection—not its exploitation.