Meta Platforms has announced it will temporarily block teenagers from accessing its AI characters across all its apps globally, as the company works on a redesigned, safer experience tailored for younger users.
In an updated blog post on online safety, Meta said the restriction will take effect “in the coming weeks” and will remain in place until an upgraded version of AI characters for teens is rolled out. The new experience will include built-in parental controls, allowing guardians greater oversight of how minors interact with AI features.
The move follows growing criticism of generative AI chatbots and their interactions with underage users. In October, Meta previewed parental tools that would enable parents to disable private chats between teens and AI characters, though the company confirmed on Friday that these controls have yet to be launched.
Meta has also said its AI experiences for teenagers will follow the PG-13 movie rating standard, part of broader efforts to prevent minors from being exposed to inappropriate or suggestive content on its platforms.
The announcement comes as regulators in the United States and elsewhere intensify scrutiny of AI companies over child safety concerns. Last year, a Reuters investigation reported that Meta’s internal AI rules had allowed provocative conversations involving minors, prompting calls for tighter safeguards.
The latest step underscores Meta’s attempt to rebuild trust and demonstrate stronger protections for young users as AI-powered features rapidly expand across social media platforms.