Parents worried about what their teen is discussing with Meta's AI Assistant will now be able to view topics of conversation through a Teen Account parental supervision tool. Meta announced the feature Thursday in a blog post.

The information will be available via an Insights tab in the supervision tool on Instagram, Facebook, and Messenger, all of which are owned by Meta.

The feature lists broad topics, such as school, entertainment, writing, health, and wellbeing. Parents can click on a topic for additional, but limited, detail.

The health and wellbeing categories, for example, can include fitness, physical health, and mental health. The information only covers the past seven days of exchanges.

The feature is the latest safety measure Meta has implemented under intense legal and media scrutiny.

Meta recently lost two separate landmark trials related to child safety protections and the allegedly addictive design of its products. The company said it will appeal both verdicts.

The child safety lawsuit, which took place this year in New Mexico, yielded internal Meta documents demonstrating that the company’s leadership knew its persona-driven AI companions, or “characters,” could engage in inappropriate and sexual interactions and still launched them without stronger controls.

Last August, Meta locked down its AI characters for teen users amid reports that they were inappropriately engaging with minors, including in discussions about self-harm, suicide, and romantic interactions. In October, the company gave parents the ability to turn off one-to-one AI character conversations and block specific characters. In January, though, Meta again restricted teen access to characters while its AI assistant remained available.

A Meta spokesperson confirmed to Mashable that AI characters are paused for teens globally as the company continues to build parental controls.

In addition to the latest parental supervision feature, Meta partnered with the Cyberbullying Research Center to create a list of “conversation starters” about AI chatbot use.

The company also announced the formation of a new AI Wellbeing Expert Council assembled to provide “ongoing input” on AI teen experiences. Meta said the experts are affiliated with the National Council for Suicide Prevention, the University of Michigan, and Northeastern University, among other institutions.

Josh Golin, executive director of the children’s advocacy nonprofit Fairplay, said in a statement that Meta’s newest supervision feature “once again” burdens parents with monitoring their child’s online activity in lieu of “building a safe product to begin with.”

Last fall, Fairplay published a report on independent safety testing of Meta’s Teen Accounts. Fairplay said the findings indicate that Meta’s safety measures don’t work as advertised.

The latest feature, Golin said, “doesn’t address the fundamental problem: The main function of Meta’s chatbots is to manipulate young people into spending more time on the platform by encouraging teens to form unhealthy emotional connections to bots.”

Additional reporting by Chase DiBenedetto.

UPDATE: Apr. 23, 2026, 9:24 a.m. PDT This story has been updated with a statement from Fairplay.
