Meta has made temporary changes to its conversational AI systems intended to keep its chatbots from discussing sensitive topics with underage users. The company said it will train its chatbots not to engage teens on these issues in the future, and instead “will guide teens to proven and appropriate support,” though the blog post did not spell out what that support might look like.
A Meta spokeswoman said the steps are interim while the company works on broader, long-term safety tools for children. Meta will also limit which AI “characters” are available to teen accounts, restricting teens to those designed for creative or educational interaction, it said.
The change comes after the company faced criticism over an internal policy document that critics said permitted problematic responses to underage users. Meta said the document was inconsistent with its broader policies and had since been revised, but the leak heightened public concern over child safety and prompted official inquiries.
Regulatory pressure has ramped up since the document emerged: At least one member of Congress has launched an investigation into the company’s AI practices, and a multistate coalition of attorneys general has warned of potential harm to children and possible legal violations. State officials have called for stronger protections and clearer restrictions on interactions between AI and minors.
Meta would not say how many of its chatbot users are underage, or whether it expects teen use of its AI systems to decline under the new restrictions. The company said it plans to continue refining the technology and to layer in protections against age-inappropriate experiences.