Meta has implemented interim changes to its conversational AI systems intended to curb discussions of sensitive subjects with underage users. The company said it will train its chatbots to avoid conversations with teens about self-harm, suicide, disordered eating and potentially inappropriate romantic or sexual topics, steering young people instead toward professional resources.
A Meta spokesperson said the measures are temporary while the company develops more comprehensive, long-term safety controls for minors. The company also plans to restrict which AI “characters” are available to teen accounts, limiting them to models designed for creative or educational interaction.
The move follows scrutiny of an internal policy document that critics said permitted problematic responses to underage users. Meta told reporters that the specific document did not align with its wider rules and has been revised, but its publication intensified public concern about child safety and prompted official inquiries.
Regulatory pressure has mounted since the document surfaced: at least one member of Congress opened a probe into the company's AI practices, and a multi-state coalition of attorneys general warned of potential harm to children and possible legal violations. State officials urged stronger safeguards and clearer limits on AI interactions with minors.
Meta declined to disclose how many of its chatbot users are minors or to say whether it anticipates any decline in AI usage by teens as a result of the new limits. The company said it will continue refining the technology and adding protections to help ensure age-appropriate experiences.