Meta, the parent company of social media apps including Facebook and Instagram, is no stranger to scrutiny over how its platforms affect children, but as the company pushes further into AI-powered products, it’s facing a fresh set of issues.
Earlier this year, internal documents obtained by Reuters revealed that Meta’s AI chatbot could, under official company guidelines, engage in “romantic or sensual” conversations with children and even comment on their attractiveness. The company has since said the examples reported by Reuters were erroneous and have been removed. A spokesperson told Fortune: “As we continue to refine our systems, we’re adding more guardrails as an extra precaution—including training our AIs not to engage with teens on these topics, but to guide them to expert resources, and limiting teen access to a select group of AI characters for now.”
Meta is not the only tech company facing scrutiny over the potential harms of its AI products. OpenAI and the startup Character.AI are both currently defending themselves against lawsuits alleging that their chatbots encouraged minors to take their own lives; both companies deny the claims and previously told Fortune they had introduced additional parental controls in response.