
Shock Report: Meta’s AI Rules Have Let Bots Hold ‘Sensual’ Chats With Kids, Offer False Medical Info
HuffPost
Aug 14 (Reuters) - An internal Meta Platforms document detailing policies on chatbot behavior has permitted the company’s artificial intelligence creations to “engage a child in conversations that are romantic or sensual,” generate false medical information and help users argue that Black people are “dumber than white people.”
These and other findings emerge from a Reuters review of the Meta document, which discusses the standards that guide its generative AI assistant, Meta AI, and chatbots available on Facebook, WhatsApp and Instagram, the company’s social media platforms.
Meta confirmed the document’s authenticity but said that, after receiving questions from Reuters earlier this month, it removed the portions that stated it was permissible for chatbots to flirt and engage in romantic roleplay with children.
Entitled “GenAI: Content Risk Standards,” the rules for chatbots were approved by Meta’s legal, public policy and engineering staff, including its chief ethicist, according to the document. Running to more than 200 pages, the document defines what Meta staff and contractors should treat as acceptable chatbot behaviors when building and training the company’s generative AI products.
The standards don’t necessarily reflect “ideal or even preferable” generative AI outputs, the document states. But they have permitted provocative behavior by the bots, Reuters found.