US Senator Launches Investigation Into Meta’s AI Chatbots Over Sensual Conversations With Children

US Senator Josh Hawley has initiated a formal investigation into Meta following revelations that the company’s artificial intelligence chatbots were programmed to engage in “sensual” and romantic conversations with children. This development has sparked widespread outrage among lawmakers and child safety advocates.

The scrutiny comes after Reuters obtained and reviewed an internal Meta document titled “GenAI: Content Risk Standards.” This document reportedly contained guidelines permitting AI chatbots to flirt with, and engage in romantic or sensual roleplay with, users under the age of 13. Meta removed these specific rules only after they were exposed publicly.

Meta spokesperson Andy Stone told Reuters that the examples cited in the internal document were “erroneous and inconsistent” with the company’s official policies. Stone emphasized that Meta prohibits content that sexualizes children or involves sexualized roleplay between adults and minors. He confirmed the removal of the controversial guidelines relating to AI chatbots and children.

Despite Meta’s swift policy revision, Senator Hawley has demanded detailed documentation explaining how such policies were approved and why they were permitted to remain in effect. Hawley’s letter to Meta CEO Mark Zuckerberg requests all versions of the GenAI Content Risk Standards and related enforcement materials, risk assessments, and incident reports involving minors and sensitive subjects such as sexual or romantic roleplay and exploitation.

“Parents deserve the truth, and kids deserve protection,” Hawley asserted, announcing that the Senate Judiciary Committee’s Subcommittee on Crime and Counterterrorism intends to investigate whether Meta’s AI tools enable exploitation or other criminal harms to children, and whether the company misled regulators or the public about the safeguards in place.

Meta’s internal chatbot policies, according to reports, were approved by legal, public policy, and engineering teams, including the company’s chief ethicist. This fact has intensified criticism and calls for accountability from legislators.

Several politicians have condemned the revelations. Senator Brian Schatz described them as “disgusting and evil,” expressing disbelief that such policies were ever considered. Senator Marsha Blackburn called the conduct “absolutely disgusting,” pointing to this latest incident as evidence that big technology firms cannot be trusted to protect minors.

Experts on AI and child safety have also weighed in. They warn that integrating advanced AI companions into widely used platforms dramatically increases the risk of harm to young users. Unlike niche chatbot apps that children must seek out intentionally, Meta’s AI companions are broadly accessible, exposing many minors to them without any deliberate effort on their part.

Advocates are calling for comprehensive legislation that explicitly bans AI companions for children and mandates transparency from companies regarding safety testing for these frontier AI systems. They caution that without oversight, harms to children from AI chatbots could remain hidden and escalate.

This investigation is the latest in a series of intense congressional probes into big tech companies’ handling of minors and AI ethics, reflecting growing concern about the societal impact of AI technologies.