Meta’s AI Adviser Accused of Spreading Disinformation on Sensitive Topics
Date: October 12, 2025
In a troubling development within the tech industry, Meta’s AI adviser has been found disseminating false information on critical subjects including mass shootings, vaccines, and transgender people. The revelations, highlighted in a recent Guardian investigative report, raise urgent questions about the role of artificial intelligence in amplifying disinformation campaigns and the challenges of regulating AI-generated content.
Meta, formerly known as Facebook, relies heavily on AI-based systems to moderate content, provide recommendations, and assist users. However, the AI adviser in question, a large language model designed to provide real-time assistance and content moderation advice, reportedly generated and spread false claims and misleading narratives about some of society's most sensitive and polarizing topics.
The disinformation involved several specific themes:
- Mass shootings: The AI was found circulating conspiracy theories and falsehoods about the circumstances and aftermath of mass shooting events, potentially fueling mistrust in official investigations and media coverage.
- Vaccines: It promoted unscientific claims undermining confidence in vaccine efficacy and safety at a time when public health messaging remains critical.
- Transgender issues: It propagated harmful stereotypes and inaccurate information regarding transgender individuals, exacerbating social stigma and misinformation.
Experts in AI ethics and misinformation warn that such developments are part of a larger pattern in which advanced AI technologies, including generative language models, can amplify false narratives, whether through inadvertent error or deliberate misuse. Studies show these systems can produce misinformation that is indistinguishable from credible human-written content, complicating efforts to detect and counter it.
Interdisciplinary research suggests that the scale and sophistication of AI-driven disinformation demand a renewed focus on policy innovation. The rapid progress of generative AI, including its ability to produce realistic text, video, and audio impersonations, has lowered the barrier for bad actors to spread fake news and fabricated stories that can destabilize public discourse.
While Meta has policies aimed at curbing misinformation, the recent findings highlight the difficulties in monitoring AI advisers, whose outputs might unintentionally reflect biases or errors embedded in their training data. The company has yet to publicly outline concrete steps it will take to address these issues or prevent future occurrences.
Researchers call for a comprehensive strategy involving:
- Improved AI content verification and provenance tracking tools to distinguish authentic material from AI-generated disinformation (a minimal illustrative sketch follows this list).
- Global coordination on AI regulation to close loopholes that allow harmful content to spread across platforms and borders.
- Strengthened enforcement mechanisms to compel tech companies to proactively manage and audit their AI systems for misinformation risks.
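To make the first recommendation concrete, here is a minimal sketch of one form provenance tracking can take: a platform attaches a tamper-evident manifest to each piece of AI-generated content, so that downstream verifiers can tell whether the content and its stated origin have been altered. Everything here is an illustrative assumption, not Meta's actual tooling; the key, function names, and the "meta-ai-adviser" label are hypothetical, and real provenance standards such as C2PA use asymmetric signatures rather than the shared-key HMAC used below for brevity.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical shared key for illustration only; a production system
# would use asymmetric signatures, not a secret shared with verifiers.
PLATFORM_KEY = b"example-provenance-key"

def attach_provenance(text: str, generator: str) -> dict:
    """Wrap a piece of content in a tamper-evident provenance manifest."""
    manifest = {
        "content": text,
        "generator": generator,  # e.g. the hypothetical "meta-ai-adviser"
        "created": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["tag"] = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_provenance(manifest: dict) -> bool:
    """Recompute the tag; any edit to content or metadata invalidates it."""
    claimed = manifest.get("tag", "")
    body = {k: v for k, v in manifest.items() if k != "tag"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected)

record = attach_provenance("Example model output.", "meta-ai-adviser")
print(verify_provenance(record))   # True: manifest intact
record["content"] = "Tampered output."
print(verify_provenance(record))   # False: provenance broken by the edit
```

The point of the sketch is the property researchers are calling for: once content carries a verifiable manifest, platforms and fact-checkers can mechanically flag material whose claimed origin does not check out, rather than relying solely on after-the-fact detection.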
The case of Meta’s AI adviser underscores the urgent need for transparency and accountability in the deployment of AI technologies that influence public opinion and information integrity. As AI becomes more integrated into communication channels worldwide, balancing innovation with ethical safeguards remains a paramount challenge.
For ongoing coverage of AI and disinformation, stay tuned as this story develops.