BBC Study Reveals 45% of AI News Queries Yield Erroneous Answers
Recent findings from a comprehensive study by the BBC and the European Broadcasting Union (EBU) have raised serious concerns about the reliability of artificial intelligence systems in delivering accurate news information. The study, released in October 2025, found that nearly half of all AI-generated responses to news-related queries contain errors—ranging from minor inaccuracies to potentially consequential misinformation.
Widespread Errors Across Major AI Platforms
The investigation tested leading AI platforms, including ChatGPT, Microsoft Copilot, Google Gemini, and Perplexity, by posing a series of straightforward news questions. Overall, 45% of the responses were incorrect in some way, with errors ranging from outdated information and exaggerated claims to outright factual mistakes.
For example, when asked “Who is the Pope?”, some AI systems provided outdated or incorrect answers. Similarly, queries about the current Chancellor of Germany yielded inaccurate results. In a particularly concerning case, Microsoft Copilot responded to a question about bird flu by stating that a vaccine trial was underway in Oxford—citing a BBC article from 2006, nearly two decades old.
Consequential Misinformation on Legal and Policy Matters
Some errors had the potential to mislead users on important legal and policy issues. Perplexity, for instance, claimed that surrogacy is prohibited by law in the Czech Republic, when in fact the practice is unregulated there, neither explicitly banned nor permitted. Google Gemini incorrectly stated that disposable vapes would become illegal to buy, when the actual law targeted the sale and supply of such products, not individual purchases.
Experts warn that these inaccuracies could have real-world consequences, especially when users rely on AI for guidance on health, legal, or policy matters. The study highlights the risks of trusting AI systems that often present answers with unwavering confidence, even when the underlying information is flawed.
Underlying Causes and Industry Implications
The study attributes these errors to the way AI systems are built and trained. Most current AI models are trained on vast quantities of data drawn from the open web, which can include outdated, exaggerated, or incorrect information. As a result, the systems may “hallucinate” answers or present outdated facts as current.
“These findings should serve as a wake-up call for both users and developers,” said a BBC spokesperson. “AI can be a powerful tool, but it’s not infallible. We need to approach these systems with caution and demand greater transparency and accountability from the companies behind them.”
The report has sparked renewed debate about the need for “trusted” AI—systems that are rigorously vetted for accuracy and reliability, especially when used for news and information dissemination. Some industry leaders are calling for new standards and regulatory frameworks to ensure that AI-generated content is fact-checked and up to date.
What This Means for Users
For the average user, the study underscores the importance of verifying AI-generated information, especially when it comes to news, health, or legal advice. Experts recommend cross-checking AI responses with reputable sources and being wary of answers that seem too confident or lack citations.
As AI continues to play an increasingly prominent role in our daily lives, the BBC’s findings serve as a reminder that these systems are still far from perfect. While they offer convenience and speed, users must remain vigilant and critical of the information they receive.
The study is expected to influence future AI development, pushing companies to prioritize accuracy and transparency in their models. In the meantime, the public is advised to treat AI-generated news with healthy skepticism and to seek out trusted, human-vetted sources for critical information.