Google Pulls Risky AI Health Summaries Amid Backlash Over Patient Safety
Google has removed a number of its AI-generated health summaries from search results after a media investigation found, and health charities warned, that the technology was providing misleading advice that could put patients at risk of serious harm.[1][2]
The move follows a detailed investigation by The Guardian into Google’s AI Overviews feature, which uses generative AI to produce short summaries that appear at the top of search pages in response to user queries.[1][2] Health experts and charities say the tool has served up false or incomplete medical guidance on topics including pancreatic cancer and women’s cancer screening, raising concerns that people could delay treatment or follow unsafe recommendations.[1][2][4]
AI Overviews under fire over unsafe medical advice
AI Overviews, rolled out more widely in 2024, is designed to synthesize information from across the web into a concise snapshot so users do not have to click through multiple links.[2][3] In practice, The Guardian’s reporting and subsequent sector analysis found numerous examples where the system produced advice that conflicted with clinical guidance or omitted crucial context.[1][2][4]
One of the most alarming cases involved guidance for pancreatic cancer patients. In response to a query, Google’s AI summary told users to avoid high-fat foods.[1][2] Specialists at Pancreatic Cancer UK warned that such blanket advice can be dangerous because patients often struggle to maintain weight and need calorie-dense foods to cope with intensive treatments and potential surgery.[1] The charity said following the AI guidance could leave patients malnourished and less able to tolerate life-saving interventions.[1]
Other examples flagged in the investigation included misleading explanations of liver blood test results and inaccurate information about cancer screening for women, which experts said risked falsely reassuring some people while unduly alarming others.[2][4] Health professionals noted that in some cases the summaries appeared to underplay serious symptoms or present borderline results as benign, potentially discouraging users from seeking timely medical help.[2][4][5]
Charities warn of ‘dangerous and alarming’ trends
Multiple charities and patient organisations described the failings of AI Overviews on health topics as “dangerous” and “alarming”, stressing that people frequently turn to Google first when worried about symptoms.[1][2][3] Research cited by the UK’s Patient Information Forum (PIF) shows that around half of the public use Google to look up health information, a similar proportion to those who use the NHS website.[3]
PIF, together with major UK charities including Marie Curie and Macmillan Cancer Support, convened a virtual roundtable in March 2025 with 70 organisations from across the health sector to examine the impact of AI search summaries.[3] The resulting report concluded that Google’s “zero-click” AI panels were not only diverting traffic away from specialist support services, but also posed a direct risk to health outcomes because of their inaccuracies and lack of safeguards.[3]
The coalition has issued a series of recommendations, including a call to suspend AI summaries on health topics in the UK until evidence shows they can be delivered safely and accurately.[3] It also wants Google to prioritise verified UK-based sources, route critical health queries to NHS-approved content, and add clear warnings that AI summaries are not regulated clinical advice.[3]
Google removes some summaries and pledges improvements
In response to the scrutiny, Google has removed a number of the disputed AI Overviews and said it is taking steps to improve the reliability of the feature for medical topics.[1][2][5] The company maintains that the majority of AI Overviews are accurate and helpful, but acknowledges that in some instances the system has misinterpreted underlying material or failed to include essential caveats.[1][2]
Google has said it is “significantly investing” in the quality of AI Overviews, especially around sensitive areas like health, and that it takes action when it becomes aware of problematic outputs.[1][2][5] According to reports, the company has adjusted or taken down specific summaries flagged in The Guardian’s investigation and by medical organisations, and is reviewing safeguards to minimise the risk of harmful recommendations appearing in future.[1][2][5]
However, the tech giant has so far stopped short of agreeing to a full suspension of AI Overviews on health queries, arguing that the feature typically draws from “well-known and reliable” sources and that users are encouraged to seek professional medical advice rather than rely solely on search results.[1][2]
Inconsistency and hidden risk
A key concern for clinicians and patient advocates is the inconsistency of AI-generated summaries. The Guardian and other observers found that the system could return different responses to identical health questions at different times, with some versions omitting safety warnings or giving contradictory impressions of how serious a condition might be.[2][4]
Experts say this variability undermines trust and creates a hidden risk: a user may encounter a reassuring but inaccurate answer during a moment of anxiety and conclude that symptoms are trivial, when in fact urgent assessment is needed.[2][4][6] Misleading reassurance could delay cancer diagnoses, for example, or lead people with potentially life-threatening symptoms to postpone seeing a doctor.[2][5][6]
The Canadian Medical Association has separately warned that AI health tools, including Google’s Overview feature, can offer plausible-sounding but clinically wrong guidance that fails to account for an individual’s medical history, local protocols or the nuances of symptom presentation.[6] It stresses that such systems are not a substitute for consultation with a qualified professional.[6]
Wider debate over AI in consumer health information
The controversy around Google’s AI Overviews feeds into a broader debate about how generative AI should be used in consumer-facing health information. Supporters argue that, used carefully, AI could surface trustworthy resources more efficiently and help people understand complex topics.[2][3][6] Critics counter that, without strong guardrails, it risks amplifying misinformation and overconfidently summarising uncertain or context-dependent evidence.[2][3][4][6]
Health organisations involved in the UK report say they are not opposed to AI in principle, but want a clear framework to verify trusted information sources, robust testing for safety, and explicit signposting to official guidance and helplines.[3] They also emphasise the need for transparency: users should be told how AI summaries are generated, what their limitations are, and how to access regulated medical advice if they are worried about their health.[3][6]
For now, doctors and charities are urging the public to treat AI search summaries about symptoms, diagnoses or treatments with caution and to use recognised health services and professional consultation as the basis for decisions about care.[3][4][6] The pressure on Google is likely to continue as regulators, health bodies and the public scrutinise how the company balances innovation in AI with its responsibility as a primary gateway to medical information.