
Google Pulls Risky AI Health Summaries Amid Warnings of Harm to Patients

[Image: Google AI Overviews interface showing a health summary]

Google has removed several AI-generated health summaries from its search results following a damning investigation by The Guardian that exposed misleading medical advice potentially endangering users’ lives.[1][3]

The AI Overviews feature, which provides concise summaries at the top of search results, has come under fire for delivering false information on critical health topics. Experts warn that such errors could lead patients to ignore symptoms, delay vital treatments, or adopt harmful practices.[1][2]

Misleading Advice on Cancer and Liver Tests

The Guardian’s probe uncovered specific instances of dangerous inaccuracies. For pancreatic cancer patients, one AI summary recommended avoiding high-fat foods—a suggestion experts say could exacerbate malnutrition and increase mortality risk.[3]

Queries about liver function tests, such as “what is the normal range for liver blood tests?” received responses laden with raw numbers for enzymes like alanine transaminase (ALT), aspartate aminotransferase (AST), and alkaline phosphatase (ALP), but without context for factors like age, sex, ethnicity, or nationality.[3]
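To see why a context-free number can mislead, consider a minimal sketch of demographic-aware interpretation. The threshold values below are placeholders chosen purely for illustration, not real clinical reference ranges; the lookup structure and function names are assumptions, not anything Google or the cited labs use:

```python
# Illustrative sketch only: why a single "normal range" without context misleads.
# All numeric thresholds here are PLACEHOLDERS, not real clinical reference values.
ALT_REFERENCE = {
    ("female", "adult"): (0, 30),  # placeholder upper limit for this group
    ("male", "adult"): (0, 40),    # placeholder upper limit for this group
}

def interpret_alt(value_u_per_l, sex, age_group="adult"):
    """Classify an ALT result against a demographic-specific range.

    A context-free AI summary effectively compares every reading against
    one fixed range, which misclassifies results for some groups.
    """
    low, high = ALT_REFERENCE[(sex, age_group)]
    if low <= value_u_per_l <= high:
        return "within reference range"
    return "outside reference range"

# The same reading is classified differently depending on demographic context:
print(interpret_alt(35, "female"))  # outside reference range (placeholder thresholds)
print(interpret_alt(35, "male"))    # within reference range (placeholder thresholds)
```

The point of the sketch is structural: once the lookup key includes sex or age, a single raw number stops having a single interpretation, which is exactly the context the criticized summaries omitted.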

Health professionals highlighted how these oversimplifications give false reassurance, potentially deterring people from seeking professional care during anxious moments.[2]

“Misleading medical information in Google AI summaries could delay diagnosis and treatment,” experts told investigators.[1]

Inconsistencies and ‘Hallucinations’

Further concerns arose from the AI’s inconsistency: identical health queries yielded varying answers over time, eroding trust.[1] This variability stems from AI ‘hallucinations,’ where models invent facts when data is lacking.[3]

Charities including Marie Curie, Macmillan, and the Patient Information Forum (PIF) have amplified these alarms. A report from a March 2025 roundtable of 70 health organizations called for suspending AI summaries on UK health topics until safety is assured.[4]

They demand prioritization of verified UK sources, explicit warnings on AI limitations, and mandatory referrals to healthcare professionals.[4]

Google’s Response and Broader Implications

In response, Google confirmed it removed AI Overviews for flagged health queries and is enhancing quality controls, especially for medical content.[2][3] The company maintains that most summaries are accurate and helpful, with ongoing improvements.[1]

However, worries persist over remaining summaries on cancer and mental health topics, which may still pose risks.[2] This incident underscores growing scrutiny of AI in high-stakes domains like healthcare, where around half of people turn to Google for health information, a usage level rivaling that of the NHS website.[4]

UK health groups propose a verification framework and better signposting to trusted sources, endorsed by 50 organizations.[4]

Key Risks Identified in AI Health Summaries
| Risk | Examples | Potential Impact |
| --- | --- | --- |
| Inaccurate advice | Pancreatic cancer diet; liver test ranges | Delayed treatment, harm |
| Lack of context | Ignores age, sex, ethnicity | False reassurance |
| Inconsistency | Differing answers over time | Eroded trust |

Expert Calls for Oversight

“AI tools must direct users to reliable sources and advise seeking expert input,” professionals urged.[2] The European tech landscape echoes these sentiments, with outlets like Euronews and Euractiv reporting on the removals amid expert warnings of ‘growing dangers’.[3][8]

This follows Google’s 2024 rollout of AI Overviews, which reduced traffic to health charity sites via ‘zero-click’ answers—a trend now compounded by safety fears.[4]

As AI integrates deeper into daily searches, regulators and tech giants face pressure to balance innovation with public safety. Health leaders stress that while AI holds promise, unverified summaries risk undermining trust in digital health resources.[5]

Google’s actions signal responsiveness, but experts insist comprehensive oversight is essential to prevent future mishaps.[7]

Health Sector Demands

  • Suspend AI health summaries until resolved.
  • Verify trusted UK sources.
  • Add warnings and professional referrals.

The saga highlights the perils of deploying generative AI without robust safeguards in life-critical areas. As investigations continue, users are advised to consult doctors over search engines for medical concerns.

