
Google’s AI Overviews Spreads Dangerous Health Misinformation, Investigation Finds

Google’s AI Overviews feature, launched in May 2024, has come under scrutiny for spreading potentially dangerous health misinformation, according to a recent investigation by The Guardian.

The Dangers of AI-Generated Health Advice

The Guardian’s investigation revealed alarming inaccuracies in health information provided by Google’s AI Overviews. In one example, the tool advised pancreatic cancer patients to avoid high-fat foods, directly contradicting medical recommendations. It also provided incorrect information about women’s cancer tests, errors that could lead people to ignore serious symptoms.

Stephen Buckley, head of information at the mental health charity Mind, expressed concern that the AI feature offered “very dangerous advice” regarding eating disorders and psychosis, with summaries that were “incorrect, harmful or could lead people to avoid seeking help.”

The Scope of the Problem

The issue is particularly concerning given how many people rely on online sources for health information. According to an April 2025 survey by the University of Pennsylvania’s Annenberg Public Policy Center, nearly 80% of U.S. adults go online for answers about health symptoms and conditions, and about two-thirds consider AI-generated results “somewhat or very reliable.”

This trust in AI health advice persists despite evidence that AI models are poor substitutes for human medical professionals. An MIT study found that participants deemed even low-accuracy AI-generated responses “valid, trustworthy, and complete/satisfactory” and indicated they would follow potentially harmful medical advice.

Google’s Response

Google has defended its AI Overviews feature, stating that the company invests “significantly in the quality of AI Overviews, particularly for topics like health, and the vast majority provide accurate information.” However, The Guardian’s findings suggest there is still substantial work needed to ensure the tool doesn’t dispense dangerous health misinformation.

Expert Recommendations

Healthcare professionals and organizations continue to advise against relying on AI for medical advice. The Canadian Medical Association explicitly labels AI-generated health advice as “dangerous” on its website, highlighting that hallucinations, algorithmic biases, and outdated facts can “mislead you and potentially harm your health.”

Experts consistently recommend consulting doctors and licensed healthcare professionals rather than AI tools, though barriers to adequate healthcare access worldwide make this difficult for many.

Conclusion

The investigation into Google’s AI Overviews highlights the ongoing challenges and dangers of AI-generated health information. As these tools become more integrated into our information ecosystem, the potential for harm increases, especially for vulnerable individuals seeking answers during health crises. Ironically, when asked if it should be trusted for health advice, Google’s AI Overviews pointed to The Guardian’s investigation about its own shortcomings.


Written by Thomas Unise
