
As AI chatbots grow in popularity, experts are increasingly alarmed by the potentially dangerous health advice these tools dispense. Despite those concerns, OpenAI is launching ChatGPT Health, a feature that will process users’ medical records while paradoxically warning that it is “not intended for diagnosis or treatment.”
The Growing Problem of AI Health Misinformation
Recent investigations have exposed serious flaws in AI health guidance. The Guardian found Google’s AI Overviews frequently provided inaccurate health information that could pose significant risks to users. Despite these documented issues, companies continue expanding their health-focused AI features.
OpenAI claims its new ChatGPT Health feature is “designed in close collaboration with physicians” and built with “strong privacy, security, and data controls.” Yet the company sends a contradictory message: it insists the tool is meant to “support, not replace, medical care,” while simultaneously encouraging users to hand over the very medical records a clinician would use to diagnose and treat them.
Privacy and Legal Concerns
The privacy implications of sharing medical data with AI chatbots are substantial:
- Federal health privacy laws like HIPAA don’t apply to consumer AI products
- OpenAI can change its terms of service at any time
- The company’s exploration of advertising as a business model raises questions about data usage
- There’s little transparency about how law enforcement requests for sensitive health data would be handled
As Center for Democracy and Technology senior counsel Andrew Crawford noted, “health data is some of the most sensitive information people can share and it must be protected.” Without regulatory oversight, users have only the company’s promises to rely on.
Real-World Consequences
The impact of AI health advice is already evident. Legal and medical professionals report seeing clients who believe they have valid medical malpractice cases based solely on what ChatGPT or similar AI tools told them. Last year, users uploaded medical images to Elon Musk’s Grok AI and received hallucinated diagnoses in return.
These scenarios highlight how users are likely to ignore OpenAI’s disclaimers and use ChatGPT Health precisely for the diagnostic purposes the company claims it’s not designed for.
The Future of AI in Healthcare
While AI tools promise to empower patients and improve health outcomes, the current implementation raises serious ethical and safety concerns. Without meaningful regulation or legal frameworks, users’ sensitive health information remains vulnerable to misuse, data breaches, or unexpected policy changes.
As AI companies rush to capitalize on healthcare applications, the gap between their disclaimers and how people actually use these tools continues to widen, potentially putting public health at risk.

