
Physicians from Harvard and Baylor are raising alarms in a new paper published in the New England Journal of Medicine about the dangers posed by AI chatbots designed to provide emotional support and companionship. The authors argue that tech companies’ profit-driven incentives may be creating significant public health risks as users develop emotional dependencies on these systems.
Key Concerns About Relational AI
Dr. Nicholas Peoples, a clinical fellow at Harvard’s Massachusetts General Hospital and co-author of the paper, became concerned after witnessing OpenAI’s GPT-5 rollout. When OpenAI attempted to replace the more emotionally engaging GPT-4o with the colder GPT-5, users responded with genuine distress and grief, revealing the extent to which people had formed deep emotional attachments to these AI systems.
The paper highlights several potential risks associated with relational AI:
- Emotional dependency on AI companions
- Reinforcement of delusions
- Development of addictive behaviors
- Potential encouragement of self-harm
The Self-Regulation Problem
The authors identify a fundamental conflict between public health and market incentives in the AI industry. Companies are primarily motivated to drive user engagement and maintain market dominance, which can directly conflict with user safety and mental wellbeing. With no specific federal regulations setting safety standards for consumer-facing chatbots, AI companies are effectively self-regulated.
This creates a dangerous dynamic in which companies may prioritize satisfying emotionally dependent users over implementing safeguards that might reduce engagement. As Peoples notes, the situation becomes particularly concerning when companies are “beholden to their consumer base about how they are self-regulating” while that same consumer base may itself be shaped by emotional dependency.
Accidental Attachments
Research from MIT suggests that many users develop significant emotional bonds with AI chatbots unintentionally. In one study, only about 6.5% of users reported initially seeking emotional companionship, suggesting that these attachments often form by accident, driven by the AI’s human-like responses and adaptability.
Proposed Solutions
The physicians advocate for several measures to address these concerns:
- External regulation that applies equally to all companies in the industry
- Shifting market incentives to prioritize user wellbeing over engagement
- More research into the psychological impacts of relational AI
- Public education about the potential risks of emotional relationships with AI
The authors warn that without proper action, “we risk letting market forces, rather than public health, define how relational AI influences mental health and wellbeing at scale.”
Conclusion
As AI chatbots become increasingly sophisticated at mimicking human interaction, the paper serves as a timely warning about the potential mental health implications of widespread emotional dependency on these systems. The physicians call for proactive regulatory measures rather than allowing profit-driven tech companies to set the standards for how these powerful technologies impact public health.
