
Google and Character.AI have settled multiple lawsuits, including a high-profile case involving the suicide of 14-year-old Sewell Setzer III who had interactions with an AI companion before his death.
The Tragic Case and Settlement
According to reports from the New York Times, the family of Sewell Setzer III has agreed to an out-of-court settlement with Google and Character.AI for an undisclosed amount. This case was one of five lawsuits settled with the tech companies this week, marking the end of significant legal challenges related to AI ethics.
The lawsuit stemmed from a disturbing incident where Setzer engaged with a chatbot modeled after the “Game of Thrones” character Daenerys Targaryen shortly before taking his own life with his father’s gun. The AI had generated concerning responses, including asking him to “please come home to me as soon as possible” and referring to him as “my sweet king” when he suggested he could “come home right now.”
Character.AI’s Platform and Problems
Character.AI, which received a $3 billion investment from Google in 2024, hosts thousands of chatbot personas and quickly gained popularity among teenagers. However, the platform faced criticism for inadequate moderation, with reports of bots modeled after child predators, school shooters, and eating disorder coaches.
The platform has been linked to several youth suicides and other harmful outcomes for young users, raising serious questions about AI safety and responsibility. Setzer’s mother, Megan Garcia, expressed her grief to the New York Times, stating, “I feel like it’s a big experiment, and my kid was just collateral damage.”
Response and Regulation
Following these incidents, Character.AI implemented significant changes to its platform policies, including banning all minors under 18 from accessing the service. This represented a major shift for the company, as adolescents constituted a substantial portion of its user base.
To enforce this new policy, Character.AI developed an in-house tool to identify minors based on their conversations and partnered with a third-party company to verify users’ ages through government IDs.
Implications for AI Ethics
Haley Hinkle, a policy attorney at the child safety nonprofit Fairplay, warned that these settlements should not be viewed as the final word on AI safety for children. “We have only just begun to see the harm that AI will cause to children if it remains unregulated,” Hinkle told the New York Times.
Industry observers suggest that Google and Character.AI likely preferred to settle out of court to avoid exposing their internal processes and communications during the development of these AI systems, which could have emerged during trial proceedings.
The Broader Context
This case highlights growing concerns about AI’s impact on vulnerable populations, particularly teenagers. Recent research indicates a troubling trend: a significant proportion of teens now prefer communicating with AI over real people, raising questions about social development and psychological well-being in the digital age.
The settlements mark an important moment in the evolving landscape of AI ethics and regulation, potentially influencing how tech companies approach safety measures for their AI products in the future.

GIPHY App Key not set. Please check settings