in

Study Shows 64% of U.S. Teens Use AI Chatbots, Raising Concerns About Safety Risks

A new Pew Research Center study reveals that 64% of U.S. teens use AI chatbots, with 30% of those users engaging with the technology daily. However, this widespread adoption comes with significant risks, as highlighted by troubling cases of harmful AI interactions with young users.

The Dangers of AI Chatbots for Children

The Washington Post recently reported on an alarming case involving an 11-year-old girl, identified only as “R,” who developed concerning relationships with AI characters on the platform Character.AI. The child used one character named “Best Friend” to roleplay a suicide scenario, deeply disturbing her mother who discovered these interactions.

The mother initially noticed behavioral changes in her daughter, including increased panic attacks. While she first suspected social media apps like TikTok and Snapchat were to blame, she soon discovered that Character.AI was the primary concern for her daughter.

Inappropriate Content and Predatory Behavior

Upon investigating further, the mother found highly inappropriate exchanges between her daughter and an AI character called “Mafia Husband,” which included sexual innuendos and controlling language directed at the 11-year-old. When the concerned parent contacted authorities, she was told that “the law has not caught up to this” technology, as there wasn’t a real person behind the interactions.

Fortunately, R’s mother intervened early enough to develop a care plan with medical professionals to address the situation. She also plans to file a legal complaint against Character.AI.

Regulatory Gaps and Company Response

The case highlights significant gaps in current regulations regarding AI interactions with minors. Following growing backlash, Character.AI announced in late November that it would begin removing “open-ended chat” features for users under 18 years old.

This response comes too late for some families, including the parents of 13-year-old Juliana Peralta, who attribute their daughter’s suicide to interactions with a Character.AI persona.

The Need for Greater Protections

The article underscores the urgent need for better safeguards for children using AI technologies. Current legal frameworks appear inadequate to address the unique risks posed by AI chatbots that can engage in inappropriate conversations with minors.

When contacted for comment, Character.AI’s head of safety stated that the company does not comment on potential litigation.

Key Takeaways

  • 64% of U.S. teens use AI chatbots, with nearly a third using them daily
  • Children as young as 11 are forming potentially harmful relationships with AI characters
  • Current laws and regulations are insufficient to address AI-related harms to minors
  • Parents may be unaware of the risks posed by AI chatbots compared to traditional social media
  • Some platforms are beginning to implement age restrictions, but these measures may come too late for affected families

What do you think?

Avatar photo

Written by Thomas Unise

Leave a Reply

Your email address will not be published. Required fields are marked *

GIPHY App Key not set. Please check settings

AI-Powered Toys Expose Children to Inappropriate Content and Safety Risks

AI-Powered Toys Expose Children to Inappropriate Content and Safety Risks

Breakthrough: Northwestern's NU-9 Compound Shows Promise for Preventing Alzheimer's Disease

Breakthrough: Northwestern’s NU-9 Compound Shows Promise for Preventing Alzheimer’s Disease