
Security researchers Joseph Thacker and Joel Margolis recently discovered a significant data security flaw in Bondu, an AI-powered stuffed dinosaur toy designed for children. Their investigation revealed that anyone with a Gmail account could access private conversations between children and their toys, exposing sensitive personal information of young users.
The Alarming Discovery
Without employing any hacking techniques, the researchers gained access to Bondu’s web-based portal, which was intended for parents and staff to monitor conversations. What they found was disturbing: more than 50,000 chat transcripts were accessible, containing children’s names, birth dates, family member names, parental objectives for the child, and detailed transcripts of every conversation between children and their toys.
“Being able to see all these conversations was a massive violation of children’s privacy,” Thacker noted in his assessment of the breach.
Bondu’s Response
When alerted to the security issue, Bondu responded quickly, taking down the console within minutes and relaunching it the next day with proper authentication measures. Bondu CEO Fateen Anam Rafid stated that security fixes “were completed within hours” and that the company found “no evidence of access beyond the researchers involved.”
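The flaw described here, a sign-in page that accepted any Google account, is a textbook case of authentication without authorization: the portal verified who a user was but never checked whether that user was allowed to see anything. A minimal Python sketch of the distinction (all names are hypothetical, not Bondu's actual code):

```python
# Hypothetical illustration of the class of bug, not Bondu's real code.
# Authentication answers "who are you?"; authorization answers "what may
# you see?". A console must check both before serving transcripts.

AUTHORIZED_USERS = {"parent@example.com", "staff@example.com"}  # allowlist

def can_view_transcripts(authenticated_email: str) -> bool:
    """Return True only if the signed-in account is explicitly authorized."""
    return authenticated_email in AUTHORIZED_USERS

def vulnerable_check(authenticated_email: str) -> bool:
    """The reported failure mode: stop at authentication, so any
    successfully signed-in Google account gets through."""
    return bool(authenticated_email)
```

Under this sketch, `vulnerable_check("anyone@gmail.com")` passes while `can_view_transcripts` rejects the same account, which mirrors how a stranger's Gmail login could reach 50,000 transcripts.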
The company has since communicated with all active users about their security protocols and hired a security firm to validate their investigation and monitor systems moving forward.
Broader Implications for AI Toys
The researchers emphasize that this incident highlights larger concerns about AI-enabled toys for children:
- The detailed information stored by these toys includes histories of every chat to better inform future conversations
- Questions remain about how many employees have access to this sensitive data and how well their credentials are protected
- If exposed, the collected information could be misused to target, manipulate, or endanger children
- Many AI toy companies may rely on AI coding tools to build their software, potentially introducing security vulnerabilities
Margolis described the potential danger starkly: “To be blunt, this is a kidnapper’s dream. We’re talking about information that lets someone lure a child into a really dangerous situation, and it was essentially accessible to anybody.”
Data Sharing Concerns
Beyond the security breach, the researchers discovered that Bondu appears to use Google's Gemini and OpenAI's GPT-5, potentially sharing children's conversation data with these third-party companies. Bondu acknowledged using "third-party enterprise AI services" but said it takes precautions to "minimize what's sent" and operates under configurations where providers state the data isn't used to train their models.
Changing Perspective on AI Toys
While much of the concern around AI toys has focused on inappropriate content or dangerous advice, this incident shifts the focus to data security and privacy. Thacker, who had previously considered buying AI toys for his own children, changed his mind after this discovery.
“Do I really want this in my house? No, I don’t,” he concluded. “It’s kind of just a privacy nightmare.”
This case serves as a stark reminder that AI safety must include robust data security measures, especially when children’s private information is involved.

