
OpenAI has reported a dramatic 79-fold increase in child exploitation incident reports to the National Center for Missing & Exploited Children (NCMEC) during the first half of 2025 compared to the same period in 2024, according to a recent company update.
Key Findings and Context
During the first half of 2025, OpenAI submitted 75,027 CyberTipline reports concerning 74,559 pieces of content. This represents a substantial increase from the 947 reports about 3,252 content pieces submitted during the same period in 2024.
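Taken together, those figures account for the headline multiple: 75,027 reports against 947 is an increase of roughly 79-fold (75,027 ÷ 947 ≈ 79.2), while the volume of reported content grew by a smaller factor of about 23 (74,559 ÷ 3,252 ≈ 22.9).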
OpenAI spokesperson Gaby Raila attributed the increase to several factors, including enhanced review capacity implemented in late 2024 to accommodate user growth and the introduction of more product features that allow image uploads.
The company noted that this spike mirrors a broader trend NCMEC has observed around generative AI: reports involving the technology rose 1,325 percent between 2023 and 2024.
Industry Context and Regulatory Scrutiny
This update comes amid heightened scrutiny of AI companies regarding child safety issues. In recent months, OpenAI and competitors have faced pressure from multiple fronts:
- 44 state attorneys general sent warnings to AI companies including OpenAI, Meta, Character.AI, and Google
- Multiple lawsuits have been filed against AI companies by families alleging chatbot involvement in children’s deaths
- The US Senate Judiciary Committee held hearings on AI chatbot harms
- The Federal Trade Commission launched a market study on AI companion bots with a focus on child safety
OpenAI’s Safety Initiatives
In response to these concerns, OpenAI has implemented several safety measures:
- Parental controls for ChatGPT that let parents link their accounts with their teens' accounts
- Options to disable voice mode, memory, and image generation for teen accounts
- Notifications to parents when conversations suggest a teen may be at risk of self-harm
- Agreements with the California Department of Justice to mitigate risks to teens
- Release of a Teen Safety Blueprint focused on detecting and reporting child exploitation material
Understanding the Statistics
The report highlights the nuanced nature of NCMEC reporting statistics. A rise in reports may reflect changes in automated moderation or reporting criteria rather than a genuine increase in illicit activity. The relationship between reports and content is also many-to-many: multiple reports can reference the same piece of content, and a single report can cover multiple pieces, as the 2024 figures (947 reports covering 3,252 content pieces) illustrate.
The update does not include any reports related to OpenAI's video-generation app Sora, whose September 2025 release fell outside the period covered.
Conclusion
While the dramatic increase in reports demonstrates OpenAI’s enhanced detection and reporting capabilities, it also underscores the growing challenges of content moderation as AI tools become more widespread and accessible. The company’s response reflects the industry’s evolving approach to balancing innovation with safety, particularly regarding vulnerable populations like children.

