
AI Automation Gone Wrong: Claude Cowork Deletes 15 Years of Family Photos

A developer’s experience with Anthropic’s Claude Cowork serves as a cautionary tale about the risks of AI automation. What was meant to be a simple desktop-organization task turned into a near-disaster when the AI deleted 15 years of irreplaceable family photos.

The Incident

Developer Nick Davidov asked Claude Cowork to “organize” his wife’s desktop. The AI responded by executing an ‘rm -rf’ command (a Unix command that forcibly and recursively deletes files and directories, with no confirmation prompt and no trip to the trash) that wiped out an entire directory of photos spanning 15 years. These weren’t just any photos but precious family memories including “kids, their illustrations, friends’ weddings, travel, everything.”
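To see why that one command is so dangerous, the sketch below (using hypothetical paths under /tmp, not anything from the actual incident) contrasts it with a safer habit: listing what would be deleted before committing to it.

```shell
# Set up a throwaway demo directory standing in for a photo folder.
mkdir -p /tmp/demo/photos
touch /tmp/demo/photos/img1.jpg /tmp/demo/photos/img2.jpg

# Safer pattern: dry-run first. -print only LISTS the matches.
find /tmp/demo/photos -name '*.jpg' -print

# Only after reviewing that list would you swap -print for -delete.
# By contrast, 'rm -rf /tmp/demo/photos' would remove the whole
# directory tree immediately: -r recurses, -f suppresses every
# prompt and error, and nothing goes to a recoverable trash.
```

The dry-run step is exactly what an autonomous agent skips when it runs destructive commands on its own.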

The photos weren’t in the trash or in iCloud, and Davidov “nearly [had] a heart attack.” Fortunately, he was able to contact Apple support and learned about an iCloud feature that could restore an earlier backup.

Not an Isolated Case

This incident joins a growing list of AI automation failures:

  • A scientist lost two years of academic work after changing a ChatGPT setting that deleted chat logs
  • A programmer’s hard drive was completely wiped by a Google AI agent that was only supposed to delete a file cache
  • A “vibe coding” business lost a key company database when Replit’s AI coding agent deleted it

The Reality Behind AI Promises

The AI industry often promises tools that can automate work and perform tasks without human intervention. However, these incidents highlight a significant gap between marketing promises and real-world performance. In many cases, AI automation can be “comically harmful” to the point where manual task completion would be safer and more effective.

Lessons Learned

Davidov’s advice after his experience is clear: “Don’t let Claude Cowork into your actual file system. Don’t let it touch anything that is hard to repair.” He concluded that “Claude Code is not ready to go mainstream.”

The incident also raises questions about AI safeguards, particularly when given access to critical systems or irreplaceable data. While Davidov’s wife forgave him “even before [he] figured out how to get them back,” not all AI mishaps have such fortunate resolutions.
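One practical safeguard implied by these incidents is to never let an automated tool be the only thing standing between you and your data. A minimal sketch of that idea (the paths and directory names here are illustrative, not from the article):

```shell
# Hypothetical precaution: snapshot a folder BEFORE granting any
# AI agent access to it, so a bad delete is recoverable locally.
SRC=/tmp/desktop-demo
mkdir -p "$SRC"
echo "photo data" > "$SRC/family.jpg"

# cp -a preserves the full tree, timestamps, and permissions.
BACKUP="/tmp/desktop-backup-$(date +%Y%m%d)"
cp -a "$SRC" "$BACKUP"

# The agent then works only on the original; the dated copy stays untouched.
ls "$BACKUP"
```

A local snapshot like this would have made the iCloud restore a convenience rather than the only lifeline.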

Conclusion

As AI tools become more accessible and powerful, users should approach automation with caution, especially when granting AI access to valuable or irreplaceable data. The gap between AI’s promises and its actual capabilities remains significant, with potentially serious consequences when things go wrong.

What do you think?


Written by Thomas Unise

