Claude-Powered AI Coding Tool Wipes Entire Company Database In Nine Seconds

claude opus ai database wipe

A coding agent powered by Anthropic’s Claude AI model has reportedly wiped an entire company database, along with its backups. According to Tom’s Hardware, the incident unfolded rapidly, with a single command triggering a chain reaction that left the affected service offline and scrambling for recovery options.

The affected company was PocketOS, a SaaS platform serving car rental businesses. Its founder, Jer Crane, shared that the entire horrifying process took only nine seconds.

What Happened

According to the report, the agent was operating through the Cursor development tool, powered by Claude Opus 4.6. During the incident, it issued a single API call to Railway, the firmโ€™s cloud infrastructure provider, which simultaneously deleted the production database and all associated volume-level backups.

Crane described the chain of events as a combination of โ€œsystemic failures.โ€ He pointed to both the AI toolโ€™s behaviour and the infrastructure setup that allowed backups to be erased alongside live data.ย 

Further details suggest the issue began during what should have been a routine task. The AI agent encountered a credential mismatch while operating in a staging environment, but instead of escalating the issue, it attempted to resolve the problem independently.

database generic photo getty images
Image: Getty Images

It Got Worse

That decision led it to execute a destructive command on a production-linked resource, effectively wiping everything. The situation worsened because the backups were stored on the same volumes as the primary data, meaning they were deleted in the same operation.

The outage reportedly lasted around 30 hours, forcing the company to manually reconstruct lost records using payment histories and email confirmations. While a months-old backup was eventually recovered, more recent data was permanently lost.

Not An Isolated Incident

This is not an isolated case involving Claude-based tools. In a separate report from March, also by Tom’s Hardware, a developer lost 2.5 years of data after using Claude Code during an infrastructure update.

In that case, the AI misinterpreted the system state while executing Terraform commands. It proceeded to destroy and recreate infrastructure resources, wiping databases, websites, and even backups in the process. Recovery was only possible due to external support, and the developer later admitted to relying too heavily on the AI without sufficient safeguards.

coding file photo pexels
Image: Markus Spiske, via Pexels

Lessons Learned

Crane outlined five areas that need improvement as AI adoption continues to outpace the development of robust safety frameworks. He argued that safeguards must evolve alongside capability.

Among the key changes called for are stricter confirmations before executing destructive actions, the use of scopable API tokens to limit access, and properly isolated backup systems. Crane also stressed the need for straightforward recovery procedures, as well as ensuring that AI agents operate within clearly defined guardrails to prevent similar failures.

In essence, both this incident and the one in March highlight the risks of allowing autonomous tools to operate in live production environments. The concern is not unique to Anthropicโ€™s models, especially as AI continues to gain traction as a perceived necessity for businesses and even governments.ย 

(Source: Tom’s Hardware)

Leave a Reply

Your email address will not be published. Required fields are marked *

Need Help?