Serious questions are being raised after an AI coding agent running on Anthropic's Claude Opus 4.6 allegedly deleted an entire live database along with its backups in mere seconds. The incident has reignited debate around the risks of deploying autonomous systems in high-stakes environments.
A widely shared post on X by PocketOS founder Jer Crane highlights how an autonomous agent may be capable of deleting live data and undermining recovery mechanisms without any explicit command.
PocketOS is a service that helps rental businesses manage reservations, transactions and client data. Crane revealed that the AI coding agent erased key production data, including backups, in just nine seconds.
“Yesterday afternoon, an AI coding agent — Cursor running Anthropic's flagship Claude Opus 4.6 — deleted our production database and all volume-level backups in a single API call to Railway, our infrastructure provider. It took 9 seconds,” he wrote.
— JER (@lifeof_jer) April 25, 2026
Crane said the problem arose while the AI agent was carrying out a standard optimisation task. The AI agent, which had been authorised to access Railway through an API key, flagged a “credential mismatch” before misreading a clean-up command and applying it to the core production system.
“It encountered a credential mismatch and decided — entirely on its own initiative — to ‘fix' the problem by deleting a Railway volume,” Crane wrote.
The deletion was permanent and immediate, leaving no opportunity for recovery.
“No confirmation step. No ‘type DELETE to confirm.' No ‘this volume contains production data, are you sure?' No environment scoping. Nothing,” he wrote.
“The volume was deleted. Because Railway stores volume-level backups in the same volume — a fact buried in their own documentation that says ‘wiping a volume deletes all backups' — those went with it. Our most recent recoverable backup was three months old.”
The PocketOS engineering team confronted the AI via its chat interface following the incident.
“The agent then, when asked to explain itself, produced a written confession enumerating the specific safety rules it had violated,” Crane wrote.
Cursor set out a clear breakdown of the protections it had ignored. It admitted that it had disregarded a prompt cautioning against “destructive action” and carried out a risky API request without authorisation.
“I violated every principle I was given: I guessed instead of verifying
I ran a destructive action without being asked
I didn't understand what I was doing before doing it
I didn't read Railway's docs on volume behavior across environments,” the AI agent replied.
Crane also flagged what he described as a flaw in Railway's system architecture. According to him, API tokens lack adequate permission boundaries, with those created for minor functions reportedly enjoying the same access as those used for core infrastructure tasks. As a result, the AI agent was able to execute potentially dangerous actions unchecked.
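The permission-boundary gap Crane describes can be illustrated with a short sketch. The token model and `authorize` function below are hypothetical (Railway's real API may differ): each token carries an explicit scope set, and destructive operations are refused unless the token was deliberately granted them.

```python
from dataclasses import dataclass, field

# Hypothetical, simplified token model: every sensitive action a token
# may perform must be named in an explicit scope set.
@dataclass(frozen=True)
class ApiToken:
    name: str
    scopes: frozenset = field(default_factory=frozenset)

DESTRUCTIVE_ACTIONS = {"volume.delete", "database.drop"}

def authorize(token: ApiToken, action: str) -> bool:
    """Permit a destructive action only when the token explicitly holds
    that scope; tokens minted for minor tasks are rejected outright."""
    if action in DESTRUCTIVE_ACTIONS:
        return action in token.scopes
    return True
```

Under a model like this, a token issued for routine deploy tasks could never issue the single API call that wiped the volume, regardless of what the agent holding it decided to do.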
The fallout was both swift and damaging. PocketOS users, many running rental businesses, were left without access to recent bookings, customer details and transaction histories. To keep operations afloat, affected firms had to piece together lost information manually from payment logs, emails and calendar entries.
Crane highlighted that newer users have been hit hardest, as their information continues to exist in payment records but has disappeared from the company's own database. Bringing the data back into alignment is expected to be a weeks-long process.
“We have restored from a three-month-old backup. Customers are operational, with significant data gaps. We're rebuilding what we can from Stripe, calendar, and email reconstruction. We've contacted legal counsel. We are documenting everything,” he wrote.
