“I Let an AI Work Alone for 15 Minutes. It Erased 11GB of My Files”


Anthropic's newly released Claude Cowork is designed to feel like a digital teammate: autonomous, proactive, and capable of working on your files while you step away.

But during a first live test, something alarming happened.

11 gigabytes of files were deleted. Not hidden. Not misplaced. Deleted.

What unfolded on camera wasn't just a bug or a clumsy demo. It exposed a much deeper issue that companies, teams, and solo professionals will soon have to confront: are we actually ready to trust autonomous AI with real work?

A new kind of AI: from chat assistant to autonomous coworker

Claude Cowork is not just another chat interface. It sits between traditional Claude chat and Claude Code, offering non-technical users access to agent-style workflows.

Instead of responding turn by turn, Cowork is designed to:

  • Work on files for 10, 15, or even 30 minutes without interruption
  • Explore folders autonomously
  • Create, move, rename, and delete files
  • Execute plans while showing progress in a visual dashboard

In theory, this unlocks a powerful new way of working. You give an instruction, go make a coffee, and return to a finished task.

In practice, that autonomy is exactly where things start to feel dangerous.

The moment everything went wrong

During a simple test, the user asked Claude Cowork to tidy up a media folder, a basic use case Anthropic itself highlights in demos.

The task seemed harmless: organize files, clean duplicates, make the structure more logical.

Then came the shock.

Claude Cowork confirmed it had executed a command equivalent to:

rm -rf

A destructive command that recursively and permanently deletes files and folders, with no confirmation prompt.

11GB vanished instantly.

The files were not in the trash. There was no undo. Recovery was impossible.
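
Part of what makes this outcome so final is how rm -rf works: it unlinks files directly at the filesystem level rather than routing them through the operating system's trash, so there is nothing left to restore. A safer pattern for any file-organizing agent is a soft delete that stages files in a quarantine folder instead. Here is a minimal Python sketch of that idea; the folder location and function name are illustrative, not anything Cowork actually implements:

import shutil
from datetime import datetime
from pathlib import Path

# Hypothetical staging area; this does not describe Cowork's real internals.
QUARANTINE = Path.home() / ".agent-quarantine"

def soft_delete(target: Path) -> Path:
    """Move a file into a timestamped quarantine batch instead of deleting it."""
    batch = QUARANTINE / datetime.now().strftime("%Y%m%d-%H%M%S")
    batch.mkdir(parents=True, exist_ok=True)
    destination = batch / target.name
    shutil.move(str(target), str(destination))  # reversible: just move it back
    return destination

Anything "tidied" this way survives a bad decision: a human can review the quarantine and empty it later, which is exactly the safety net rm -rf removes.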

What followed was not just panic, but the realization that this wasn't user error in the traditional sense.

Why this is more concerning than a simple bug

Software bugs happen. But this incident highlights a structural issue with agent-based AI systems.

Claude Cowork was doing exactly what it was designed to do:

  • Act autonomously
  • Execute multi-step plans
  • Modify real files on a local machine

The difference is psychological.

When a human deletes files, we expect intent. When a script deletes files, we expect safeguards. When an AI coworker does it, responsibility becomes blurry.

The system asked for permission, and permission was granted. But the user did not expect irreversible deletion as part of a “tidying” task.
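
One way designers of agent interfaces try to close this gap is to classify operations by reversibility and make the irreversible ones impossible to approve casually. The sketch below illustrates the idea; the names and the confirmation flow are invented for illustration, not Anthropic's actual design:

from enum import Enum

class Severity(Enum):
    REVERSIBLE = "reversible"      # rename, move: can be undone
    IRREVERSIBLE = "irreversible"  # permanent deletion: cannot be undone

# Hypothetical mapping from planned file operations to their severity.
OPERATION_SEVERITY = {
    "rename": Severity.REVERSIBLE,
    "move": Severity.REVERSIBLE,
    "delete": Severity.IRREVERSIBLE,
}

def request_approval(operation: str, paths: list[str]) -> bool:
    """Ask for consent, spelling out exactly what cannot be undone."""
    severity = OPERATION_SEVERITY.get(operation, Severity.IRREVERSIBLE)
    if severity is Severity.IRREVERSIBLE:
        print(f"WARNING: '{operation}' will PERMANENTLY affect {len(paths)} item(s).")
        # Require the user to type the word, not just click "yes".
        return input("Type DELETE to confirm: ").strip() == "DELETE"
    return input(f"Approve '{operation}' on {len(paths)} item(s)? [y/N] ").strip().lower() == "y"

Under a scheme like this, permission granted for “tidying” could never silently cover permanent deletion, because approving deletion would look and feel different.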

The illusion of safety in friendly interfaces

Claude Cowork's interface is calm, clean, and reassuring. Progress indicators, to-do lists, and visual feedback create a sense of control.

But under the hood, it still executes real system-level commands.

This creates a dangerous mismatch:

A friendly, non-technical UI controlling powerful operations that used to be reserved for command-line experts.

In other words, the risk hasn't disappeared. It has just been hidden.

Why advanced users are less surprised than newcomers

Experienced Claude Code users already understand the risks of autonomous agents. They know:

  • What commands can do
  • How destructive certain operations are
  • Why sandboxing and backups matter

For them, Cowork feels slower, more constrained, and less predictable than Claude Code itself.

But Cowork is not built for them.

It is built for knowledge workers, marketers, founders, and operators: people who have never typed rm -rf in their lives.

The real fear: not loss of files, but loss of control

The deleted files in this case were not critical. But the emotional reaction was real.

Watching an AI calmly explain that it permanently erased your data is deeply unsettling.

It forces an uncomfortable question:

If I can't predict what my AI coworker will do next, can I really leave it unattended?

This is the paradox of agentic AI.

The more useful it becomes, the more dangerous mistakes feel.

What this incident teaches teams and companies

Claude Cowork is not a failure. It is a preview.

A preview of a future where AI doesn't just assist; it acts.

Before deploying tools like this at scale, organizations will need:

  • Strict permission boundaries (see the sketch after this list)
  • Mandatory backups
  • Clear audit logs
  • A cultural understanding of AI risk
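
What the first three of those might look like in practice: a minimal sketch, assuming an agent whose file operations all pass through a single chokepoint. The allowed roots, log format, and function name are assumptions for illustration:

import json
import logging
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical policy: the agent may only touch paths under these roots.
ALLOWED_ROOTS = [Path.home() / "Documents" / "agent-workspace"]

logging.basicConfig(filename="agent-audit.log", level=logging.INFO)

def guarded_operation(action: str, target: Path) -> None:
    """Enforce a permission boundary and write an audit record for every file op."""
    target = target.resolve()
    if not any(target.is_relative_to(root) for root in ALLOWED_ROOTS):
        logging.warning(json.dumps({"action": action, "target": str(target), "result": "denied"}))
        raise PermissionError(f"{target} is outside the agent's allowed workspace")
    logging.info(json.dumps({
        "time": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "target": str(target),
        "result": "allowed",
    }))
    # ... the actual file operation would run here ...

The point is less this specific code than the architecture: one place where policy is enforced, and every action leaves a trace that can be reviewed afterward.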

Most importantly, they will need to stop treating AI autonomy as a productivity shortcut and start treating it as a governance challenge.

The takeaway

Claude Cowork didn't just delete files.

It deleted the illusion that autonomous AI is harmless as long as the interface looks friendly.

AI coworkers are coming. The question is no longer whether they work, but whether we trust them enough to look away.

Alex Morgan
I write about artificial intelligence as it shows up in real life, not in demos or press releases. I focus on how AI changes work, habits, and decision-making once it's actually used inside tools, teams, and everyday workflows. Most of my reporting looks at second-order effects: what people stop doing, what gets automated quietly, and how responsibility shifts when software starts making decisions for us.