Today’s story is a sobering reminder of why storing critical work exclusively online — and especially entrusting it to AI tools — can be risky.
Marcel Bucher, a professor at the University of Cologne, recently lost two years’ worth of academic work following a misstep involving ChatGPT.
Two Years of Academic Work Gone in an Instant
In the two years following the public release of ChatGPT, Marcel Bucher gradually integrated the AI tool into nearly every aspect of his professional life. As a professor of plant sciences at the University of Cologne in Germany, he subscribed to OpenAI’s paid plan, ChatGPT Plus, and began using it on a daily basis.
The chatbot served as a multipurpose assistant: drafting emails, outlining course syllabi, structuring grant applications, revising scientific papers, preparing lectures, designing exams, and even analyzing student responses.
Bucher also used ChatGPT interactively in the classroom as part of his teaching methods.
What made the tool indispensable, he later explained, was not factual infallibility — a limitation he says he fully understood — but continuity. ChatGPT reliably retained conversational context, allowed him to revisit earlier drafts, and functioned as a stable digital workspace where ideas could be refined over time.
In that sense, its reliability was operational rather than intellectual: not that it was always correct, but that it was always there. That assumption would prove far more fragile than expected.
The incident dates back to August, when Bucher disabled the setting that allows OpenAI to use his data to train its AI models; he wanted to see whether he would still have access to all of the model's features without sharing his data with the company.
That single action resulted in the complete deletion of his conversation history. No warning appeared, and no undo option was offered.
The lost data included funding requests, teaching materials, draft publications, and various research notes accumulated over two years. Everything vanished instantly.
No Way Back: Privacy by Design
After realizing what had happened, Bucher made repeated attempts to recover his data by contacting OpenAI. All efforts proved unsuccessful.
The company pointed to its “Privacy by Design” principle: once data is deleted following user action, it is permanently erased and cannot be restored. A policy meant to protect users ultimately worked against him.
A Questionable Conclusion
“If a single click can irreversibly erase years of work, then ChatGPT cannot, in my opinion and based on my experience, be considered fully safe for professional use,” Bucher stated in the journal Nature.
However, this conclusion may be overly simplistic. Tests conducted by Notebookcheck have shown that disabling data sharing for AI training does not normally delete existing conversations. Whether this was a rare bug, an isolated case, or a misunderstanding remains unclear.
The Real Lesson: Backups Still Matter
One thing is certain: before changing privacy settings, users can, and should, export their data directly from ChatGPT's settings. The platform offers a full data export as a downloadable ZIP archive sent by email, with the download link valid for 24 hours.
Bucher did not take that precaution. For a university professor with years of digital work, maintaining a local backup should have been second nature.
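For anyone in a similar position, that exported archive is easy to turn into a durable local backup. Below is a minimal Python sketch that unpacks the export ZIP and saves each conversation as its own dated file. It assumes the archive contains a top-level conversations.json holding a list of conversation objects with "title" and "create_time" fields, which matches current exports but is not a documented, stable format.

```python
# Minimal sketch: turn a ChatGPT data export into a dated local backup.
# Assumption: the export ZIP contains a top-level "conversations.json"
# listing conversation objects with "title" and "create_time" fields.
# This matches exports observed to date; OpenAI may change the layout.
import json
import zipfile
from datetime import datetime, timezone
from pathlib import Path

EXPORT_ZIP = Path("chatgpt-export.zip")  # hypothetical name of the archive emailed by OpenAI
BACKUP_DIR = Path("chatgpt-backup")      # local destination folder

def backup_conversations(export_zip: Path, backup_dir: Path) -> int:
    backup_dir.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(export_zip) as zf:
        conversations = json.loads(zf.read("conversations.json"))

    for i, conv in enumerate(conversations):
        created = conv.get("create_time")
        stamp = (datetime.fromtimestamp(created, tz=timezone.utc).strftime("%Y-%m-%d")
                 if created else "undated")
        title = (conv.get("title") or "untitled").strip()
        # Keep filenames filesystem-safe.
        safe_title = "".join(c if c.isalnum() or c in " -_" else "_" for c in title)[:60]
        out_file = backup_dir / f"{stamp}_{i:04d}_{safe_title}.json"
        out_file.write_text(json.dumps(conv, ensure_ascii=False, indent=2),
                            encoding="utf-8")
    return len(conversations)

if __name__ == "__main__":
    count = backup_conversations(EXPORT_ZIP, BACKUP_DIR)
    print(f"Saved {count} conversations to {BACKUP_DIR}/")
```

Run after each export, a script like this leaves a plain folder of files that survives anything that happens to the account itself.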
In the end, this story is less about the dangers of artificial intelligence and more about an old, unchanging rule of digital life: never rely on a single platform — AI-powered or not — to safeguard your most important work.