A viral post by entrepreneur Matt Shumer has been bouncing around X (Twitter) for a simple reason: it translates a technical shift into a social one. His core message is blunt: software engineers were not the target, they were the first domino. Now AI labs will try to replicate what happened in coding across other knowledge jobs.
The vibe: "This feels like February 2020"
— Matt Shumer (@mattshumer_) February 10, 2026
Shumer frames the moment with a pandemic-style analogy: a period where most people hear the warnings, shrug, and keep living normally, until a fast cascade makes "it's probably fine" look naïve in hindsight.
The point isn't the exact comparison. It's the psychological pattern: weak signals, social disbelief, then rapid normalization.
He says he's writing for "non-tech friends and family" because the polite cocktail-party version of AI ("useful tool, sometimes wrong") no longer matches what power users are seeing in paid, frontier models.
In his telling, the perception gap is the real risk: people can't prepare for what they refuse to notice.
His big claim: coders weren't singled out, coding was the on-ramp
Shumer argues that AI labs prioritized coding first for a strategic reason: code is both a domain and a lever. If models get good at writing and debugging software, they can accelerate the creation of the next generation of models, tooling, and infrastructure. In other words, coding wasn't merely a lucrative use case; it was the shortest feedback loop.
That loop, he suggests, has now matured. And once "AI that can ship software" exists, the natural next step is to apply the same playbook (benchmarks, tooling, product integrations, and agentic workflows) to other white-collar work.
Not "in a decade," he insists, but within a small number of years.
What changed (in his words): from helper to finisher
The most provocative part of the thread is experiential rather than theoretical. Shumer describes a shift from "AI as autocomplete" to "AI as an end-to-end worker." He claims that instead of iterative back-and-forth, he can describe a product in plain English, step away, and return to something close to a finished result: architecture choices made, UI flows defined, code written, and even basic testing and iteration performed.
He goes further and claims something that used to be treated as a boundary: not just correctness, but "judgment" and "taste." Whether you agree with that interpretation or not, it captures what's actually unsettling about modern systems: they increasingly look like decision-makers, not just calculators.
The "you tried AI and it was mediocre" rebuttal
Shumer anticipates the most common pushback: many people tested AI earlier, found hallucinations and shallow answers, and mentally filed it under "impressive demo, unreliable tool." His counter is that the timeline matters.
He claims that the delta between older public experiences and current frontier experiences is so large that it makes casual skepticism dangerously outdated.
He also emphasizes an adoption detail that often gets ignored: most people judge AI through free tiers, default settings, or lightweight prompts. Power users, he argues, are running the best models available, pushing them with real documents and real workflows (contracts, spreadsheets, memos, structured decisions) and seeing capabilities that don't show up in quick "ask it a question" tests.
Speed: the "task length" idea (and why it's more intuitive than benchmark scores)
One of the more concrete concepts in the thread is about autonomy measured in time: how long a task (as measured by expert human effort) a model can complete successfully end-to-end without help. Shumer references the idea that this "time horizon" has been increasing, from minutes to hours, and that the slope appears to be steepening.
The exact numbers matter less than the mental model. If systems can reliably finish multi-hour tasks, they can begin to swallow roles that are largely composed of chained, screen-based tasks: research → synthesize → draft → revise → produce deliverable → sanity check → ship.
That's a lot of modern office work.
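The exponential framing behind the "time horizon" idea can be made concrete with a toy model. The starting horizon and doubling period below are placeholder assumptions chosen for illustration, not measured values:

```python
# Illustrative sketch of the "time horizon" mental model: if the length of
# task a model can finish autonomously doubles on a fixed schedule, small
# horizons become multi-hour horizons quickly. All parameters are assumed.

def horizon_after(months: float, start_minutes: float = 30.0,
                  doubling_months: float = 6.0) -> float:
    """Task length (in minutes of expert human effort) a model could
    finish end-to-end, assuming the horizon doubles every
    `doubling_months` months from a `start_minutes` baseline."""
    return start_minutes * 2 ** (months / doubling_months)

# Under these assumed parameters, a 30-minute horizon today extrapolates to:
for m in (0, 6, 12, 24):
    print(f"after {m:>2} months: ~{horizon_after(m) / 60:.1f} hours")
```

The point of the sketch is only the shape of the curve: under any steady doubling assumption, the jump from "minutes" to "a full workday" takes a couple of years, not decades, which is exactly the intuition the thread leans on.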
"AI is helping build the next AI" (his acceleration argument)
Shumer highlights a second accelerant: using AI to improve the process of building and deploying AI. This is the recursive loop that makes people reach for phrases like "intelligence explosion." Even without sci-fi assumptions, the practical version is straightforward: if models make researchers and engineers meaningfully more productive, progress compresses.
His conclusion is that we should expect faster iteration cycles, faster deployment, and faster diffusion into products. Whether you find the leap convincing or not, the directionality is hard to dispute: AI is already part of the tooling stack used to create software, documentation, tests, analysis, and operational workflows.
Jobs: not "one skill at a time," but broad cognitive substitution
Shumer's employment warning isn't about a single task category. He argues that the disruptive nature of AI comes from generality: reading, writing, summarizing, analyzing, and deciding are cross-domain primitives.
If those primitives get cheaper and better, the surface area is massive: law, finance, accounting, consulting, content, research, customer support, and internal operations.
Importantly, he doesn't claim every job disappears overnight. His thread reads more like: junior work gets hollowed out first; "assistant-level" throughput rises; and headcount needs shift as one person plus AI can output what a larger team once did.
His practical advice: get ahead of the curve instead of being paralyzed by it
Shumer ends with prescriptions. A few themes repeat:
Use AI like a coworker, not a search engine. Feed it real inputs from your work, ask it to produce real deliverables, and iterate until you understand what it can and can't do.
Don't protect your ego. He argues that status doesn't help if the workflow is changing; experimentation does.
Build basic financial resilience. Not because collapse is guaranteed, but because volatility is likely.
Move toward what's harder to replace. Trust-based relationships, physical-world work, regulated responsibility, and roles where accountability can't be fully offloaded.
For younger generations, optimize for adaptability. He suggests the "safe career ladder" advice may age poorly in a world where tools and job definitions mutate quickly.
The counterweight: where his thread may overreach
The reason this post sparked controversy is also obvious: it's written in a "wake up now" register. That can blur an important distinction between capability and deployment. A model doing something in a controlled workflow is not the same as an industry adopting it at scale, integrating it safely, assigning liability, updating regulation, and changing budgets and org charts.
There's also the reliability problem: "good enough to amaze" is not always "good enough to replace." Many domains (medicine, law, finance) aren't just about producing plausible text. They require verifiable correctness, stable behavior under edge cases, auditability, and clear responsibility when errors occur.
Still, even critics tend to concede the thread's central usefulness: it forces a shift in framing. Not "AI will get better someday," but "AI is already changing workflows, and the next wave is about moving the coding playbook into other forms of knowledge work."
Shumer's thread is less a forecast with perfect timestamps and more a signal about direction: coding was the proving ground because it offered the tightest loop between models, tools, and measurable output.
Now that loop is being recreated elsewhere, and the practical question becomes personal: are you treating AI as a novelty you occasionally test, or as a force you deliberately train with, before your competitors do?