
The 23% Problem: Why Most AI Automation Doesn't Justify Itself

Rodrigo Zerlotti · April 14, 2026 · 4 min read

Twenty-three percent.

That's the number most leadership teams don't want to hear after signing AI contracts, hiring digital transformation consultants, and announcing ambitious automation roadmaps.

Only about 23% of typical company work tasks justify AI automation today, according to McKinsey analysis and operational data from companies that have deployed agents at scale. Not because the technology fails. Because the remaining tasks don't pass basic economic viability criteria.

The problem isn't AI. It's the expectation that it works for everything.

Most companies don't have an automation strategy. They have an automation budget and a list of tasks that seem annoying enough to delegate to a system. That's not strategy. It's resource allocation dressed up as modernization.

The result is predictable: automations that save three hours a week while consuming 40 hours of setup, maintenance, and error correction. Workflows that function 80% of the time and break down on the 20% that actually matters. Systems that require constant supervision to produce output a junior human would deliver with half the attention.

This isn't technological failure. It's a diagnostic failure.

The Viability Gap

Before deciding what to automate, there's a filter that separates tasks that benefit from AI from those that simply resist it. Four criteria determine this separation.

First: structural repeatability. Does the task follow a pattern consistent enough for a trained system to reliably recognize inputs and produce outputs? Not surface-level repetition; structural repetition. Answering emails seems repetitive. But each email carries relational context and nuance that varies enormously. Processing invoices within a standardized template is structurally repetitive.

Second: error tolerance. What's the cost of a mistake? Miscategorizing expenses is recoverable. Sending a sales proposal with the wrong price can cost a client. AI systems make errors. The criterion isn't whether errors occur; it's whether errors are tolerable in the context of the task.

Third: data availability for training and verification. Can the task be evaluated objectively? Does a baseline of correct outputs exist to calibrate the system? Without quality data, there's no quality automation; just the illusion of automation.

Fourth: cost of human judgment. How much of the task's value resides in judgment that depends on implicit context, human relationships, situational ethics, or genuine creativity? The higher this component, the greater the resistance to productive automation.

Tasks that pass all four filters comprise, on average, 23% of work in services and SaaS companies. In logistics and manufacturing operations, the number can reach 40%. In strategic consulting and high-value creation, it falls below 10%.

Why 77% Resist

The most common mistake is confusing "technically possible" with "economically justified."

AI agents today reliably handle: first drafts of code, support ticket triage, document classification, structured data extraction, scheduling, and communication routing. These are tasks with high structural repeatability, reasonable error tolerance, and sufficient training datasets.

What agents still do unreliably: negotiate in high-ambiguity contexts, maintain long-term relationships with emotional nuance, make ethical judgments in unforeseen situations, generate genuinely novel conceptual connections, and adapt communication to complex interpersonal dynamics.

These tasks don't resist automation because the technology is incapable. They resist because error costs are high, structural variability is large, and the value resides precisely in the human judgment the task demands.

Forcing automation into these areas doesn't increase efficiency. It increases supervision costs and creates a productivity theater that erodes internal trust.

Identifying Your 23%

The right diagnosis starts with mapping work, not tools.

List the hundred tasks that consume the most hours in your operation. For each one, apply the four viability gap criteria, scoring 1 to 3 on each dimension, where 3 means most favorable to automation (so a task that depends heavily on human judgment scores low on the fourth criterion). Tasks with a total score above 9 are serious automation candidates.
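The scoring pass above can be sketched in a few lines. The four criteria and the greater-than-9 threshold come from the article; the task names and the individual scores below are illustrative assumptions, not real data.

```python
# Hypothetical scoring pass over the viability gap criteria.
CRITERIA = (
    "structural_repeatability",  # consistent enough input/output patterns
    "error_tolerance",           # mistakes are recoverable in context
    "data_availability",         # a baseline of correct outputs exists
    "low_judgment_cost",         # inverted: 3 = little human judgment needed
)

def is_candidate(scores: dict) -> bool:
    """A task is a serious automation candidate if its total exceeds 9."""
    return sum(scores[c] for c in CRITERIA) > 9

# Illustrative examples (scores are assumptions for demonstration):
invoice_processing = dict(zip(CRITERIA, (3, 3, 3, 3)))  # total 12
email_replies      = dict(zip(CRITERIA, (2, 2, 1, 1)))  # total 6

print(is_candidate(invoice_processing))  # True
print(is_candidate(email_replies))       # False
```

The point of the mechanical threshold is not precision; it forces every task through all four filters instead of letting one appealing dimension carry the decision.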

Then calculate real ROI, not projected ROI. Real ROI includes: setup and integration time, ongoing maintenance costs, supervision costs, costs of undetected errors, and retraining costs when the process changes. Subtract these from time saved. What remains is net benefit.
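The real-ROI subtraction is simple arithmetic once the cost components are named. The components match the article's list; the function name, parameter names, and the numbers in the example are hypothetical, with one-time setup amortized into the first year.

```python
# Sketch of the "real ROI" calculation: time saved minus setup,
# maintenance, supervision, undetected-error, and retraining costs.
def net_annual_benefit(hours_saved_per_week: float,
                       hourly_rate: float,
                       setup_hours: float,                    # amortized into year one
                       maintenance_hours_per_week: float,
                       supervision_hours_per_week: float,
                       undetected_error_cost_per_year: float,
                       retraining_hours_per_year: float) -> float:
    gross_savings = hours_saved_per_week * 52 * hourly_rate
    overhead_hours = (setup_hours
                      + (maintenance_hours_per_week + supervision_hours_per_week) * 52
                      + retraining_hours_per_year)
    return gross_savings - overhead_hours * hourly_rate - undetected_error_cost_per_year

# The "saves three hours a week, consumes 40 hours of overhead" pattern
# from earlier in the article, with assumed figures:
net = net_annual_benefit(hours_saved_per_week=3, hourly_rate=50,
                         setup_hours=40, maintenance_hours_per_week=2,
                         supervision_hours_per_week=1,
                         undetected_error_cost_per_year=1000,
                         retraining_hours_per_year=10)
print(net)  # negative: this automation does not justify itself
```

A negative result here is the diagnostic failure the article describes: the projected ROI looked positive because the projection counted only hours saved.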

In most diagnostics run with operators, 15% to 30% of mapped tasks pass the filter. This range converges toward 23% not by accident; it's where current technology meets the operational reality of most companies.

The Real Competitive Advantage

Companies that automate everything waste time and capital on systems that require constant support. Companies that automate nothing lose operational capacity to more agile competitors.

The advantage is in the middle: knowing precisely what belongs in your 23%.

That precision isn't trivial. It requires honest diagnosis of what is structurally repeatable versus what appears repetitive but carries hidden variability. It requires willingness to not automate tasks that seem like obvious candidates. It requires a culture that separates real modernization from innovation theater.

It's not about how much you automate. It's about what you choose to automate.

A company that automates the right 23% with surgical precision operates with a structural advantage over competitors that implement automation indiscriminately. The first group has systems that work. The second group has dashboards that impress until the real process needs to scale.

The gap between these two positions will widen over the next two years, not narrow. Models become more capable every cycle. That gradually expands the 23%; but the ability to precisely identify where that percentage sits in each specific operation remains human judgment.

That clarity, today, is what separates automation as advantage from automation as cost.


Zerlotti exists for operators who know that the difference between automation that works and automation that performs lies in the diagnosis, not the tool.
