AI tools don’t fail because they lack capability. They fail because using them requires a cognitive shift that people don’t sustain. This is what I call the Interface Tax in AI.
The Illusion of Capability
Most AI tools are impressive.
They generate text, summarize documents, analyze data, produce visuals, and increasingly behave like capable collaborators. In isolation, many of them work surprisingly well. This is why demonstrations are so convincing. In a controlled setting, with a clean prompt and a clear objective, the output often feels like a step-change improvement. Faster, better, more flexible.
The conclusion seems obvious: this should make people significantly more productive.
And yet, in real work environments, that outcome is far less consistent. The tools are used, but not relied on. They are explored, but not embedded. They are appreciated, but not trusted as part of the core workflow. The capability is there. The impact is not.
What gets lost between those two states is not technology. It is fit.
Because real work is not a sequence of isolated tasks. It is a continuous flow of context, intent, and partial progress. Any tool that breaks that flow introduces friction that no feature can compensate for.
The Interface Tax
That gap is rarely explained by model quality or missing features. More often, it is explained by something far less visible: the cost of interaction.
Using AI rarely happens in isolation. It sits inside a broader workflow. You move between tools, reframe intent, copy information across boundaries, and then try to bring the result back into the system where work actually happens. Each individual step feels negligible. Together, they accumulate into something that is anything but.
This is what I think of as the Interface Tax: the cognitive and operational cost required to access and integrate a capability into real work.
Unlike licensing or infrastructure, this cost is not measured. But it is paid continuously by the people expected to use the tool. Not once, but every time they try to make it useful.
And like any tax, it compounds. Not linearly, but faster, because each payment fragments attention, interrupts flow, and erodes trust in the tool itself.
In my own work, I see this in two very simple ways. First, I forget tools exist. Not because they are bad, but because I never found a natural place for them in my workflow. If I have to think about when to use something, I usually don’t. Work gets in the way.
Second, I abandon tools when they add to the challenge. Early Copilot promised to draft presentations I could refine. In practice, I spent more time fixing what it produced than I would have spent doing it myself. The same happened when I first tried using a prompt in Synthesia to generate the structure for a longer video. The result was unusable. Even though the product improved later, I never went back to that approach.
The interface tax is not just friction. It is memory, trust, and habit breaking down at the same time.
The Cognitive Shift That Breaks Adoption
The real problem is not just the number of steps. It is the mental shift required to perform them.
Most AI tools force users to step out of their current workflow. They interrupt momentum, require translation of intent into prompts, and return output in a form that still needs interpretation. This is not just interaction overhead. It is a break in how thinking flows.
And that break is where adoption starts to fail.
Because tools that require a conscious decision to use tend to remain optional. And optional tools, in environments where attention is scarce and pressure is high, quietly disappear.
You cannot build reliable productivity on something that depends on remembering. If a tool requires the user to think, “I should use this now,” it is already competing with everything else that demands attention.
Real productivity tools do not behave like that. They become part of habit. They disappear into the workflow. They are not triggered consciously; they are simply there.
The moment a tool requires intention instead of enabling intention, it starts losing.
When Capability Gets Shoehorned Into Work
There is a familiar pattern that appears whenever a new capability emerges. Organizations recognize its potential and attempt to integrate it broadly. The logic is straightforward: if it is powerful, it should be available everywhere.
This was visible in many early Copilot-style implementations. AI was embedded across documents, presentations, emails, and spreadsheets. On paper, this looked like progress.
In practice, something else happened. The outputs were often generic, misaligned with context, and not immediately usable, which meant people had to do additional work to make them useful. They had to reinterpret, rewrite, or simply discard what was produced.
The result was predictable. The feature existed everywhere, but usage remained low. Not because people resisted AI, but because the interface increased effort instead of reducing it.
When integration increases cognitive load, adoption collapses.
This is the classic failure mode of treating capability as a layer instead of as part of the workflow.
Why “Adding a Tool” Requires Commitment
There is a subtle but important misconception in how organizations approach new tools. They assume that adding a capability to the toolkit is enough. That once something is available, people will naturally incorporate it into their work.
In reality, adding a tool requires a decision. Not just a technical one, but a behavioral one.
It requires people to change how they work, to accept a different interaction model, and to trust that the effort of adaptation will pay off. That threshold is not crossed through availability. It is crossed through experience.
This is where “wow moments” matter.
In a session at Synthesia Live, I argued that adoption does not start with enablement. It starts with a moment where the user experiences something that was previously impractical becoming suddenly easy. A proposal that would have taken hours to record manually becomes a structured, high-quality video in minutes.
But even then, that is not enough. If that capability is not embedded into the workflow where proposals are actually created, it remains an occasional trick. Impressive, but not repeatable.
I have seen the opposite failure from a different angle. Tools like Coda.io were incredibly powerful in isolation, but required constant switching because they were not properly integrated with my generative AI tool. The result was friction. Even when alternatives like Notion offered integration, reliability issues broke trust. The outcome was not migration. It was regression. Back to Excel.
You do not adopt a tool. You redesign the path to an outcome.
Where Productivity Quietly Disappears
AI is often described as a way to reduce effort. In practice, it frequently redistributes it.
Instead of doing the work directly, users prepare inputs, manage tool boundaries, validate outputs, and reconstruct context. None of these actions are difficult on their own. But they fragment attention.
And fragmentation carries a cost that is rarely captured in metrics. Every interruption forces a reset. The user has to rebuild their understanding of the task, re-evaluate the direction, and decide whether the output can be trusted.
The tool may save effort in isolated steps. But it disrupts the continuity required for meaningful progress.
This is why many AI tools feel both powerful and exhausting at the same time.
Because they reduce effort per action, while increasing effort per workflow.
What Good Actually Looks Like
The contrast becomes clear when the interface tax is low enough to disappear.
In those cases, the workflow does not feel interrupted. It evolves. Intent flows into execution, and execution feeds directly into iteration. Context is preserved. Output is immediately usable. The user does not need to step out of their thinking to make the tool work.
In my own work, this is what determines whether I use a tool consistently or not.
Before I set up my current workflow, content creation felt unnecessarily manual. Copying, pasting, reformatting. In a world where generative AI exists, that kind of friction feels irrationally frustrating.
Now, using a custom MCP (Model Context Protocol) server that allows generative AI to interact with this very website, the interaction feels different. It feels less like using a tool and more like working with an assistant. One that understands context, carries it forward, and builds on it.
The process looks like this:
- I suggest initial topics based on patterns I observe in my work.
- The system expands those ideas into structured directions, adding depth and alternative angles.
- I refine the direction, adding constraints, experience, and positioning.
- The system asks targeted questions to extract specific examples and real-world context.
- I respond once, in context, without restarting the process.
- The system generates a structured draft directly in my environment.
There is no repeated transfer of information. No need to rebuild context. No fragmentation between ideation, refinement, and execution.
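For the technically curious, the wiring behind this is smaller than it sounds. Here is a minimal sketch of what an MCP server along these lines can look like in Python, using the official MCP SDK. The tool name save_draft, the endpoint, and the token handling are illustrative assumptions, not my actual implementation.

```python
# Hypothetical, minimal MCP server exposing one tool an AI client can call
# to save a structured draft directly to a website. Endpoint, payload shape,
# and auth are placeholders for whatever CMS the site actually runs.
import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("site-publisher")

@mcp.tool()
def save_draft(title: str, body_markdown: str) -> str:
    """Create a draft post on the website and return its URL."""
    response = httpx.post(
        "https://example.com/api/drafts",  # hypothetical endpoint
        json={"title": title, "body": body_markdown},
        headers={"Authorization": "Bearer <token>"},  # injected via config in practice
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["url"]

if __name__ == "__main__":
    mcp.run()  # stdio transport by default, so the AI client calls save_draft in-context
```

Once an AI client is connected to a server like that, saving a draft becomes a single in-context step rather than a copy-and-paste excursion. The capability lives where the work happens.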
The moment this clicked for me was when the output started to feel like an extension of my own thinking. Not generic. Not detached. But aligned with how I would structure and express the idea myself.
That is the point where trust emerges. And without trust, there is no adoption.
Why Interface Tax Scales Worse in Enterprises
At an individual level, interface tax is frustrating. At an organizational level, it becomes destructive.
In enterprise environments, workflows are rarely linear. They span multiple tools, teams, and layers of responsibility. Information moves across systems. Decisions are distributed. Context is fragmented by design.
In that environment, even small inefficiencies compound quickly.
A single additional step is not just one extra action. It is multiplied across roles, across handovers, across repeated processes. What feels like a minor inconvenience in isolation becomes a structural drag on the entire system.
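To put deliberately hypothetical numbers on it: ninety extra seconds per interaction, ten interactions a day, across five hundred people comes to roughly 125 hours of lost time every day, before counting the cost of the context switches themselves.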
I have seen this play out in organizations that invested heavily in building their own internal “companyGPT” solutions. The tools were stable, well-developed, and secure. But employees had to remember a separate URL, deal with limitations compared to tools they used privately, and continuously bridge the gap between systems.
The result was predictable. Usage looked acceptable on paper. In reality, value remained limited.
Because the problem was never access. It was integration.
What feels like a usability issue at the edge becomes an operating model problem at scale.
And once that happens, adding more capability does not solve the issue. It amplifies it.
From Capability to Leverage
AI does not transform work by making tasks possible. It transforms work by making them natural enough to repeat.
That threshold is not determined by raw capability. It is determined by whether the interface preserves flow, reduces friction, and removes the need for conscious activation.
Until that threshold is crossed, most AI tools remain in an awkward position. They are impressive, but underused. Available, but not relied on. Powerful, but peripheral.
The Real Constraint
Organizations often assume that the limiting factor is capability, access, or training. In many cases, it is none of these.
The real constraint is whether a tool fits into how work actually happens. Whether it aligns with human behavior. Whether it reduces cognitive overhead instead of introducing it.
The interface tax determines that outcome.
And until it is addressed, most AI investments will continue to deliver less impact than expected. Not because they do not work, but because they do not fit.