There is a narrative spreading across companies right now that feels both familiar and dangerously convenient. It is the idea that AI in companies will automatically improve performance. That by introducing generative models, copilots, and agents into existing workflows, organizations will somehow become faster, smarter, and more effective without fundamentally changing how they operate. It is a narrative that promises progress without confrontation, acceleration without discipline, and results without uncomfortable reflection.
I have seen this movie before.
Not with AI specifically, but with every meaningful technological shift over the past fifteen years. New tools arrive with the promise of transformation, and organizations rush to adopt them, hoping that capability alone will solve structural problems. Sometimes it works, but only in very specific conditions. More often, the technology does something far less comforting.
It exposes what was already there.
AI is no different. If anything, it is the most unforgiving version of this pattern so far. Because unlike previous technologies, it does not just introduce new capabilities. It removes friction, increases speed, and amplifies output across almost every layer of an organization. And when you amplify something, you do not change its nature. You reveal it.
That is the part many companies underestimate.
AI is not guaranteed to improve your organization.
It will expose it.
The Illusion of Progress Through AI in Companies
When companies say they are adopting AI, what they often mean is that they are adding a layer of capability on top of existing systems. A copilot here, an agent there, some automation wrapped around a workflow that was already struggling. The expectation is that this additional layer will smooth out inefficiencies, compensate for weaknesses, and unlock performance gains that were previously out of reach.
In reality, what happens is something very different.
The introduction of AI in companies does not replace the need for clear thinking, clean processes, or aligned ownership. It operates within whatever structure already exists. If that structure is coherent, the results can be impressive. If it is not, the results are amplified versions of the same problems, just faster, more polished, and harder to detect at first glance.
This is where the connection to my recent article on cognitive leverage becomes important. At an individual level, AI does not make you better by default. It mirrors and multiplies your thinking. At an organizational level, the same principle applies. AI mirrors and multiplies how your company operates.
And that is where things start to get uncomfortable.
The First Fracture: Lack of Priority
The most consistent pattern I have seen across companies is not a lack of ambition, but a lack of priority. There is always talk of commitment. Leadership teams align around the importance of transformation. Roadmaps are created, initiatives are launched, and communication flows freely about the strategic relevance of the new technology.
And then reality sets in.
The same organizations that claim to prioritize transformation are also afraid of interruption. They want to introduce AI without disrupting existing processes, without challenging existing incentives, and without forcing teams to fundamentally change how they work. AI becomes an addition, not a replacement. An extracurricular activity layered on top of already overloaded systems.
That is where the first cracks appear.
When AI is treated as something optional rather than foundational, its usage remains immature. People experiment, but they do not commit. They use it for isolated tasks, but they do not integrate it into their core workflows. They generate outputs, but they do not take responsibility for them. The result is a pattern of shallow usage that produces equally shallow outcomes.
This is not a limitation of the technology.
It is a reflection of the organization.
And it is also the first place where ethical considerations quietly emerge. When AI is used without real ownership, without clear intent, and without accountability for outcomes, the risk is not just inefficiency. It is poor judgment at scale. That is a conversation worth having in its own right, but for now it is enough to recognize that immature usage does not stay contained. It compounds.
The Second Fracture: Broken Processes and Dirty Data
One of the most tangible examples of this dynamic from my own experience has nothing to do with AI at all. It goes back to my time as Head of Product for an ERP SaaS platform used by advertising agencies to manage projects, resources, and commercial performance.
One of the core capabilities of that system was time tracking. Not because anyone particularly enjoyed it, but because it was the foundation for understanding project profitability, resource allocation, and overall business performance. Without reliable time tracking, the entire system was operating on assumptions rather than data.
The introduction of the tool did not create the need for time tracking. That requirement had always been there. What the tool did was make it visible.
And that is where things became ugly.
Many employees resisted using the system properly. Some resisted out of habit, others out of frustration, and some simply because they did not see the value. They had always tracked their time in some form, but the new system made their input directly visible and measurable. It was supposed to create transparency where there had previously been ambiguity.
Instead of improving the situation, this led to a compounding series of problems. Hours were not tracked accurately, entries were delayed or skipped, and the dataset that the system relied on became unreliable. From a technical perspective, the system was functioning exactly as intended. From a business perspective, it was producing misleading insights.
Leadership relied on those insights to make decisions. Decisions about pricing, staffing, and project prioritization were based on data that appeared structured and credible, but was fundamentally flawed. The consequences were not immediate, but they were real. Margins were misunderstood, resources were misallocated, and strategic decisions were made on top of an unstable foundation.
The system did not fail.
The organization did.
The tool exposed a behavior that had always existed, but had previously been hidden behind manual processes and limited visibility. Once exposed, that behavior did not disappear. It shaped the output of the system, and through it, the decisions of the company.
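To make that mechanism concrete, here is a minimal sketch with invented numbers; none of these figures come from the actual platform. When a third of the hours never reach the system, the margin leadership sees looks healthy while the real margin tells a very different story.

```python
# Hypothetical illustration: how missing time entries distort project margins.
# All figures are invented for the example; nothing here reflects real client data.

billed_revenue = 100_000      # what the agency invoiced for the project
hourly_cost = 90              # fully loaded internal cost per hour
actual_hours_worked = 900     # hours the team really spent
tracked_hours = 600           # hours that actually made it into the system

def margin(revenue: float, hours: float, rate: float) -> float:
    """Return project margin as a fraction of revenue."""
    return (revenue - hours * rate) / revenue

reported = margin(billed_revenue, tracked_hours, hourly_cost)
real = margin(billed_revenue, actual_hours_worked, hourly_cost)

print(f"Reported margin: {reported:.0%}")  # 46% -- looks healthy
print(f"Real margin:     {real:.0%}")      # 19% -- the number leadership never saw
```

Nothing in that calculation is broken. The formula is correct, the system works, and every report built on top of it is still wrong.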
This is exactly the same pattern we are now seeing with AI in companies.
Companies assume that generative models and agentic systems will somehow compensate for imperfect data and broken processes. There is a belief that probabilistic systems can “figure it out” where deterministic systems struggled. That they can smooth over inconsistencies, fill in gaps, and produce reliable outputs even when the underlying inputs are flawed.
What actually happens is far more dangerous.
AI does not fix bad data.
It hides it behind confidence.
When agents operate on top of unreliable datasets, they do not stop. They generate outputs that look coherent, plausible, and actionable. They produce summaries, recommendations, and decisions that carry the weight of structure and language, but not the substance of truth. And because those outputs are well-formed, they are more likely to be trusted.
The problem is not just that errors occur.
It is that they become invisible.
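The difference between a system that hides this and one that surfaces it often comes down to a single check. Here is a deliberately simplified, hypothetical sketch: the first function mimics the behavior described above, producing a fluent answer regardless of input quality, while the second refuses to be confident when data coverage is too thin. Everything in it, from the entries to the 90% threshold, is invented for illustration.

```python
# Hypothetical sketch: a reporting step that happily summarizes incomplete data,
# next to a guard that refuses to report when coverage is too low.

from dataclasses import dataclass

@dataclass
class TimeEntry:
    project: str
    hours: float

def confident_summary(entries: list[TimeEntry], expected_hours: float) -> str:
    """Produces a well-formed answer no matter how bad the inputs are
    (note: it ignores expected_hours entirely)."""
    total = sum(e.hours for e in entries)
    return f"Project on track: {total:.0f}h logged, margin healthy."

def guarded_summary(entries: list[TimeEntry], expected_hours: float) -> str:
    """Refuses to sound confident when the data cannot support it."""
    total = sum(e.hours for e in entries)
    coverage = total / expected_hours if expected_hours else 0.0
    if coverage < 0.9:  # assumed threshold: require 90% of expected hours
        return (f"Insufficient data: only {coverage:.0%} of expected hours "
                f"are tracked. No reliable conclusion possible.")
    return f"{total:.0f}h logged against {expected_hours:.0f}h expected."

entries = [TimeEntry("rebrand", 300), TimeEntry("rebrand", 350)]  # 650 of ~900h
print(confident_summary(entries, 900))  # coherent, plausible, and wrong
print(guarded_summary(entries, 900))    # surfaces the gap instead of hiding it
```

The second function is not smarter. It simply knows what it does not know, which is exactly the property that well-formed, confident output tends to erase.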
The Third Fracture: Lack of Ownership
The final pattern that consistently emerges in these situations is a lack of ownership. Not in the sense that no one is responsible for the system, but in the sense that no one is truly accountable for its outcomes.
In many organizations, failure is still something to be avoided rather than learned from. When new technologies are introduced, there is an implicit risk. If the initiative does not deliver the expected results, someone might be blamed. That creates a natural incentive to minimize exposure, to experiment cautiously, and to avoid taking full responsibility for the outcome.
AI does not remove that dynamic. It amplifies it.
When outputs are generated quickly and at scale, the question of ownership becomes even more critical. Who is responsible for the quality of those outputs? Who ensures that they are grounded in reality? Who intervenes when something goes wrong?
If the answer to those questions is unclear, the system operates without a feedback loop. Mistakes are not corrected, because no one owns them. Improvements are not made, because no one is incentivized to make them. The system continues to produce output, and the organization continues to consume it, without a mechanism for accountability.
This is where AI becomes not just a mirror, but an amplifier of cultural dynamics.
If accountability is weak, it becomes weaker at scale. If ownership is unclear, it becomes more fragmented. And if responsibility is avoided, the consequences become harder to contain.
When These Fractures Connect
Individually, each of these issues is manageable. A lack of priority can be addressed through leadership alignment. Broken processes can be redesigned. Data quality can be improved. Ownership can be clarified.
The real risk emerges when they are combined.
When AI is introduced into an environment where transformation is not truly prioritized, processes are flawed, data is unreliable, and ownership is unclear, the result is not incremental improvement.
It is compounded failure.
AI accelerates usage without depth. It amplifies outputs based on weak inputs. It distributes responsibility across systems that no one fully controls. And it does all of this at a speed that makes it difficult to detect and correct issues before they propagate. This dynamic is explored further in the core concept of the Trust Tax.
What used to be small, contained problems become large, interconnected ones.
The organization does not just struggle. It scales its struggles.
When Activity Looks Like Progress
One of the most dangerous aspects of this dynamic is that it often looks like progress.
More output is generated. More insights are produced. More decisions are made. Dashboards are populated, reports are created, and workflows appear to move faster. From a distance, the organization looks more productive, more advanced, and more aligned with the future.
But activity is not the same as value.
If the underlying system is flawed, increasing the volume of output does not improve the outcome. It simply creates more of what was already there. And because the outputs are polished, structured, and delivered quickly, they create a false sense of confidence.
This is where external examples start to matter. Cases like Deloitte’s well-publicized issues with the improper use of AI in Australia illustrate that even highly capable organizations are not immune to this pattern. When the underlying processes and controls are not aligned with the capabilities of the technology, the consequences can extend beyond inefficiency into real-world impact.
AI does not create those risks.
When AI is pushed to scale a system that was not built on solid foundations, it exposes them.
What a Clean Organization Looks Like
If AI is not the solution, what is?
The answer is less about technology and more about discipline.
A clean organization is one where priorities are clear and enforced. Where transformation is not treated as an optional initiative, but as a fundamental shift in how the company operates. Where processes are designed with intention, and data is treated as a critical asset rather than an afterthought.
It is also an organization where ownership is explicit. Where people are not punished for taking responsibility, but rewarded for it. Where accountability is not avoided, but embraced as a mechanism for improvement.
In that environment, AI becomes a true multiplier, because there is something worth multiplying.
A clean organization scales its strengths with AI.
A broken one scales its problems.
When Visibility Becomes Unavoidable
The most important consequence of all of this is visibility.
Before AI, many organizational flaws could be hidden behind complexity, delay, and limited transparency. Problems existed, but they were difficult to isolate, and even harder to prove. That allowed organizations to operate in a state of partial awareness, where issues were known, but not fully confronted.
AI changes that.
By increasing speed, reducing friction, and amplifying output, it removes many of the buffers that used to obscure reality. It makes patterns more visible, both good and bad. It highlights inconsistencies, gaps, and misalignments that were previously buried in the noise of day-to-day operations.
And once those patterns are visible, they are difficult to ignore.
AI is not a shortcut to better performance.
It is a stress test for everything underneath it.
And for many organizations, it is the first time they are seeing the full picture of how they actually operate.
Not how they think they operate. Not how they present themselves. But how they truly function.
That is not a problem to be solved with more AI.
It is a reality to be addressed with better thinking, better systems, and better accountability.