There is a quiet assumption sitting underneath much of the current enthusiasm around generative AI, and it is rarely questioned because it feels so intuitive that it almost sounds like common sense. The assumption is that AI will make us better. Better writers, better thinkers, better strategists, better professionals. It promises a kind of acceleration that seems to collapse the gap between where we are and where we would like to be, and for a moment, it feels like the most democratic upgrade to human capability we have ever seen.
But that assumption about cognitive leverage is flawed in a very specific and very important way.
There is no guarantee that AI will make you better.
There is, however, an absolute certainty that it will make you visible.
And once you understand that, you start to see a pattern that is becoming increasingly difficult to ignore. The same system that can elevate someone operating at a high level into something truly exceptional will just as reliably expose someone who is trying to operate above their actual depth. Not because it fails, but because it does exactly what it is supposed to do.
AI is not a shortcut to cognitive leverage. It is a mirror and a multiplier.
It reflects what is already there, and then it amplifies it. It is not unlike Captain America and the Red Skull, both of whom were enhanced by the same Super Soldier Serum.
The Misconception Behind Cognitive Leverage
Cognitive leverage is often misunderstood as a capability upgrade. It is tempting to believe that having access to a powerful model automatically raises the level at which you can operate, that it fills in the gaps in your thinking, compensates for your blind spots, and somehow turns rough ideas into refined outcomes without requiring the underlying work.
That is not what happens. Not yet, at least.
What actually happens is more subtle and more unforgiving.
If your thinking is clear, structured, and grounded in real understanding, AI will amplify that clarity. It will help you articulate ideas more precisely, explore them more deeply, and connect them more effectively. It becomes an extension of your best thinking, allowing you to move faster without sacrificing quality.
If your thinking is vague, inconsistent, or built on shaky assumptions, AI will do exactly the same thing. It will take that vagueness and make it sound convincing. It will take those inconsistencies and smooth them out just enough that they become harder to detect at first glance. It will turn shallow ideas into well-structured paragraphs that look impressive until someone with actual expertise takes a closer look.
In both cases, the system is working perfectly.
The difference is not in the AI. The difference is in the human in the loop: you.
This is where the idea of AI as a mirror becomes critical. It does not judge, it does not correct unless explicitly instructed to do so, and it does not question the premise unless prompted. It takes what you give it, interprets it through patterns it has learned, and produces an output that is statistically aligned with what you asked for.
If what you asked for is grounded in real understanding, the result will feel insightful.
If what you asked for is not, the result will feel convincing but hollow.
This is exactly where the idea of the Interface Tax becomes relevant. When friction disappears, the system stops hiding weaknesses. It stops slowing you down just enough to mask gaps in thinking. Instead, it accelerates everything, including the parts that were previously obscured by process and effort.
The Rise of Synthetic Authority
One of the most obvious places where this dynamic plays out is in what could be described as synthetic authority: people suddenly showing up with polished perspectives, structured arguments, and confident language in areas where they have not demonstrated any prior depth or curiosity.
At first glance, this can be impressive. The language is clean, the structure is logical, and the conclusions seem well-formed. But there is something missing, and it becomes apparent the moment you engage with it beyond the surface.
There is no friction.
There are no edges, no uncertainty, no genuine exploration of the topic. There is no sense that the ideas have been wrestled with, challenged, or refined through real experience. Instead, what you get is a perfectly assembled version of what someone thinks the answer should sound like.
This is where AI exposes rather than elevates.
Because the moment the conversation moves beyond prepared statements into actual dialogue, the gap becomes obvious. Questions that require depth are met with generalities. Challenges are deflected with rephrasing rather than engagement. The ability to generate content is there, but the ability to think through it in real time is not.
AI did not create that gap. It revealed it. And it revealed it faster than any previous tool could have.
This dynamic is closely tied to what behavioral economics would describe as incentive-driven behavior. When the reward system favors visibility over substance, tools that increase output will naturally amplify that tendency. A clear example is LinkedIn changing how its algorithm rewards content. Unlike LinkedIn, AI does not change the incentive. It accelerates the behavior that the incentive produces.
When Preparation Replaces Curiosity
A similar pattern emerges in more interactive settings, particularly in situations that traditionally relied on curiosity, exploration, and the exchange of ideas. Conversations, workshops, collaborative problem-solving sessions, the kinds of environments where value is created not just through knowledge, but through the interaction between people.
There is a growing tendency to approach these situations with what feels like an overwhelming level of preparation. Perfectly structured talking points, anticipated objections, pre-assembled insights, all supported by AI-generated material that gives the impression of depth and readiness.
On the surface, this looks like a step forward. Who would argue against being better prepared?
But something important gets lost in the process.
Curiosity.
When one side of a conversation is overly optimized for output, the dynamic changes. Instead of exploring ideas together, the interaction becomes a one-sided delivery of pre-constructed answers. Instead of building on each other’s thoughts, one person attempts to outpace the other with speed and volume.
This is where AI as a multiplier becomes problematic.
Because it does not just amplify your knowledge. It amplifies your intent.
If your intent is to engage, to explore, and to genuinely understand, AI can enrich that interaction. It can help you bring in relevant context, connect ideas, and respond more thoughtfully.
If your intent is to dominate, to appear more knowledgeable than you actually are, or to make the other person irrelevant, AI will amplify that as well.
And the result is often the opposite of what was intended.
Instead of elevating the conversation, it shuts it down.
Instead of demonstrating capability, it exposes insecurity.
Because real expertise does not need to overpower a conversation. It knows when to ask questions, when to listen, and when to build on what is being said.
AI cannot compensate for the absence of that instinct.
It can only make it more visible.
The Compounding Effect of Shallow Inputs
The most dangerous aspect of this dynamic is not that AI can produce shallow outputs. That has always been possible with or without technology. The real issue is that AI introduces a compounding effect.
When someone operates without sufficient depth, the errors are not just preserved. They are amplified.
A weak understanding leads to a poorly framed prompt. A poorly framed prompt leads to an output that sounds coherent but is based on flawed assumptions. That output is then used as the basis for further work, further prompts, further conclusions.
At each step, the system builds on what came before, reinforcing the original weakness while making it harder to detect.
This is how you end up with outputs that look increasingly sophisticated while drifting further away from what actually matters.
It is not a failure of the technology.
It is a failure of the input.
And because the output is polished, the feedback loop becomes dangerous. It creates a false sense of confidence, a belief that the system is working in your favor, when in reality, it is simply accelerating you in the wrong direction.
This is the part that many people underestimate.
AI does not just reflect and amplify. It compounds.
Which means that small gaps in understanding can turn into significant deviations over time.
Let AI Make You More Awesome. Not Someone Else’s Awesome
There is a simple way to think about all of this, even if it is not always comfortable.
Let AI make you more awesome.
But do not use it to become someone else’s version of awesome.
The temptation to imitate is strong, especially when the tools make it so easy. You can replicate tone, structure, and style with a level of precision that would have been impossible before. You can produce outputs that resemble the work of people who have spent years developing their voice and their thinking.
But resemblance is not the same as substance. And the more you rely on imitation, the more you distance yourself from the one thing AI cannot generate for you. Your own perspective.
That perspective is built over time, through experience, through failure, through curiosity, and through the willingness to engage with problems even when the answers are not immediately clear. It is shaped by the specific combination of things you have seen, done, and understood.
AI can help you articulate that. It cannot replace it. And when you try to use it as a substitute, it shows. Not always immediately, but inevitably.
Because sooner or later, you will be asked to go beyond the script. To explain, to defend, to adapt, to think in real time. And in that moment, the difference between something that is yours and something that was assembled for you becomes impossible to hide.
The Real Opportunity
All of this might sound like a critique, but it is actually an opportunity.
Because if AI is a mirror and a multiplier, then it can also be one of the most powerful tools for improvement we have ever had.
It can show you where your thinking is unclear. It can highlight gaps in your knowledge. It can surface inconsistencies that you might not have noticed otherwise. It can challenge you to refine your ideas, to ask better questions, and to engage more deeply with the topics you care about.
But only if you use it that way. Only if you are willing to see what it reflects. Only if you are willing to treat the output not as a finished product, but as a starting point for further thinking.
That requires a shift in mindset.
From using AI to produce » To using AI to think.
From using AI to impress » To using AI to improve.
That is where cognitive leverage actually lives, and it connects directly to the broader question of AI collaboration in companies, where the same principles scale from individuals to systems.
Where This Leads
If you look at the broader picture, this dynamic extends beyond individuals.
The same principles apply at the organizational level, where AI does not just amplify individual behavior, but also exposes the strengths and weaknesses of entire systems. Processes that are inefficient become more obviously inefficient. Gaps in knowledge become more visible. Misalignments between teams become harder to ignore.
What we are seeing at the individual level is just the beginning.
Because when AI starts operating across workflows, systems, and teams, the compounding effect becomes even more pronounced. Small issues do not just stay small. They scale.
Which raises a different question: If AI is already exposing what is missing at the individual level… What happens when it starts doing the same for entire organizations?
That is the question that defines the next phase of the Cognitive Revolution.