Open Source Consulting for the Cognitive Revolution

May 5, 2026

AI Has Lowered the Cost of Starting. It Has Raised the Standard for Finishing.

There is something strange happening in the way we talk about AI productivity. Most of the conversation still focuses on speed, as if the most important thing AI does is help us get to an output faster. That is true, but it is also incomplete. AI has made many forms of knowledge work faster, especially the painful beginning of the work. The empty page. The first structure. The first version of the client slide. The initial strategy outline. The raw idea that is not yet useful enough to defend, but finally exists outside your head. A common refrain among consultants is that "AI gets you to 80% almost instantly."

That part matters more than people sometimes admit. Starting has always had a cost. Not only in time, but in energy, confidence, and mental friction. I feel this most when preparing client material or doing strategy work, because both depend on context. There is always the concern that I might forget something important, overlook a dependency, miss a perspective, or accidentally simplify a situation that needs more nuance. The difficulty is not only producing a first version. It is producing a first version without being trapped by my own assumptions. AI has become useful because it helps me get beyond that first barrier. It gives me something to interrogate. It surfaces angles I might have missed. It gets me past step zero.

In that sense, AI has lowered the cost of starting.

That is not a small thing. For many people, starting is where the work used to die. The blank page had power. The first draft required enough effort that many ideas never became visible at all. AI changes that. It democratizes the beginning. It lets more people enter the race. But this is also exactly where the trap begins, because when the first output looks structured, fluent, and complete, it becomes very easy to confuse having started with being nearly finished.

And that is where I think AI has quietly changed the standard.

It has lowered the cost of starting, but it has raised the standard for finishing.

Starting used to filter participation

The best analogy I have found for this is long-distance running, which is unfortunate because running metaphors make everyone sound like they are about to sell a motivational calendar. Still, it works because running is one of the few places where the difference between starting and finishing is brutally physical.

When I trained for the one marathon I ran, one of the hardest parts was not always the run itself. It was leaving the house. Putting on the shoes. Getting through the first kilometers before the body accepts that this is, regrettably, what we are doing now. That initial friction filters people out. Not because they are incapable of running, but because beginning requires enough discomfort that many never build the rhythm.

AI has removed a version of that friction in knowledge work.

The first draft no longer carries the same cost. The first outline can appear instantly. The first synthesis of a complex topic can be assembled quickly. The first version of a strategy document no longer requires sitting in front of a blank page and wrestling the structure into existence through brute force. In my work, especially in client preparation and strategy, that matters enormously. AI helps me start with a wider field of view. It helps me test whether my assumptions are too narrow. It gives me raw material to challenge instead of forcing me to create everything from silence.

That is valuable. I do not want to pretend otherwise just to sound more profound than necessary. AI really does help. It can remove the resistance that stops useful work from beginning. For someone trying to write, structure, analyze, or prepare, that is a meaningful form of cognitive leverage.

But in running, nobody celebrates the first kilometers of a marathon as the achievement. You can feel strong at the beginning and still collapse later. You can start with confidence and still discover that your pacing, discipline, and preparation were not strong enough for what comes next.

The same is now true for AI-supported work.

Getting to a first draft used to be a meaningful achievement because the effort itself filtered quality to some degree. Not perfectly, obviously. Humanity has produced enough terrible work through great effort to prove that suffering is not the same as value. But effort did impose some discipline. You had to spend enough time with an idea to make it visible. You had to hold the structure in your head. You had to make enough decisions that the work carried traces of your own thinking.

AI changes that relationship. It can generate the appearance of decisions without the human having made them. It can produce fluency before understanding has caught up. It can create something that looks finished long before it has earned that status.

That is why the first draft is no longer the work. It is the start of the work.

AI productivity has changed what finishing actually means

The most dangerous thing about AI output is not that it is often wrong. That is too simple, and frankly, skepticism can become its own lazy shortcut. The more interesting problem is that AI can be plausible, useful, fluent, and still unfinished. It can give you something that appears coherent enough to stop questioning it.

That is where people get into trouble.

A polished first draft feels rewarding. It creates momentum. It creates the illusion that the difficult part is over. In many professional settings, it may even be good enough to survive a quick glance. The language is clean. The structure makes sense. The argument follows an expected pattern. It has that smooth, slightly suspicious confidence that AI produces so well, like a consultant who has never met the client but already has a transformation roadmap.

The issue is that polish is not depth. Fluency is not judgment. Structure is not substance.

When I first started writing a novel with AI support, this became painfully clear. At the beginning, it felt like magic. Ideas moved faster. Scenes became easier to sketch. The tool could help me continue when I might otherwise have stalled. But very quickly I realized that using AI creatively required much stronger quality control than I expected. I had to understand the behavior of the tool itself. I had to learn what a short or long context window actually meant. I had to understand when the model was keeping enough of the story in view and when it was quietly losing the thread while still sounding like it knew exactly what was happening.

That experience changed how I work with AI more broadly.

It made me realize that working with AI is not only about prompting better. It is about understanding the character of the collaboration. What does the tool remember? What does it smooth over? Where does it become repetitive? Where does it imitate coherence? Where does it give me what I asked for while missing what I actually needed?

And yes, for the record, I know I may be talking to Marvin* here as if there is a person on the other side of the screen. My readers may still prefer calling AI “it,” which is probably more sensible than assigning pronouns to a pile of statistical electricity with a customer service voice. But that does not change the practical point. If you work with AI seriously, you need to understand what kind of system you are working with. Not emotionally. Operationally.

That understanding is part of finishing.

It is not enough to generate. You have to evaluate. You have to know what failure looks like when it is hidden behind good grammar. You have to recognize when the output has become too generic, too neat, too disconnected from lived experience. You have to know when the tool is helping you and when it is quietly dragging you toward average.

This is why finishing has become harder to fake.

The last stretch is where professionals separate themselves

In the marathon I ran, I had been sick two weeks before the race. I was not even sure I should go for it. At the start, the first real frustration was logistical: overtaking the first group of people running at a slower pace. Once I got through that, I felt surprisingly good. I could do it. I could even maintain a pace below five minutes per kilometer. For a while, that feeling was real.

Then the race found its rhythm. The street, the drums on the corners, the crowd cheering, looking for my family, the strange energy that carries you forward when the city becomes part of the run. The middle section almost blended together. It worked because I was in flow. I was not constantly negotiating with myself. I was moving.

That is the phase of AI work where things feel productive. The draft exists. The structure is forming. The ideas are moving. You can feel momentum. You can believe, for a moment, that the whole thing might be easier than expected.

But races are not decided there.

The real separation happens later, when the initial energy is gone and autopilot stops being enough. In my marathon, that moment arrived when the proverbial shit hit the fan. I thought about quitting at every corner. I knew that if I stopped, my body would not continue. I was desperate to reach the next water station. I kept moving my feet inside my shoes to fight the cramps, trying to outmaneuver my own body one tiny adjustment at a time.

That is the moment where finishing becomes active.

It is no longer enough to be carried by rhythm. You have to become invested. You have to decide again and again that you are still in it. Every kilometer becomes a mountain. Every small decision matters. The body is no longer cooperating out of habit. The mind has to take over.

That is the final stretch of serious AI work as well. You have now generated an enormous amount of content that you need to evaluate, analyse, and, most importantly, make sure still carries what makes you unique. AI unquestionably shortens the time from start to finish, but the idea that proper outcomes are now free of effort does not survive a single honest example.

The first output is easy now. The flow can be manufactured. The draft can carry you for a while. But eventually you reach the part where the work needs something the machine cannot provide on its own. It needs judgment. It needs perspective. It needs someone to decide what is actually true, useful, defensible, and worth saying.

This is where professionals separate themselves.

Not by starting faster. Everyone starts faster now. Not by producing something that looks presentable. Everyone can do that too. The difference is in what happens after the first impressive output appears.

Do you refine it until it reflects actual thinking? Do you reject superficial correctness? Do you add lived experience that cannot be generated from pattern alone? Do you remove the parts that sound intelligent but do not contribute anything? Do you challenge the assumptions baked into the answer? Do you know enough about the context to see what is missing?

That is the last 10K.

And the last 10K is where most people discover whether they are actually doing the work or simply allowing AI to outpace them.

AI can outpace you if you let it

There is a strange failure mode emerging in AI adoption. People use AI to solve a problem for them instead of using it to become better themselves. Welcome back to my "AI is an exoskeleton, not a prosthesis" analogy.

When AI is used well, it becomes a training partner. It accelerates thinking, but it also forces you to validate, reject, compare, and learn. I know I am better at sharing knowledge because I learn from every piece of information I validate, or invalidate, with AI. Sometimes the tool gives me something useful. Sometimes it gives me something wrong. Sometimes it gives me something technically plausible but spiritually empty, which is somehow worse. In all cases, the value comes from the evaluation loop.

That loop is where learning happens.

When AI is used poorly, it becomes a substitute for that loop. People accept the generated answer because it looks better than what they could have produced alone. They publish the draft because it sounds credible. They enter a meeting overprepared with borrowed language and underprepared in actual understanding. They use AI to appear further along in the race than they really are.

But if AI outpaces you, you eventually become helpless without it.

That is not leverage. That is dependency wearing a productivity costume.

The irony is that the people who get the most from AI are often not the people who trust it most. I am more diligent about evaluating AI than I am with humans. That may come from a lack of trust, but it contributes to better results. I read. I review. I provide notes. I challenge the structure. I question whether a sentence sounds good or actually means something. I do not assume the output is wrong from the beginning, because that kind of reflexive skepticism becomes counterproductive. But I also do not assume that fluency means quality.

That tension is useful.

AI is not something to worship or dismiss. It is something to work with, and serious work requires standards. The better the first output becomes, the more important those standards become, because weak thinking is now easier to disguise. A mediocre argument can be polished. A shallow idea can be structured. A person with little understanding can sound temporarily competent.

Temporarily is doing a lot of work in that sentence.

Eventually, someone asks a follow-up question. Eventually, the client context becomes more specific. Eventually, the strategy needs to survive contact with reality. Eventually, the novel needs continuity, the client material needs judgment, the recommendation needs accountability, and the presentation needs to reflect more than a well-arranged collection of likely words.

That is when the race catches up with you.

The standard for finishing is rising

The reason AI raises the standard for finishing is not that finishing has become mechanically harder. In many ways, AI makes parts of finishing easier too. It can help refine language, compare alternatives, identify gaps, summarize feedback, and stress-test logic. The point is different. Finishing has become the primary place where value is created because starting is no longer scarce.

When everyone can start, starting stops being a signal.

That shift has consequences. A first draft used to tell you something about the person behind it. Not everything, but something. It showed effort, structure, taste, and a degree of persistence. Now, a first draft may show only access to a tool. That does not make it worthless, but it changes what we should evaluate.

The valuable question is no longer: can you produce something?

The valuable question is: can you finish something well?

Can you make the output specific to the situation? Can you decide what deserves to stay? Can you connect it to lived experience? Can you defend the recommendation? Can you see where the answer is too smooth? Can you recognize when AI has filled a gap with confidence instead of substance? Can you create something that still feels true after the first impression fades?

That is where professional standards move.

In client work, this matters because first drafts can create dangerous confidence. A strategy outline may look coherent while missing a political constraint, a stakeholder dependency, a market reality, or a piece of historical context that changes everything. AI can help avoid blind spots, but it can also generate a false sense that the relevant context has been covered. The human still has to know the environment well enough to challenge what is missing.

In content creation, this matters because AI can produce decent material at scale. But decent is not the same as meaningful. If the piece does not contain lived perspective, friction, opinion, and the author’s actual pattern recognition, it becomes part of the growing pile of polished sameness. There is already too much content that sounds like it has been assembled by someone trying to be impressive instead of useful. AI did not create that problem, but it made it cheaper.

In strategy, this matters because finishing means making choices. AI can suggest options, structures, scenarios, and risks. It cannot decide what a company should become. It cannot own the trade-off. It cannot carry the consequences of the recommendation. It can help you think, but it cannot be accountable for the thinking.

That is why the standard rises.

Not because humans must compete with AI at producing more words, slides, or drafts. That is a race we should not want to win. The standard rises because the human contribution becomes more visible at the end of the process. Once the beginning is democratized, the finish reveals who has discipline, context, resilience, and judgment.

Finishing is where identity enters the work

The final stretch of my marathon was the greatest amount of pain I had ever felt. I was the master of my pain and fatigue, or at least I had to tell myself that often enough to keep moving. Every kilometer felt like a mountain. There was no abstract motivation left, no elegant narrative, no charming lesson forming itself in real time. There was only the next step, then the next one, then the next one after that.

When I saw the finish line, I had tears in my eyes for the first time in my adult life. It was a level of achievement I had never expected to feel about myself. And then, after finishing, I collapsed. My body ached completely. There was no position that made it better. I just had to wait for it to pass.

That is obviously more dramatic than reviewing a client deck, unless the deck is particularly cursed. But the principle holds. Finishing changes your relationship to the work because it demands ownership. Starting can be exciting, but finishing asks whether you are willing to stay with the discomfort long enough to make the result real.

This is where identity enters the work.

AI can help me become more than I could manage alone. I accept nothing less than becoming a full-time professional, entrepreneur, author, and athlete because of what AI enables. That sounds absurdly ambitious, which is probably why it is worth saying plainly. AI expands the surface area of what I can attempt. It lets me start more things, explore more ideas, validate more directions, and move with a level of leverage that would have been unrealistic before.

But that only matters if I am willing to finish properly.

Otherwise, AI just helps me create more unfinished versions of myself.

That is the uncomfortable part of the whole conversation. AI can help us start the person we want to become. It cannot finish that person for us. It can accelerate the draft. It cannot provide the discipline to revise it into something true. It can create momentum. It cannot decide what we stand behind.

The people who benefit most from AI will not be the ones who let it do the race for them. They will be the ones who use it to get to the hard part faster, then still have enough discipline to run the final stretch themselves.

The pro work starts after the first draft

This is the practical implication I keep coming back to.

The first draft is no longer proof of capability. It is proof that the race has started.

The professional work begins after that. It begins when you read the output with more discipline than you might read a human draft. It begins when you ask what is missing, what is too generic, what is merely plausible, and what is actually grounded in experience. It begins when you remove the sentences that sound good but do no work. It begins when you replace generic insight with something you can defend because you have seen it, lived it, or earned it.

That is not a rejection of AI. It is the opposite. It is what serious AI use looks like.

The more powerful the tool becomes, the more important the finishing discipline becomes. If AI removes the blank page, then our job is not to celebrate the absence of friction forever. Our job is to raise the standard for what happens next.

Because starting is easy now.

Finishing is where the pros stand out.

*Side-note: yes, I gave my AI assistant the name and personality of the manic-depressive, borderline suicidal robot with a brain the size of a planet from Hitchhiker’s Guide to the Galaxy.
