Open Source Consulting for the Cognitive Revolution

Why admitting you’re wrong early is a leadership capability, not a weakness

BlackBerry famously asked:

“Who would ever want a phone without a keyboard?”

It wasn’t a dumb question. It was a reasonable one, grounded in everything that had worked before. BlackBerry dominated enterprise communication. Physical keyboards were a genuine advantage for email-heavy workflows. The problem wasn’t the question. It was how long the answer was defended, even as the context around it changed.

That distinction matters, because most strategic failures don’t start with stupidity. They start with reasonable assumptions that quietly outlive their usefulness.

Why this matters even more with AI

All of this becomes more important, not less, when dealing with technologies like AI.

AI is volatile, fast-moving, and poorly understood in its second- and third-order effects. Models improve rapidly. Use cases evolve. What looks promising today can become obsolete within months. In such an environment, long-term certainty is an illusion.

The biggest risk with AI is not experimenting in the wrong direction once. The biggest risk is committing too early, scaling too far, and then defending an approach because too much has already been invested.

History suggests a simple rule: the earlier you admit you’re wrong, the cheaper it is to recover.

A practical checklist: are you learning, or are you defending?

Leaders often believe they are “learning” while they are, in reality, defending earlier decisions. With AI initiatives, the difference is subtle but measurable. The following questions help make that distinction explicit:

  • Are we still testing core assumptions, or only optimizing implementation?
  • Can we clearly articulate what would make us stop or significantly change direction?
  • Are negative signals being surfaced early, or explained away as “temporary noise”?
  • Are success metrics evolving based on learning, or fixed to justify prior investment?
  • Is the team rewarded for invalidating weak ideas, or only for delivering on approved ones?
  • Do we regularly revisit whether this problem is still worth solving with AI at all?

If these questions feel uncomfortable, that’s usually a signal, not a problem.

Failing fast doesn’t mean acting recklessly. It means creating enough honesty in the system that weak assumptions die early, before they become expensive.

How this ties back to the earlier posts

This post is not an isolated argument. It’s a continuation.

  • In the first post, I argued that AI strategy should emerge from experimentation, not from upfront certainty. Failing fast is what makes that possible.
  • In the second post, the focus was on problem-first thinking. Admitting you’re wrong early often means realizing you were solving the wrong problem.
  • The posts on leadership habits and environment highlighted that people won’t experiment honestly if failure admission is punished.
  • The exoskeleton metaphor reframed AI as leverage, not replacement. Leverage amplifies both good and bad decisions, which makes early correction even more critical.
  • And the discussion on incentives showed why organizations often defend the wrong direction: because the system quietly rewards persistence over learning.

Seen together, a pattern emerges.

AI doesn’t demand better predictions.

It demands better correction mechanisms.

Failure admission as a strategic asset

The uncomfortable truth is this: organizations don’t lose because they fail. They lose because they fail expensively. Leaders who outperform over time are not the ones who avoid wrong calls altogether. They are the ones who recognize wrong calls early, correct them decisively, and move on without trying to save face.

Failure admission is not a personality trait. It’s a capability. One that must be designed into how strategy is formed, reviewed, and adjusted.

BlackBerry wasn’t wrong to ask its question.

Google wasn’t brilliant because they predicted the future.

The difference was timing.
