Fail Fast, Fail Cheap

Why admitting you’re wrong early is a leadership capability, not a weakness

BlackBerry famously asked:

“Who would ever want a phone without a keyboard?”

It wasn’t a dumb question. It was a reasonable one, grounded in everything that had worked before. BlackBerry dominated enterprise communication. Physical keyboards were a genuine advantage for email-heavy workflows. The problem wasn’t the question. It was how long the answer was defended, even as the context around it changed.

That distinction matters, because most strategic failures don’t start with stupidity. They start with reasonable assumptions that quietly outlive their usefulness.

Why this matters even more with AI

All of this becomes more important, not less, when dealing with technologies like AI.

AI is volatile, fast-moving, and poorly understood in its second- and third-order effects. Models improve rapidly. Use cases evolve. What looks promising today can become obsolete within months. In such an environment, long-term certainty is an illusion.

The biggest risk with AI is not experimenting in the wrong direction once. The biggest risk is committing too early, scaling too far, and then defending an approach because too much has already been invested.

History suggests a simple rule: the earlier you admit you’re wrong, the cheaper it is to recover.

A practical checklist: are you learning, or are you defending?

Leaders often believe they are “learning” while they are, in reality, defending earlier decisions. With AI initiatives, the difference is subtle but measurable. The following questions help make that distinction explicit:

  • Are we still testing core assumptions, or only optimizing implementation?
  • Can we clearly articulate what would make us stop or significantly change direction?
  • Are negative signals being surfaced early, or explained away as “temporary noise”?
  • Are success metrics evolving based on learning, or fixed to justify prior investment?
  • Is the team rewarded for invalidating weak ideas, or only for delivering on approved ones?
  • Do we regularly revisit whether this problem is still worth solving with AI at all?

If these questions feel uncomfortable, that’s usually a signal, not a problem.

Failing fast doesn’t mean acting recklessly. It means creating enough honesty in the system that weak assumptions die early, before they become expensive.

How this ties back to the earlier posts

This post is not an isolated argument. It’s a continuation.

  • In the first post, I argued that AI strategy should emerge from experimentation, not from upfront certainty. Failing fast is what makes that possible.
  • In the second post, the focus was on problem-first thinking. Admitting you’re wrong early often means realizing you were solving the wrong problem.
  • The posts on leadership habits and environment highlighted that people won’t experiment honestly if failure admission is punished.
  • The exoskeleton metaphor reframed AI as leverage, not replacement. Leverage amplifies both good and bad decisions, which makes early correction even more critical.
  • And the discussion on incentives showed why organizations often defend the wrong direction: because the system quietly rewards persistence over learning.

Seen together, a pattern emerges.

AI doesn’t demand better predictions.

It demands better correction mechanisms.

Failure admission as a strategic asset

The uncomfortable truth is this: organizations don’t lose because they fail. They lose because they fail expensively. Leaders who outperform over time are not the ones who avoid wrong calls altogether. They are the ones who recognize wrong calls early, correct them decisively, and move on without trying to save face.

Failure admission is not a personality trait. It’s a capability. One that must be designed into how strategy is formed, reviewed, and adjusted.

BlackBerry wasn’t wrong to ask its question.

Google wasn’t brilliant because they predicted the future.

The difference was timing.
