When Old Habits Get in the Way of New Challenges

In many organizations, the hardest part of tackling something new isn’t the challenge itself. It’s unlearning what used to work.

I’ve seen leaders genuinely excited to take on new problems, new markets, or new technologies, while still relying on habits, operating models, and incentives that were shaped in very different environments.

That tension shows up quickly.

Success Has Gravity

Past success creates gravity. It pulls leaders back toward familiar ways of deciding, governing, and measuring progress.

This becomes especially visible when leaders move from large, stable organizations into smaller or faster-moving ones. Behaviors that once made perfect sense at scale can quietly become constraints:

- decision-making optimized for risk avoidance instead of speed
- incentive models that reward predictability while expecting change
- governance structures designed for control, not learning

None of this is malicious. It’s human. Experience is valuable precisely because it worked before. The challenge is recognizing when the environment has changed enough that old reflexes start doing harm.

Why AI Amplifies the Problem

AI doesn’t create these tensions. It exposes them.

AI introduces new possibilities, but it also acts as a stress test for organizational alignment. Because the technology is broad and flexible, it surfaces questions that organizations can often postpone with more traditional initiatives:

- Who is actually allowed to experiment?
- Where are decisions made, and how quickly?
- What gets rewarded when outcomes are uncertain?
- How much autonomy do teams really have?

When these questions aren’t answered consistently, AI initiatives tend to produce a lot of activity with very little impact.

Alignment Is Not a Slogan

This is where alignment becomes more than a leadership buzzword.

AI experimentation, business objectives, leadership behavior, and incentive mechanisms all need to point in the same direction. Miss one, and progress stalls.

I often see OKRs play a critical role here, not because they magically create alignment, but because they make misalignment visible.

Used poorly, OKRs turn into another layer of reporting. Used well, they become a coordination mechanism. The difference lies in how they’re applied.

OKRs as a Lever for Collective Intelligence

The most effective use of OKRs I’ve seen in AI contexts is not prescribing use cases top-down, but aligning on outcomes and trusting teams to find the best path forward.

When teams understand:

- what success looks like
- why it matters
- how it connects to broader goals

they're far more capable of discovering where AI actually helps in their day-to-day work.

This approach leverages the competence of the entire organization instead of centralizing intelligence in a small group of experts. It also increases ownership, learning speed, and ultimately adoption.

AI works best when it amplifies existing judgment, not when it tries to replace it.

Leadership Is the Real Constraint

None of this is primarily a technology challenge.

AI strategy fails most often when leaders underestimate how much they themselves need to adapt: how they lead, how they decide, and how they measure success.

New tools make new behaviors necessary. Old habits don’t disappear on their own.

The organizations that make real progress with AI are usually not the ones with the best models or platforms. They’re the ones where leaders are willing to let go of familiar playbooks and create space for new ways of working to emerge.

That’s the real work behind AI strategy.
