In many organizations, the hardest part of tackling something new isn’t the challenge itself. It’s unlearning what used to work.
I’ve seen leaders genuinely excited to take on new problems, new markets, or new technologies, while still relying on habits, operating models, and incentives that were shaped in very different environments.
That tension shows up quickly.
Success Has Gravity
Past success creates gravity. It pulls leaders back toward familiar ways of deciding, governing, and measuring progress.
This becomes especially visible when leaders move from large, stable organizations into smaller or faster-moving ones. Behaviors that once made perfect sense at scale can quietly become constraints:
- decision-making optimized for risk avoidance instead of speed
- incentive models that reward predictability while expecting change
- governance structures designed for control, not learning
None of this is malicious. It’s human. Experience is valuable precisely because it worked before. The challenge is recognizing when the environment has changed enough that old reflexes start doing harm.
Why AI Amplifies the Problem
AI doesn’t create these tensions. It exposes them.
AI introduces new possibilities, but it also acts as a stress test for organizational alignment. Because the technology is broad and flexible, it surfaces questions that organizations can often postpone with more traditional initiatives:
- Who is actually allowed to experiment?
- Where are decisions made, and how quickly?
- What gets rewarded when outcomes are uncertain?
- How much autonomy do teams really have?
When these questions aren’t answered consistently, AI initiatives tend to produce a lot of activity with very little impact.
Alignment Is Not a Slogan
This is where alignment becomes more than a leadership buzzword.
AI experimentation, business objectives, leadership behavior, and incentive mechanisms all need to point in the same direction. Miss one, and progress stalls.
I often see OKRs play a critical role here, not because they magically create alignment, but because they make misalignment visible.
Used poorly, OKRs turn into another layer of reporting. Used well, they become a coordination mechanism. The difference lies in how they’re applied.
OKRs as a Lever for Collective Intelligence
The most effective use of OKRs I’ve seen in AI contexts is not prescribing use cases top-down, but aligning on outcomes and trusting teams to find the best path forward.
When teams understand:
- what success looks like
- why it matters
- how it connects to broader goals

they’re far more capable of discovering where AI actually helps in their day-to-day work.
This approach leverages the competence of the entire organization instead of centralizing intelligence in a small group of experts. It also increases ownership, learning speed, and ultimately adoption.
AI works best when it amplifies existing judgment, not when it tries to replace it.
Leadership Is the Real Constraint
None of this is primarily a technology challenge.
AI strategy fails most often when leaders underestimate how much they themselves need to adapt: how they lead, how they decide, and how they measure success.
New tools make new behaviors necessary. Old habits don’t disappear on their own.
The organizations that make real progress with AI are usually not the ones with the best models or platforms. They’re the ones where leaders are willing to let go of familiar playbooks and create space for new ways of working to emerge.
That’s the real work behind AI strategy.