Open Source Consulting for the Cognitive Revolution

Creating the Conditions for Excellence

Why AI adoption scales like good leadership, not good mandates

“Arouse in the other person an eager want.”

— Dale Carnegie, 1936

While we spend a lot of time talking about new operating models, the dawn of AI, and the behavioral patterns of younger generations entering the workforce, it might seem odd to reach back nearly a century for guidance. And yet, the relevance is hard to ignore. As organizations become faster, more distributed, and harder to centrally control, one truth becomes increasingly obvious: mandates scale poorly, while self-motivation scales on its own. That dynamic hasn’t changed, even if the tools, technologies, and vocabulary around it have.

AI, for all its novelty and promise, doesn’t escape this rule. In fact, it amplifies it.

Why mandates break faster in modern organizations

Most modern organizations are already operating beyond the point where direct control is effective. Decision-making has moved closer to the edges, expertise is fragmented across roles and teams, and outcomes depend far more on judgment and coordination than on simple execution. In such systems, telling people what to do rarely produces the desired result. At best, it leads to surface-level compliance. At worst, it creates quiet resistance, workarounds, and disengagement. The organization looks busy, progress is reported, and yet real value quietly leaks out elsewhere.

AI initiatives often fall into exactly this trap. The language is familiar: “We need AI.” “We must become AI-driven.” “We will roll this out across the organization.” The intent may be good, but the framing is revealing. It treats AI as something to be imposed rather than something to be adopted, and the results tend to follow the same disappointing pattern.

Why AI adoption behaves like good OKRs

Over the years, I’ve seen this dynamic play out repeatedly with OKRs. When OKRs are treated as control mechanisms, they fail. They turn into thinly disguised targets, optimized for reporting rather than contribution, disconnected from meaning and ownership. People comply because they have to, not because they believe in the outcome. The system functions, but it doesn’t learn.

When OKRs work, they do something fundamentally different. They create direction without prescribing behavior. They articulate why something matters and leave room for teams to figure out how best to contribute. Ownership emerges not because it is demanded, but because people see how their work connects to something larger. AI adoption behaves in much the same way. The question is not whether AI can be mandated. It can. The real question is whether it can be owned, and ownership only appears when people experience AI as a tool that helps them do better work, not as a requirement they need to satisfy.

Leadership as environment design

This is where leadership shifts from instruction to environment design. A useful analogy here is a professional kitchen. A great head chef doesn’t cook every dish, nor do they dictate every movement. Instead, they prepare the environment. Ingredients are ready. Tools are sharp. Friction is removed before service begins. The team still applies skill, taste, and judgment, and responsibility remains firmly with the people doing the work.

Good leadership works the same way. AI, in this analogy, is not the dish. It’s the prep work done in advance. It reduces unnecessary effort, shortens feedback loops, and frees up attention for the parts of the job that actually require human judgment. When leaders focus on creating these conditions, adoption no longer needs to be pushed. It’s pulled, because people can clearly see how it helps them succeed.

Why control kills contribution

The fastest way to kill AI adoption is to treat it as a compliance exercise. As soon as people sense that a tool is being introduced to monitor, standardize, or replace them, behavior changes in predictable ways. Curiosity disappears. Experimentation stops. People do just enough to avoid standing out. This isn’t a cultural failure or a generational issue. It’s a rational response to poorly designed systems.

The opposite dynamic is just as predictable. When people feel trusted, when direction is clear, and when they can see how a tool enables them to deliver better outcomes, adoption becomes self-reinforcing. Teams share what works. Practices spread horizontally rather than through formal rollouts. Leadership often notices only after the fact, when patterns have already formed. That’s not a loss of control. That’s what scale actually looks like in complex systems.

What actually scales

If AI is going to scale inside organizations, it won’t be because of better rollout plans or louder mandates. It will be because leaders learned how to create environments where contribution is both possible and meaningful. That requires restraint, clarity, and a willingness to let go of the illusion that control produces outcomes.

AI doesn’t change the fundamentals of leadership. It exposes them. And the organizations that appear, from the outside, to have “figured out AI early” usually didn’t succeed because of superior technology. They succeeded because they understood something much older: that if you want people to contribute at their best, you have to give them a reason to want to.
