Manufacturing Momentum Before You Write the Playbook
Most AI strategies do not fail because they are poorly designed. They fail because they are designed too early, at a moment when everyone involved is right.
IT is right to insist on infrastructure and security. Legal is right to demand compliance. HR is right to question cultural implications. Operations is right to protect stability. Executives are right to search for competitive advantage. None of these positions are misguided. And yet when every function asserts its legitimate claim simultaneously, alignment becomes negotiation, negotiation becomes delay, and delay slowly drains the energy from the initiative.
The result is a strategy that is technically robust, politically vetted, and practically inert. Organizations attempt to govern something that has not yet proven its value. They write the playbook before anyone has scored. AI, however, does not respond well to that order of operations. It is not infrastructure waiting to be installed; it is leverage waiting to be experienced. Without that experience, strategy remains speculation.
Momentum Precedes Governance
The shift I observed in semi-successful AI initiatives never began with a better presentation. It began with a contained experiment.
A department quietly ran a proof of concept. A team adopted an assistant to remove a specific friction in their workflow. A narrowly defined use case delivered something tangible: not abstract advantage, but measurable relief. Time compressed. Clarity improved. Effort dropped. The result was not theoretical value; it was lived benefit.
That moment changed the conversation more effectively than any steering committee could. It manufactured momentum.
Once people could see real leverage in action, the debate stopped circling around whether AI belonged in the organization and moved toward how to scale it responsibly. Governance did not disappear; it became purposeful. Compliance discussions were no longer defensive abstractions but structured negotiations around something already working. Centralization began to coordinate success rather than attempt to predict it.
Momentum preceded governance. Not the other way around.
This inversion challenges traditional leadership instincts, particularly in environments where strategy is equated with control. But AI behaves differently. Until it has been felt at the operational level, no amount of structural planning will generate traction.
What Leadership Seeks vs. What Employees Feel
Leadership intent is usually clear and rational: gain an edge.
At the employee level, however, the emotional landscape is more complex. When AI strategy is announced rather than discovered, those closest to the work often experience not excitement but exposure. A tool presented as leverage can easily be interpreted as a signal of replacement, especially when the individuals most affected were never invited into the conversation.
This is not resistance born of ignorance. It is a predictable response to uncertainty. The expertise that could amplify AI’s impact withdraws precisely when it feels excluded from shaping how the tool is used.
Empowerment cannot be mandated into existence. It must emerge from participation. And participation requires proof — proof that the tool removes friction rather than simply adding oversight.
The Hybrid That Actually Wins
The initiatives that gained traction shared a specific structural tension: clarity from the top, experimentation from the bottom.
Company objectives matter. Without directional alignment, experimentation drifts and local optimizations fail to translate into organizational advantage. But experimentation generates belief. When employees create visible “wow” moments that clearly support strategic goals, intrinsic motivation strengthens. Peers observe the impact, curiosity spreads, and a subtle shift occurs: people begin to ask not whether AI is useful, but how they can use it effectively.
This is where organizations move beyond rollout and begin manufacturing momentum.
At that point, strategy stops being predictive and becomes reflective. It codifies success that has already demonstrated its relevance. Governance, then, does not constrain innovation; it stabilizes it.
My Own Resistance
I was not immune to skepticism.
For months, I treated AI assistants as occasional curiosities. I tested them sporadically, more to understand the hype than to rely on them. The promise felt inflated, and the value seemed situational at best. Like many others, I assumed that if the leverage were truly transformative, it would be obvious.
It was not obvious. It was experiential.
Only when I forced myself to identify a genuine friction in my own workflow (not an impressive demo, but a recurring irritation) did the relationship change. Once I experienced cognitive relief rather than novelty, my behavior shifted permanently. The tool itself had not evolved overnight; my understanding of how to integrate it had.
Organizations frequently attempt to skip this stage. They enable features and expect adoption to follow. But enablement teaches mechanics. Empowerment removes friction. Only one changes behavior in a lasting way.
Rewarding the Right Risk
AI compresses validation cycles and reduces the cost of exploration. It allows ideas to be tested, refined, or abandoned faster than traditional processes ever allowed. That structural shift should logically alter how organizations think about experimentation.
Yet many still apply the psychological weight of traditional capital investments to AI initiatives. The rhetoric of “fail fast, fail cheap” is attractive until failure becomes visible. When employees sense that experimentation carries reputational risk disproportionate to its cost, they revert to caution.
If AI lowers the barrier to trying, incentive structures must evolve to match. Those who identify high-leverage use cases and test them responsibly generate outsized value, even when early iterations are imperfect. The organizations that internalize this do not celebrate recklessness; they reward disciplined curiosity.
The universe has always favored those who dare to experiment. AI simply makes that daring more accessible.
The Replacement Horizon
At present, AI functions primarily as an exoskeleton. It enhances capable individuals, amplifies expertise, and accelerates cognition. Attempts at full replacement have often proven expensive, unstable, or strategically underwhelming. Enhancement, by contrast, consistently produces advantage.
But it would be naïve to assume that this balance is permanent.
If iterative improvement continues, the enhancement phase may gradually give way to substitution in domains currently considered protected. Society is not yet structurally prepared to absorb a significant reduction in cognitive labor, even if such a shift ultimately leads to abundance rather than scarcity. The same force that can eliminate mental drudgery could, if mismanaged, create profound dislocation.
There will never be a shortage of arguments against progress. But the volume of objections does not alter its inevitability.
The present moment, where enhancement dominates replacement, is a window. Organizations can use it to build cultures that empower people to generate leverage responsibly. Or they can wait until external pressure forces reactive transformation.
Only one path manufactures momentum before it becomes mandatory.