Governance Follows Momentum

The Illusion of Starting With Governance

Organizations like to believe innovation begins with governance.

Before experimentation starts, committees assemble. Legal wants clarity on liability. Information security wants guarantees around data handling. IT wants architectural oversight. Leadership wants a strategy.

None of these concerns are misguided. In fact, they are entirely justified.

But something subtle happens when every function tries to define the rules of a technology before that technology has created any visible value inside the organization. Alignment turns into negotiation. Negotiation turns into delay. And delay slowly drains the energy from the initiative.

The paradox appears quickly: the organization is trying to govern something that does not yet exist in practice.

In reality, most meaningful technological shifts inside organizations follow a different path. Someone somewhere proves that a tool creates value. Colleagues notice. Momentum builds. Only then does governance step in to stabilize and scale it.

Governance, in other words, rarely starts innovation.

It usually arrives the moment the organization realizes the experiment is working.

The Risk Is Real, and Ignoring It Is Not the Answer

This does not mean experimentation should happen recklessly.

Using a tool that has not yet been formally approved by legal or information security can introduce real risks. Data protection, intellectual property exposure, contractual implications, and reputational damage are exactly the problems governance exists to prevent.

Pretending those risks do not exist is irresponsible.

But treating governance as a wall that experimentation must somehow climb over is equally counterproductive. When employees feel they must hide their curiosity, the organization loses visibility over how new tools are actually being used.

A far more productive model is co-creation.

Instead of positioning legal and information security as final gatekeepers who evaluate technology after experimentation has already spread, organizations can invite them into the exploration process itself. Their role shifts from veto power to design partner.

This approach acknowledges something important: responsible experimentation requires governance. Not to stop the experiment, but to help shape it in a way that keeps both innovation and risk management aligned.

When governance participates early enough, it gains something invaluable: context. Legal and information security are no longer evaluating abstract risks. They are observing real use cases, real workflows, and real value.

That understanding changes the conversation dramatically.

When Governance Becomes an Accelerator

One of the most interesting dynamics I’ve seen around AI adoption emerged in exactly this way.

In one organization, the most enthusiastic advocacy for an AI video platform came from an unexpected place: legal.

Instead of focusing exclusively on the potential risks, the legal team saw an opportunity to communicate their own expertise differently. Complex topics such as contract clauses, negotiation pitfalls, and compliance considerations could suddenly be explained through short avatar-based videos that complemented existing written guidance.

A function that is often perceived as formal, procedural, and difficult to approach suddenly became surprisingly engaging.

The impact was immediate.

Employees who would normally treat legal guidance as something obligatory started paying attention. Messages that might otherwise have been ignored gained traction because the messenger had unexpectedly become a technology pioneer.

Legal had become, quite simply, cooler.

At the same time, this adoption created something even more valuable: clarity. Because legal was directly involved in using the technology, they were able to define guardrails for responsible use. That included clear guidance around the ethical use of avatars, transparency requirements when representing real individuals, and practical recommendations for safe communication.

The organization did not become less cautious.

It became more informed.

Momentum Changes the Governance Conversation

When discussions with information security and IT eventually took place, the dynamic had shifted.

The conversation was not about whether the technology should be allowed to exist in theory. Information security still did what information security always does: ask hard questions and evaluate potential risks.

But the discussion was no longer hypothetical.

There was already a responsible internal use case. There were already defined guardrails. There was already a part of the organization that understood both the capabilities and the risks.

Governance was no longer evaluating an unknown threat. It was evaluating an emerging capability.

That difference matters.

Trust Is the Hidden Variable

What often determines whether this kind of collaboration works is trust.

In some organizations, employees assume governance will shut down any experiment before it even begins. In others, governance assumes employees will inevitably misuse new tools if given the opportunity.

Both assumptions create distance.

But when governance participates in experimentation early enough, something different happens. Employees begin to see legal and information security not as obstacles but as partners in building something responsibly. And governance teams gain visibility into how people are actually working.

In cultures where skepticism around new technologies runs particularly deep, that trust can become the decisive factor for adoption.

Sometimes all it takes is one credible voice inside the organization to signal that responsible experimentation is not only possible, but encouraged.

“If they’re comfortable with this,” people begin to think, “maybe it’s worth exploring after all.”

Leadership’s Real Responsibility

This reveals something important about leadership in the age of AI.

Leadership does not need to choose between innovation and governance. The real challenge is creating an environment where both can evolve together.

Employees need the freedom to explore new capabilities and discover what those capabilities make possible. Governance needs the opportunity to understand those experiments early enough to shape them responsibly. And leadership needs the discipline to recognize when experimentation has created enough momentum to justify formal structure.

Governance works best when it stabilizes something that already exists.

Trying to govern innovation before it has demonstrated value rarely produces clarity. It produces hesitation.

The Moment Governance Finally Arrives

In the end, governance is not the starting point of adoption.

It is the moment an organization realizes the experiment is no longer just an experiment.

Someone has already demonstrated that the tool creates value. Colleagues are beginning to replicate the result. Conversations are spreading. Momentum has formed.

At that point, governance has a clear role: to ensure that the capability becomes safe, sustainable, and scalable.

Innovation and governance were never meant to be adversaries.

But the question every organization should ask itself is simple:

Have you given your employees enough trust to start that collaboration in the first place?
