Companies believe they are making rational decisions about AI. In reality, they are optimizing for what is safest to defend, not what creates the most value. That tension is the heart of the AI governance vs productivity dilemma.
The Rational Choice… and the BMW
When people choose a car, there is always the rational option. It is the family station wagon. Reliable, efficient, predictable, easy to justify. It gets you from A to B without surprises, and if anyone ever questions your decision, you can explain it in a sentence and move on. It represents everything that organizations tend to value on paper: consistency, safety, and the absence of awkward explanations afterwards.
And then there is the BMW. Choosing it is not irrational; it simply optimizes for something different. It is about how it feels to get into the car every morning, how it responds, how it changes your relationship with the road. It is harder to defend in a spreadsheet, but easier to justify once you have experienced it. It is not just transportation. It is performance, engagement, and a willingness to accept that better outcomes sometimes require a different kind of decision.
Organizations behave in exactly the same way when it comes to AI. They believe they are making rational, well-governed decisions grounded in security, compliance, and responsibility. But more often than not, they are not optimizing for performance. They are optimizing for defensibility. For the ability to explain a decision later without exposing themselves to unnecessary risk. And in that system, the family station wagon wins almost every time, not because it is better, but because it is easier to defend.
The System That Rewards the Safe Choice
To understand why this happens, you have to look at how decisions are actually made inside organizations, not how they are described in strategy decks. Saying no is cheap. It requires very little effort, carries almost no visibility, and distributes almost no personal risk. The decision is quick, the conversation ends, and there is no further obligation to engage. If something goes wrong later, you are not part of the story. You were the one who chose caution, and caution is rarely punished.
Saying yes, on the other hand, is expensive in ways that are not immediately visible but deeply felt by the people making the decision. It requires evaluation, alignment across teams, documentation, risk assessment, and often a degree of ongoing ownership that extends far beyond the initial approval. It means that if something fails, there is a clear line back to the decision that enabled it. It creates accountability in a way that saying no never does, and that alone is enough to shape behavior.
The result is predictable. Rational people inside this system will default to the cheaper option, not because they lack ambition or understanding, but because they are responding correctly to the incentives around them. The system does not reward the best decision. It rewards the most defensible one. And “no” is always easier to defend than “yes,” especially in environments where the cost of being wrong is visible and the value of being right is often delayed.
When “No” Looks Responsible (But Isn’t)
I experienced this dynamic in a way that was almost comical. We had a tool, Synthesia, that was placed on the “no” list almost by default. Not after a deep evaluation, not after a structured assessment of its risks and benefits, but because it represented something new, something that required effort to properly understand. The fastest way to close the loop was to block it.
From a distance, that decision looked responsible. It signaled caution, discipline, and control. But up close, it was simply convenient. It avoided the work of understanding the tool and deferred any need for deeper engagement. It was a decision optimized for speed, not accuracy.
The situation shifted when one of the strongest proponents of the tool turned out to be our regional legal counsel. At that point, the conversation could no longer be closed with a simple rejection. It had to be examined properly. And what followed was exactly what should have happened in the first place. The tool was evaluated, compliance concerns were addressed, and the actual risk profile was understood.
The important lesson was not that Synthesia was eventually allowed. It was that the initial decision had nothing to do with reality. “No” wasn’t safer. It was just faster. And without the right person pushing back, the company would have quietly rejected value by default.
Where the Cost Actually Goes
When organizations say no, they often assume that they have eliminated risk. In reality, they have simply moved it somewhere else. Employees do not stop trying to be productive. They adapt. They create workarounds. They use personal tools, manual processes, and fragmented systems to achieve the same outcomes in ways that are harder to track and often less secure.
This is where the Interface Tax becomes visible. Work slows down, not because it is inherently complex, but because it is artificially constrained. Information needs to be copied, reformatted, and reconstructed across systems that refuse to integrate properly. Context is lost between tools, forcing people to rebuild it repeatedly. Tasks that could be automated become manual again, not by necessity, but by design.
The risk does not disappear. It becomes harder to see. It moves into shadow IT, into unofficial workflows, into the invisible layer of “how work actually gets done” that rarely appears in governance discussions but defines the real operating model of the organization.
Why AI Governance vs Productivity Is an Incentive Question
At the center of this problem is not a bad decision, but a broken incentive structure. Infosec teams are measured by the risks they prevent. Business teams are measured by the outcomes they deliver. Both are acting rationally, both are doing their jobs, but the system they operate in is fundamentally misaligned.
One team is rewarded for stopping things from happening. The other is rewarded for making things happen faster. Alignment is almost impossible when success is defined in opposing terms. As long as these incentives remain separate, conflict is inevitable, and more importantly, the safest behavior will always dominate.
This is not a failure of individuals. It is a failure of design. The system produces exactly the behavior it incentivizes, and right now, it incentivizes caution over progress, defensibility over performance, and isolation over collaboration.
Why “Yes” Feels So Dangerous
There is also a deeply human layer to this dynamic that is often overlooked. Saying yes does not just allow progress. It creates ownership. It means that if something goes wrong, there is a clear and visible connection to the decision that enabled it. Saying no, by contrast, creates distance. It removes you from the chain of accountability and places the burden elsewhere.
In environments where accountability is unevenly distributed, this matters significantly. People naturally avoid decisions that increase their personal exposure, especially when the reward for taking that risk is unclear, delayed, or shared across many stakeholders. The consequence is a culture where enabling progress feels inherently more dangerous than blocking it, even when the opposite is true from a system perspective.
The Shift: From Gatekeepers to Co-Designers
If this dynamic is going to change, the role of Infosec needs to evolve. Not away from security, but towards system design. The objective should not be to approve or reject tools in isolation, but to define how they can be used safely within a broader, interconnected environment. This requires earlier involvement, deeper context, and a willingness to engage with how work actually happens, not just how it is supposed to happen.
Infosec teams should not be positioned as blockers, but as co-designers of a system where productivity and security reinforce each other. They should benefit from enabling better tools, not just from preventing their use. The question should not be whether something is allowed, but how it can be integrated in a way that balances risk and value over time.
What a “Safe Yes” System Looks Like
A more effective approach does not eliminate caution. It structures it. Instead of binary decisions, organizations can create environments for controlled experimentation where new tools are introduced under clear conditions. Sandbox setups, conditional approvals, monitored usage, and continuous feedback loops allow organizations to learn in practice, rather than speculate in theory.
To understand what this looks like in reality, imagine a team identifying a new tool that could significantly improve how they work. In a traditional setup, that request would travel through layers of approval, each one optimized to minimize risk and effort, often resulting in a delayed or negative outcome. In a system designed for “safe yes,” the same request triggers a different response. Instead of being evaluated as a binary decision, it is treated as a hypothesis. What value could this tool unlock? What risks does it introduce? And how can those risks be constrained in a controlled environment?
The tool is not rolled out immediately across the organization. It is introduced in a limited, observable context. A sandbox environment is created, access is granted to a defined group, and clear boundaries are established around what data can be used and how. Infosec is not reviewing from the outside, but actively shaping the conditions of use, ensuring that guardrails are in place without suffocating the experiment itself.
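To make the shape of such an approval concrete, here is a minimal sketch of what a conditional approval could look like when expressed as data rather than as a yes/no flag. This assumes a hypothetical internal policy record; every field name and value below is illustrative, not part of any real product or framework.

```python
from dataclasses import dataclass, field
from datetime import date

# A hypothetical "conditional approval" record: instead of a binary
# allow/deny decision, the tool is admitted under explicit, reviewable terms.
@dataclass
class SandboxPolicy:
    tool: str
    pilot_group: list[str]      # who may use it during the trial
    allowed_data: list[str]     # data categories permitted in the tool
    forbidden_data: list[str]   # hard boundaries, non-negotiable
    review_date: date           # the approval expires unless re-reviewed
    monitoring: list[str] = field(default_factory=list)

# Illustrative values only; the point is that the "yes" carries its
# guardrails with it instead of living in a spreadsheet somewhere.
synthesia_pilot = SandboxPolicy(
    tool="Synthesia",
    pilot_group=["L&D team", "regional legal counsel"],
    allowed_data=["public marketing copy", "internal training scripts"],
    forbidden_data=["customer PII", "financial records"],
    review_date=date(2025, 9, 1),
    monitoring=["usage volume", "data categories uploaded"],
)
```

The design choice that matters here is the review date: the approval is not a permanent verdict but a commitment to look again once real usage data exists.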
As the tool is used, data is collected. Not just technical metrics, but behavioral ones. How do people actually interact with it? Where does it create value? Where does it introduce friction or unexpected risk? Instead of guessing upfront, the organization learns in motion. Decisions are no longer based solely on theoretical risk assessments, but on observed reality.
At the same time, the experience of the people using the tool begins to change. Instead of fighting interfaces, copying information between systems, and reconstructing context manually, they start to experience what it feels like when tools are allowed to operate as intended. Work becomes more continuous, less fragmented. The overhead that previously defined their day begins to shrink.
This is where the connection to hardware becomes visible as well. A smartwatch that surfaces relevant information at the right time, a phone that drafts responses in context, a laptop that structures knowledge automatically, and systems that share context across devices are no longer isolated improvements. They become part of a coherent environment where the interface itself starts to disappear.
Over time, the organization builds confidence. Not because risk has been eliminated, but because it has been understood, constrained, and managed in a structured way. The initial experiment expands, the guardrails evolve, and what started as a controlled trial becomes a new standard. This is what “safe yes” looks like in practice. It is not reckless. It is not slow. It is deliberate, adaptive, and fundamentally more aligned with how modern technology actually works.
The Companies That Will Win
The organizations that benefit most from AI will not be the ones with the most advanced models. They will be the ones that design their systems to allow those models to be used effectively. They will invest upfront in the cost of saying yes well, understanding that this effort is not overhead, but infrastructure.
They will accept that there is an initial investment, both in effort and in alignment, that pays off over time as systems become more capable and less restrictive. They will understand that risk cannot be eliminated, only managed, and that managing it well requires participation, not avoidance.
These organizations will move faster, not because they are reckless, but because they are structured differently. They will not be defined by the tools they use, but by the environment they create around them.
The Real Question
In the end, this is not a technology problem. It is a decision-making problem, a design problem, and ultimately a cultural problem. It comes down to a simple but uncomfortable question: are you optimizing for avoiding mistakes, or for enabling progress safely?
Or, to put it differently, are you choosing the family station wagon because it is the best option, or because it is the easiest one to defend? And if that is the case, what would it take to be the person, or the team, that dares to choose the BMW?