Why Your Best People Are the Biggest Obstacle to AI Adoption — And What to Do About It
The resistance that kills enterprise AI programmes rarely comes from the bottom of the org chart.
When enterprise AI programmes stall, leadership tends to reach for familiar explanations. The data wasn't clean enough. The vendor overpromised. The integration was more complex than anticipated. These are convenient answers because they point outward — at systems, at suppliers, at circumstances. The harder conversation is the one about people. Specifically, about the people organisations rely on most.
The uncomfortable truth about AI resistance
In our experience working with complex organisations through AI adoption, the most consistent pattern we observe is this: the individuals who create the most friction around AI are rarely disengaged employees or change-resistant middle managers. They are, more often, the high performers. The subject-matter experts. The people whose judgement the organisation trusts most deeply.
Understanding why this happens — and what to do about it — is one of the most important things a leadership team can get right.
"The individuals who create the most friction around AI are rarely disengaged employees. They are, more often, the high performers."
Why high performers resist
It is tempting to frame resistance as fear. Fear of redundancy, fear of irrelevance, fear of being replaced by a system that does not understand the nuance they have spent years mastering. There is truth in this, but it is an incomplete picture — and acting on it leads to the wrong interventions.
High performers resist AI adoption for three more substantive reasons.
First, their value is inseparable from their knowledge. AI, in their eyes, is not a tool that augments that knowledge; it is a system that attempts to commoditise it. That is a rational concern, not an irrational fear.
Second, high performers apply rigorous standards to their own work. When AI produces something shallow or imprecise, they conclude it is not fit for purpose, and they refuse to associate their credibility with outputs they do not trust.
Third, AI adoption is most compelling to people who feel constrained by existing tools. High performers have found ways to operate well regardless. For them, the perceived upside is lower and the perceived risk to their standing is higher.
The organisational dynamic this creates
Left unaddressed, this dynamic produces a specific and damaging pattern. High performers signal — sometimes overtly, more often through behaviour and tone — that AI tools are not to be taken seriously. Because these individuals carry influence, their scepticism is interpreted as informed judgement rather than personal resistance. Teams take their cue accordingly.
Adoption stalls not through active opposition but through the quiet withdrawal of credibility. Meanwhile, the people who do engage enthusiastically with AI tools are often those with less organisational standing. Their early use cases are visible, imperfect, and easily criticised. The critics oblige. This reinforces the narrative that AI produces mediocre work, and the cycle continues.
Leadership, watching adoption metrics fail to move, typically responds with more communication, more training, or more mandated use. None of these interventions address the actual dynamic. They accelerate the theatre of adoption without changing the underlying reality.
What the most effective organisations do differently
The organisations that navigate this well share a common approach: they make their best people the architects of AI adoption, not the audience for it.
This is not simply a matter of involvement or consultation. It is a structural decision about where design authority sits. When a senior underwriter, a specialist legal counsel, or a veteran operations lead is given genuine responsibility for determining how AI integrates into their domain — including the authority to reject approaches that do not meet their standards — the dynamic shifts entirely.
Several things happen. Their expertise shapes the implementation in ways that make it genuinely better. Their ownership removes the psychological distance between their professional identity and the AI-augmented version of their role. Their visible engagement changes the signal their teams receive. And critically, they have skin in the outcome.
This approach requires leadership to accept something difficult: the experts may slow things down, insist on higher standards, and push back on vendor timelines. In the short term, this is frustrating. In the medium term, it is the difference between adoption that holds and adoption that quietly collapses.
The questions worth asking now
If you are in the early or middle stages of an enterprise AI programme, three questions are worth sitting with honestly.
Where does design authority for AI actually sit: with the experts whose domains it will reshape, or with a programme team that merely consults them?
What signal are your most trusted people currently sending their teams about AI, in behaviour and tone as much as in words?
Who is engaging with AI most visibly today, and does their organisational standing lend that work credibility?
The answers will tell you more about the likely trajectory of your programme than any adoption metric currently on your dashboard.