Personalisation at Scale Has a Trust Problem — and AI Just Made It Worse
The more capable AI becomes at knowing your customers, the less comfortable those customers feel being known. Understanding this paradox is now a strategic imperative.
Something strange is happening at the frontier of consumer AI. The technology is becoming genuinely remarkable — able to anticipate preferences, pre-empt needs, and deliver experiences that feel almost uncannily relevant. And yet consumer trust in the companies deploying it is not rising to match. In many categories, it is going the other way. The more personalised the experience becomes, the more unsettled people feel receiving it. This is the central paradox of AI-driven personalisation, and most organisations are not close to resolving it.
The personalisation promise — and the bill that came with it
For two decades, personalisation has been the dominant ambition of consumer-facing technology. The logic was elegant: understand your customer better, serve them more relevantly, and loyalty will follow. In aggregate, this proved broadly true. Recommendation engines drove meaningful revenue uplift. Tailored communications outperformed generic ones. The data spoke clearly enough that investment continued, accelerated, and compounded.
What the data did not surface — because it was gradual, diffuse, and hard to attribute — was a growing sense of unease accumulating on the other side of those interactions. Customers were receiving better experiences in a narrow, transactional sense. But they were also becoming aware that the conditions enabling those experiences — constant data collection, persistent inference, opaque profiling — were not things they had consciously agreed to. Personalisation had been improving the transaction while quietly eroding the relationship.
AI has not created this tension. But it has accelerated it to the point where it can no longer be managed incrementally. The capabilities that seemed theoretical eighteen months ago — real-time behavioural inference, dynamic pricing calibrated to individual willingness to pay, content sequenced to maximise engagement rather than satisfaction — are now in production. And consumers, even without detailed knowledge of how these systems work, are beginning to sense something they cannot quite name.
"Consumers cannot always articulate what makes an AI interaction feel extractive rather than helpful. But they feel the difference — and they act on it."
The evidence that should concern any personalisation strategy
The trust gap is not anecdotal. A consistent pattern has emerged across consumer research in recent years: the majority of consumers express discomfort with the level of data collection underpinning personalised experiences, even when they acknowledge those experiences are useful. The two feelings coexist — and they do not cancel each other out.
The asymmetry in that research is what deserves the most attention. Getting personalisation wrong in the direction of irrelevance is a missed opportunity. Getting it wrong in the direction of intrusiveness actively damages trust, and does so roughly twice as severely. AI, by making personalisation more capable, has also made the cost of misjudgement higher.
Three forces compounding the problem
The trust challenge is structural, not executional. It is not primarily about bad actors or poor data security. It is about three forces that have been building for years and are now operating simultaneously.
The inference gap. AI systems can now infer things consumers have never disclosed — financial stress, health concerns, relationship status, emotional state. When these inferences surface in product or content decisions, the effect is not just surprising. It is, for many people, a violation. The gap between what was shared and what was inferred has become too wide to feel comfortable.
The collapse of consent. The consent infrastructure built around personalisation — cookie banners, preference centres, privacy policies — has collapsed under its own weight. Consumers click through without reading. The legal record of consent exists; the informed consent it is supposed to represent does not. Regulatory frameworks like the EU AI Act are beginning to reflect this gap.
Optimisation for extraction. Dynamic pricing algorithms calibrated to individual willingness to pay are the clearest example of personalisation that serves the organisation, not the customer. When AI's personalisation capabilities are primarily used to optimise extraction rather than experience, consumers sense it — even when they cannot prove it. This is the moment trust becomes a competitive variable.
"Ethical personalisation is not a constraint on commercial ambition. It is increasingly the condition for it."
What ethical personalisation could actually look like
The phrase "ethical personalisation" risks becoming precisely the kind of corporate language that signals a problem is being managed rather than solved. It is worth being specific about what it means in practice, because the organisations getting this right are not doing so through values statements — they are doing so through design decisions.
Legible data exchange. Customers should be able to understand, in plain terms, what data is being collected and what it enables — not in legal language buried in a policy document, but in the interface itself, at the moment of exchange. This is a design problem as much as a legal one, and it is one that AI can actually help solve: interfaces that explain their own inference in natural language are now technically feasible, as the first sketch after this list suggests.
Personalisation that serves the customer's goals, not the organisation's. The test for any personalisation decision should be: does this make the customer's life genuinely better, or does it primarily serve our conversion or retention targets at the customer's expense? These are not always in conflict. But when they are, which one wins is a cultural and governance question as much as a commercial one.
Real control, not performative control. Preference centres that are difficult to find, harder to use, and have no discernible effect on what the customer experiences are not a solution — they are a liability. Genuine control means that when a customer limits data use, that limitation is respected in a way they can observe. AI makes this harder to do at scale; it also makes the failure to do it more visible. The second sketch below shows one way that visibility could be engineered in.
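To make the first of these principles concrete, here is a minimal sketch in TypeScript of a personalisation surface that explains its own inference. Every name in it (Signal, Explanation, explain) is illustrative rather than any real API; the point it demonstrates is that the disclosure can travel with the recommendation itself, in language a customer would actually read, instead of living in a policy document.

```typescript
// Sketch: a recommendation ships with a machine-readable account of the
// signals behind it, which the interface renders as plain language at the
// moment of exchange. All types and names here are hypothetical.

type Signal = {
  source: "declared" | "observed" | "inferred"; // how the organisation came to hold it
  name: string;                                 // e.g. "preferred category"
  plainLanguage: string;                        // a sentence a customer would accept as honest
};

type Explanation = {
  summary: string;
  signals: Signal[];
};

// Build the explanation that accompanies a recommendation, surfacing
// inferred signals explicitly rather than hiding them.
function explain(signals: Signal[]): Explanation {
  const inferred = signals.filter((s) => s.source === "inferred");
  return {
    summary:
      inferred.length === 0
        ? "Recommended using only information you gave us directly."
        : `Recommended partly from ${inferred.length} thing(s) we inferred about you, listed below.`,
    signals,
  };
}

// Example: the inference is disclosed at the point of use.
const why = explain([
  { source: "declared", name: "size", plainLanguage: "You told us your size." },
  { source: "inferred", name: "budget band", plainLanguage: "We estimated a price range from your browsing." },
]);
console.log(why.summary);
why.signals.forEach((s) => console.log(`- ${s.plainLanguage} (${s.source})`));
```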
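And a companion sketch for the third principle, real control, under the assumption that every personalisation signal is tagged with the consent scope it requires. The ConsentScope and PersonalisationSignal types and the applyConsent function are hypothetical; what matters is that the customer's limits are enforced before any model sees the data, and that the exclusions are returned to the interface so the limitation is observable.

```typescript
// Sketch: consent scopes gate personalisation signals, and the result
// carries an account of what was excluded and why, so the customer can
// see their preference taking effect. All names are hypothetical.

type ConsentScope = "behavioural" | "inferred" | "third_party";

type PersonalisationSignal = {
  name: string;
  requires: ConsentScope; // the consent scope this signal depends on
  value: unknown;
};

type FilterResult = {
  used: PersonalisationSignal[];
  excluded: { name: string; reason: string }[]; // surfaced in the UI, not just logged
};

// Enforce the customer's limits before any downstream model runs, and
// return what was excluded so the interface can show the effect.
function applyConsent(
  signals: PersonalisationSignal[],
  granted: Set<ConsentScope>
): FilterResult {
  const used: PersonalisationSignal[] = [];
  const excluded: FilterResult["excluded"] = [];
  for (const s of signals) {
    if (granted.has(s.requires)) {
      used.push(s);
    } else {
      excluded.push({ name: s.name, reason: `you have turned off ${s.requires} data` });
    }
  }
  return { used, excluded };
}

// Example: consent for inferred data has been withdrawn, and the customer can see it.
const result = applyConsent(
  [
    { name: "recent views", requires: "behavioural", value: ["boots"] },
    { name: "estimated income band", requires: "inferred", value: "B" },
  ],
  new Set<ConsentScope>(["behavioural"])
);
result.excluded.forEach((e) => console.log(`Not used: ${e.name} (${e.reason})`));
```

The design choice that does the work is returning the exclusions to the interface rather than burying them in an audit log: that is what turns a preference setting from a performative control into one the customer can verify.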
The regulatory context is changing faster than most strategies
For organisations that have deferred this conversation in the expectation that the regulatory environment will stabilise before it demands action, that expectation is no longer safe. The EU AI Act's provisions on high-risk AI applications, the continuing evolution of GDPR enforcement, and the emerging frameworks around automated decision-making are not independent developments — they are the leading edge of a sustained shift in how governments are choosing to balance commercial capability against consumer protection.
The organisations that will navigate this shift most effectively are not those with the most sophisticated compliance infrastructure. They are those that have resolved the underlying question — whose interests does our personalisation strategy ultimately serve — before the regulator asks it on their behalf.
That question, answered honestly, tends to surface a gap between what organisations say about customer centricity and the design decisions that reflect their actual priorities. Closing that gap is not primarily a legal exercise. It is a strategic one.
Questions that will not wait for the next strategy cycle
If you are responsible for personalisation strategy, customer experience, or AI deployment in a consumer-facing context, these are the questions worth pressure-testing now.
Could your customers describe, in plain terms, what data you collect about them and what it enables?
When the customer's interests and your conversion or retention targets conflict, which wins, and who decides?
If a customer limited their data use today, would they be able to observe the difference?
Whose interests does your personalisation strategy ultimately serve, and do your design decisions bear that answer out?
Ethical personalisation is not a constraint on commercial ambition. It is increasingly the condition for it. The organisations that recognise this first will have a meaningful advantage over those that reach the same conclusion under regulatory or reputational pressure.