55 When Algorithms Decide for Us
AI-Driven Persuasion, Personalisation, and the Future of Choice
Behavioural economics revealed that humans rely on shortcuts. Artificial intelligence takes this insight and scales it beyond anything earlier economists could imagine.
Traditional advertising spoke to crowds. AI-driven advertising speaks to individuals.
Every click, pause, scroll, purchase, search, or hesitation becomes data. Algorithms do not need to understand you as a person. They only need to recognise patterns. Over time, they learn which images trigger attention, which words create urgency, which prices feel acceptable, and which moments make you vulnerable to influence.
This is not science fiction. It is how modern markets already operate.
Platforms do not simply show adverts. They predict behaviour. They test variations constantly. Thousands of messages are tried simultaneously, adjusted in real time, and refined based on response. The message you see is not chosen because it is informative, but because it is statistically effective for someone like you, at that moment.
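The "constant testing" described above can be illustrated with a toy version of a multi-armed bandit, a standard technique for this kind of real-time optimisation. This is a hedged sketch, not any platform's actual system: the variant names and their click probabilities are invented, and real systems are vastly more complex.

```python
import random

random.seed(0)  # fixed seed so the simulation is reproducible

# Invented ad variants with invented "true" click rates (unknown to the system)
TRUE_CLICK_RATE = {"calm": 0.02, "urgent": 0.05, "scarcity": 0.08}

counts = {v: 0 for v in TRUE_CLICK_RATE}   # times each variant was shown
clicks = {v: 0 for v in TRUE_CLICK_RATE}   # clicks each variant earned

def choose(epsilon=0.1):
    """Epsilon-greedy: mostly show the best-performing variant, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(list(TRUE_CLICK_RATE))
    return max(counts, key=lambda v: clicks[v] / counts[v] if counts[v] else 0.0)

for _ in range(20_000):                    # simulated impressions
    v = choose()
    counts[v] += 1
    clicks[v] += random.random() < TRUE_CLICK_RATE[v]   # simulated user response

print(counts)  # traffic drifts toward whichever variant proves most "effective"
```

The system never needs to understand why one framing works. It only needs to observe that it does, and shift impressions accordingly.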
From a microeconomic perspective, this changes the nature of demand.
Demand no longer reflects stable preferences waiting to be discovered. It becomes something that is actively shaped. What people want is influenced by what algorithms decide to show them. The boundary between preference and persuasion blurs.
AI does not just respond to heuristics. It learns them.
If loss aversion works, the system amplifies it.
If scarcity increases clicks, scarcity is simulated.
If emotional content spreads faster, emotion is prioritised.
Over time, markets become optimised not for welfare, truth, or quality, but for engagement.
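The amplification loop just described can be made concrete with a minimal sketch. The tactic names and engagement multipliers below are invented for illustration; the point is only the dynamic: content that engages is reinforced, so its share of attention compounds regardless of accuracy or welfare.

```python
# Invented engagement multipliers: scarcity cues and emotional framing
# "work" better than plain information in this toy example
ENGAGEMENT = {"informative": 1.0, "emotional": 1.4, "scarcity": 1.6}

weights = {t: 1.0 for t in ENGAGEMENT}   # start with no preference among tactics

for _ in range(10):                      # each round of optimisation
    for t in weights:
        weights[t] *= ENGAGEMENT[t]      # reinforce in proportion to engagement

total = sum(weights.values())
shares = {t: round(weights[t] / total, 3) for t in weights}
print(shares)  # the feed is dominated by the most engaging tactic
```

After only ten rounds of compounding, the informative tactic is nearly invisible. Nothing in the loop asks whether the scarcity is real or the emotion is warranted.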
This creates a new form of power.
Large firms gain an advantage not simply because they produce better goods, but because they control data, infrastructure, and attention. Smaller firms struggle to compete, not on price or quality, but on visibility. Entry barriers rise silently. Markets appear competitive on the surface, while becoming increasingly concentrated underneath.
For consumers, choice remains formally intact. No one is forced to buy. Yet choices are filtered, ranked, nudged, and framed before they ever appear. What feels like freedom is often a carefully designed path.
This raises a profound democratic question.
If citizens’ preferences are continuously shaped by opaque systems optimised for profit, can markets still be said to reflect collective will? Or do they increasingly reflect the objectives of those who design and control the algorithms?
AI-driven persuasion also affects politics, culture, and public discourse. The same tools used to sell products can be used to sell ideas, identities, and fears. When persuasion becomes personalised, shared reality weakens. People no longer respond to common messages, but to tailored narratives that reinforce existing beliefs.
This fragments society.
From the perspective of Better Together, this is a critical turning point. Markets and democracies both depend on trust, transparency, and shared understanding. When influence becomes invisible and asymmetric, those foundations erode.
None of this implies that AI must be rejected. Like all technologies, its effects depend on governance.
AI can be used to support better choices. It can highlight sustainable options, reduce waste, personalise learning, or help consumers compare products honestly. But this requires rules. Transparency about targeting. Limits on manipulation. Accountability for outcomes.
Democratic societies face a choice.
They can treat AI-driven persuasion as a private optimisation problem, leaving firms free to extract value from attention and bias. Or they can treat it as a collective issue, recognising that autonomy, dignity, and trust are public goods that markets alone will not protect.
At the micro level, firms decide how far to push behavioural insights. At the macro level, societies decide where persuasion ends and manipulation begins.
This is not a technical question. It is a moral and political one.
And it brings us back to the central theme of this book. Markets work best when individual initiative is aligned with collective well-being. AI makes misalignment easier. Governance makes alignment possible.
Understanding AI-driven persuasion is therefore not optional. It is essential for anyone who wants to participate responsibly in modern markets, whether as a consumer, entrepreneur, manager, or citizen.
Because when algorithms learn how to influence us better than we understand ourselves, protecting freedom requires more than choice. It requires awareness, institutions, and shared responsibility.
Further Reading and Exploration
AI, persuasion, and behavioural targeting
- Zuboff, S., The Age of Surveillance Capitalism
- Varian, H., “Artificial Intelligence, Economics, and Industrial Organization”
Behavioural manipulation and ethics
- Akerlof, G. and Shiller, R., Phishing for Phools
- Sunstein, C., Too Much Information
Markets, democracy, and digital power
- Khan, L. M., “Amazon’s Antitrust Paradox”
- Acemoglu, D. and Johnson, S., Power and Progress
Regulation and governance
- OECD, AI, Data Governance and Privacy
- EU Commission, Digital Markets Act and Digital Services Act