Friday 5

Invisible influence

17 April, 2026

Chances are you’ve used AI recently, perhaps to write an email, plan a trip, or answer a question. It is quick, useful, and efficient, producing the outputs you want in seconds. You’re probably also aware of the debate raging around it – particularly over whether it’s accurate, biased, and safe.

But a recent open letter published on Wellcome Open Research argues that something important is missing from the conversation: the discipline of behavioural science. And its absence is particularly relevant given that AI is not just producing outputs, but starting to shape behaviour too.

The fact that AI shapes behaviour is becoming increasingly obvious. In a previous Friday 5, we looked at how AI chat conversations are changing the way people seek reassurance, validation, and advice, including in important areas like healthcare, education, and finance. But, as the letter argues, these behavioural effects are less examined, and show up less in governance models and in debates around the ethics and rights of AI. As the authors point out, systems may perform well on technical metrics while still shaping behaviour in ways that undermine people’s interests or wellbeing.

Behavioural science studies how people think, feel, and act. It offers well-established methods for understanding influence, motivation, trust, and decision making. We at Good Business know this well, and use many behavioural techniques in our SKY Girls social marketing programme.

This means behavioural science is ideally positioned to contribute to the way AI is developed, used, and evaluated. The best approach would be one where behavioural considerations are integrated systematically across the AI lifecycle, drawing on existing expertise and methods. The letter recommends bringing behavioural scientists into AI design from the start, and building behavioural evaluation into how systems are tested and monitored over time, not just at launch.

And from a governance perspective, there is a need to explicitly recognise behavioural safety as part of responsible AI. Clear accountability, access to appropriate expertise, and transparency about behavioural evaluation would all go towards supporting the development of AI systems that are more trustworthy and effective in real-world use.

We know that most products and services shape behaviour in varied ways. AI has the potential to do so faster, more frequently, and with greater impact. The question is not whether that influence exists, but whether we use the tools we have to take responsibility for it.

By Siri Venkatesh