Friday 5

AI, conflict, and rethinking responsibility

20 March, 2026

The early weeks of the US-Israel-Iran war have exposed an unsettling reality: a parallel conflict unfolding not on the ground, but online. A flood of AI‑generated images and videos, depicting explosions that never occurred, devastated neighbourhoods that were never hit, and soldiers who never existed, is sweeping across social media platforms.

In a conflict already charged with geopolitical complexity, this synthetic content can deepen public confusion, fuel fear, and amplify tensions. As researchers have noted, these tools enable anyone, anywhere, to fabricate a war scene in minutes.

Across X, TikTok, Facebook and private messaging apps, these AI fakes have been viewed millions of times. So far, responses from social media platforms remain inconsistent and reactive. One recent example comes from Elon Musk’s X, which announced it would temporarily suspend revenue for accounts sharing AI‑generated “armed conflict” content without proper labelling. However, synthetic content often circulates widely long before moderation systems identify it, highlighting a gap between the speed of technological change and the slower pace of platform governance.

This raises a critical question for business: how should companies respond when their tools can be repurposed in ways that intensify conflict or undermine public trust? The recent stand‑off between AI developer Anthropic and the US government illustrates this tension. Anthropic declined to provide defence agencies with unrestricted access to its models, citing concerns about surveillance and autonomous weapons. The Trump administration’s resulting decision to designate the company a “supply chain risk” underscores the pressure companies can face when attempting to prioritise safety and responsible governance in politically charged moments.

For tech companies, these questions about the use and misuse of their tools are fast becoming the ultimate test of responsibility. The environment is complex and fast‑moving, but this is exactly when firms need to draw on every resource available: from engaging diverse stakeholders to exploring new forms of governance, transparency and moderation. Working with partners, experts, governments and critical friends will be essential. As AI becomes ever more embedded in how conflicts are seen and understood, responsible leadership will depend on companies’ willingness to act thoughtfully, collaboratively and with clear intent.

By Hillevi Fock
