Information in conflict settings in the age of AI – a humanitarian perspective (by Robert Mardini)
This is a good post by Robert Mardini, Director General of the International Committee of the Red Cross (ICRC). It covers a wide range of AI and other digital issues the ICRC is concerned about. He starts by setting out many of the risks that emerging technologies and information disruption operations bring to warfare, including the risks that land at the doorsteps of humanitarian actors when they fall prey to mis/disinformation and internet shutdowns. He explains how this can significantly disrupt their work, not least when that work depends on digital tools and connectivity. More significant still, once their work is slowed or stopped, the harms extend outward to the very people humanitarian actors are trying to support.
The quote below doesn’t cover all the risks Robert sets out, but it gives you a good sense of what the ICRC is worried about:
Digital forms of mis- and disinformation and hate speech utilising AI (such as deep fakes) are growing more quickly than our ability to keep pace with them. As they become increasingly diffuse and indiscriminate, it is increasingly difficult to distinguish truth from lies. Even those with expertise in content verification and open-source investigation are increasingly in disagreement with one another.
Our big concern is that in situations of conflict, the choices made on the basis of such information may be a matter of life or death. This is where resilience to these threats is lowest, and the likelihood of devastating consequences is the highest. We must remember too that communication and information is a form of aid itself. The right information at the right time and in the right format can save lives, prevent further harm and protect homes and livelihoods.
In terms of how the ICRC is responding to the stresses and strains of mis/disinformation (which must never be conflated or confused with legitimate criticism of the humanitarian sector), Robert offers this reflection before listing many of the legal obligations that place limits on wartime technology:
How humanitarian organisations respond to such issues – in a highly polarised and instrumentalised space – is complex, to say the least. We still have a long way to go in unpacking the limits, risks – and benefits – of digital technologies and tailoring our protection response accordingly.
I’ll leave it to readers to explore the rest of his post, but for those interested in a quick survey of how the ICRC thinks about digital transformation and armed conflict, it’s well worth reading in full.