In the shadow of a crumbling building in a war-torn city, a young woman clutches her child, her eyes reflecting a tapestry of fear, hope, and resilience. Miles away, an AI system sifts through thousands of such images and narratives, constructing a digital mosaic of the crisis. This is the new frontier of storytelling in our digital age, where artificial intelligence (AI) plays a pivotal role in documenting human experiences, just as the Danish Refugee Council is doing with the ongoing humanitarian crisis in Nigeria, Niger, and Mali. But as AI becomes an ever-present narrator of our times, the ethical concerns surrounding its use in humanitarian contexts multiply, particularly around consent and the potential for harm.
The extensive availability of humanitarian data has significant implications for the development and training of AI models, particularly in how these models understand and interpret complex social and humanitarian issues. For instance, AI tools were deployed to analyse and categorise the vast volume of user-generated videos and images shared during the early stages of the Syrian uprising, which began as protests and intensified into armed conflict in 2011, offering historical insight into the on-ground situation and a glimpse of what can be done with humanitarian data.
In the context of a humanitarian crisis, the widespread dissemination of images and videos, however well-intentioned, significantly increases the likelihood of text and data mining for the training of AI tools. I am concerned that this raises issues of privacy and of the portrayal of personal suffering, because the people who share this information may not understand that content posted on social media is open to harvesting without consent. A social media user raising awareness about their situation during a crisis may not have agreed to their data being used in the long term, especially for purposes they are unaware of. This is particularly troubling when personal or sensitive data is involved, as is often the case with images and videos, which can expose much about a person’s private life and perpetually portray them in a crisis context rather than reflecting their broader identity.
Given that a large portion of the humanitarian images and videos available originates from the Global South, particularly African nations, using these for the curation and training of AI models could lead to biases in the outputs of generative AI. A good example of this is that, at the time of writing, DALL-E 3 still depicts a happy, wealthy child as a Caucasian toddler.
In another example, during the Rohingya crisis in Myanmar, which intensified around 2017, AI tools helped document human rights abuses. However, Meta faced criticism for the propagation of harmful anti-Rohingya content and for a lack of transparency about how its content algorithms operate.
While digitisation serves as a powerful evidentiary tool for advocacy, it has also sparked debates about the consent of those whose suffering is being digitised and shared worldwide. I strongly believe that the widespread circulation of images of suffering portrays Africa as a hub of poverty, opening its inhabitants up to exploitation by big corporations looking to harvest human data for AI model training.
A recent example of this exploitation occurred in Kenya. In July 2023, Worldcoin, co-founded by Sam Altman of OpenAI, offered Kenyans about $50 to have their irises scanned for its identity-verification system. Within a week, 350,000 people had signed up before the Kenyan government halted the programme over ethical concerns raised by activists.
Consider the recent developments in the Israeli-Palestinian conflict, where AI’s role is twofold and controversial. On the one hand, an AI tool known as ‘the Gospel’ has reportedly been used by Israel to identify targets in Gaza. First deployed during the 11-day war in Gaza in May 2021, the Gospel uses machine learning and advanced computing to identify military targets, and it now plays a central role in Israel’s war in Gaza, significantly accelerating the production of targets in a process officials have likened to a “factory.” The system is claimed to reduce collateral damage and improve efficiency, yet this shift in operational strategy has resulted in increased civilian casualties and the destruction of infrastructure deemed essential for civil society. Critics emphasise that this is just the beginning of AI’s role in military operations, and that its use not only exacerbates the humanitarian crisis but also raises profound ethical questions about the role of AI in warfare, underscoring the growing importance of ethical considerations and transparency in military applications of artificial intelligence.
Conversely, in the digital realm, AI moderation systems used by major tech companies have been accused of silencing Palestinian voices. Human Rights Watch reports instances of Meta and TikTok removing content related to Palestine, disrupting the documentation and sharing of human rights abuses. Such practices highlight the issues of bias and censorship embedded in AI algorithms, illustrating the critical need for ethical considerations in AI’s application in crisis communication.
The United Nations, recognising the complex dynamics of the conflict, hired CulturePulse, an AI firm founded by F. LeRon Shults and Justin Lane, to develop an AI model to analyse the situation. The idea is a sophisticated model that acts as a digital twin, simulating the diverse perspectives of fifteen million people in Israel and the Palestinian territories in order to analyse communication strategies. It is not aimed at conflict resolution; its primary goal is to provide deeper insight into the conflict’s nature and into possible communication strategies, showcasing a more constructive use of AI in understanding and potentially mitigating crises. While this is innovative, it is presumptuous, and it is unclear what good it would achieve in a high-risk conflict situation. I am worried that this new tool will take away the humanity required in crisis communication.
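To give a concrete, if deliberately simplified, sense of what a “digital twin” of public sentiment might look like, consider the toy sketch below. It is not CulturePulse’s model, whose internals are not public; the agents, their attributes, and the update rule are all invented purely for illustration.

```python
import random
from dataclasses import dataclass

# Toy agent-based sketch of a "digital twin" of public sentiment.
# NOT CulturePulse's model: attributes and update rules are invented.

@dataclass
class Agent:
    stance: float    # -1.0 (strongly opposed) .. 1.0 (strongly supportive)
    openness: float  # 0.0 .. 1.0, how readily the agent shifts toward peers

def step(agents, messaging_effect=0.0):
    """One simulation step: each agent nudges toward a random peer's stance,
    plus an optional small shift from a hypothetical communication campaign."""
    for agent in agents:
        peer = random.choice(agents)
        agent.stance += agent.openness * 0.1 * (peer.stance - agent.stance)
        agent.stance = max(-1.0, min(1.0, agent.stance + messaging_effect))

def average_stance(agents):
    return sum(a.stance for a in agents) / len(agents)

if __name__ == "__main__":
    random.seed(0)
    population = [Agent(stance=random.uniform(-1, 1),
                        openness=random.uniform(0, 1)) for _ in range(1000)]
    for _ in range(50):
        step(population, messaging_effect=0.001)  # hypothetical campaign nudge
    print(f"Average stance after simulation: {average_stance(population):+.3f}")
```

Even this crude version shows why I am sceptical: the outputs depend entirely on the assumptions baked into the agents, not on the lived experience of the people being simulated.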
At the heart of these concerns is the balance between the invaluable insights AI offers and the risk of turning personal suffering into a spectacle without consent. AI algorithms trawl through social media posts, photos, and videos, weaving a rich tapestry of the human condition in times of crisis. These narratives are crucial: they inform humanitarian responses, shape public opinion, and sometimes influence policy decisions. Yet AI risks commodifying personal tragedies without the explicit consent of the individuals whose stories are being told.

The ongoing debate in this realm centres on the ethical use of AI in crisis reporting. War narratives have traditionally been shaped by specific agendas and perspectives, and AI, constrained by its programming and the data it is fed, can inadvertently reinforce these one-sided views, failing to capture the multifaceted nature of war and its impact on diverse populations. Critics argue that AI, while efficient, may lack empathy and contextual understanding, potentially leading to the misrepresentation or exploitation of personal narratives. Proponents, however, highlight AI’s ability to bring global attention to underreported crises and advocate for more nuanced, culturally aware AI systems.
As the ‘Giant of Africa’, Nigeria is no stranger to AI for good. In 2021, with the support of the World Bank, it constructed detailed poverty maps that integrated survey information with data and images from public sources like Google and Meta. These maps offer a more nuanced understanding of poverty distribution, allowing for targeted interventions where they matter most. Nigerians greeted the high-resolution poverty maps with a mix of hope, curiosity, and scepticism, the scepticism centring on the accuracy and reliability of satellite imagery. The communications minister, Bosun Tijani, has articulated an ambitious vision for AI’s role in Nigeria, aiming to leverage it for improved efficiency across various sectors, especially the humanitarian sector. However, this vision has been met with criticism by some Nigerians who deem the focus on AI premature, advocating instead for the prioritisation of fundamental issues like electricity, food security, and poverty alleviation.
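To illustrate, in very simplified terms, how survey data and imagery-derived indicators can be combined, the sketch below fits a regression model on synthetic grid-cell features (stand-ins for signals such as night-time light intensity or building density) to predict a surveyed poverty rate. The features, data, and model choice are assumptions made for illustration; the World Bank’s actual methodology is considerably more sophisticated.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Synthetic stand-ins for imagery-derived indicators per map grid cell
# (e.g. night-time lights, building density, road length). Illustration only.
rng = np.random.default_rng(42)
n_cells = 2000
night_lights = rng.gamma(shape=2.0, scale=1.5, size=n_cells)
building_density = rng.uniform(0, 1, size=n_cells)
road_length_km = rng.exponential(scale=3.0, size=n_cells)

X = np.column_stack([night_lights, building_density, road_length_km])
# Synthetic "surveyed" poverty rate: poorer where lights and buildings are sparse.
poverty_rate = np.clip(
    0.8 - 0.05 * night_lights - 0.3 * building_density
    + rng.normal(0, 0.05, size=n_cells), 0, 1)

X_train, X_test, y_train, y_test = train_test_split(
    X, poverty_rate, test_size=0.25, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
preds = model.predict(X_test)
print(f"MAE on held-out cells: {mean_absolute_error(y_test, preds):.3f}")
```

The appeal of this kind of approach is that, once a model is trained on surveyed locations, it can estimate poverty for cells that surveys never reached; the scepticism about satellite accuracy is precisely a scepticism about how well such proxy features track reality on the ground.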
Nigeria’s push towards embracing artificial intelligence is impeded by several obstacles, such as limited infrastructure, poor access to data, the scarcity of skilled professionals, and the lack of essential financing. These same challenges also raise concerns about AI becoming a tool for exploiting the less privileged, as those with greater resources might exploit the allure of comfort and hope for personal gain, using content creation and influencer marketing to capitalise on the images and narratives of vulnerable people for increased viewership and engagement.
The ethical tightrope is perilous – how can we harness the power of AI to tell these stories while respecting the dignity and privacy of the individuals involved?
This challenge becomes even more complex when considering the global context. The advent of 2Africa’s connectivity brings a wave of voices from the Global South into the cyber world, a realm predominantly orchestrated by Western AI giants. The 2Africa project is a submarine cable system that aims to enhance internet connectivity along the African coastline. It involves laying undersea fibre-optic cables spanning about 37,000 kilometres and connecting 23 countries across Africa, the Middle East, and Europe.
However, this surge of digital voices teeters on a precarious precipice. In the heart of this digital expansion, the narratives of those in the Global South, untold and unheard, risk being swept away in a current of AI algorithms devoid of the cultural heartbeat and the nuanced pulse of these communities. These AI systems, sculpted in faraway lands, could inadvertently paint these stories in colours that do not match their true hues, morphing personal sagas into a public spectacle without consent or context. Consequently, there is a risk that these technologies could misinterpret, misrepresent, or misuse personal narratives and data, deepening the digital divide and perpetuating digital exploitation.
This issue is not just about technology; it is about power dynamics. The entities that develop and control AI technologies sit in positions of significant influence, with the power to decide which stories get told and how. Companies like OpenAI and Google, which dominate AI technology, dictate which narratives are amplified and how they are presented, potentially overshadowing authentic African voices and realities.
It is important to emphasise the digital divide, exacerbated by challenges such as limited internet access, with only 28.2% of Africans connected online, and high data costs, particularly in countries like Nigeria. This divide not only hinders local AI development but also risks turning AI into an extractive industry, in which resources and data are harvested from Africa for the benefit of foreign tech companies without returning equitable value or acknowledging local contributions and needs.
It is easy to look at the leadership of big tech companies and see where the problem lies for Africa. The lack of African representation in tech decision-making spaces can be attributed to several factors, including the scarcity of infrastructure, the high cost of data, and the limited availability of education and training in advanced tech fields within Africa. These barriers prevent African talent from contributing to and influencing the global AI landscape. The development and application of AI technologies often do not consider the linguistic, cultural, and social nuances specific to African communities, leading to solutions that may not be fully applicable or inclusive and further widening the gap between AI’s potential benefits and its actual impact on the continent. This power imbalance raises questions about representation and voice, particularly for those in the Global South. Are our stories being told in a way that reflects our reality, or are they being co-opted to fit a narrative convenient to those in power?
Moreover, the use of AI in storytelling in crisis situations is not just a matter of ethical concern; it is also a matter of accuracy and authenticity. For all their sophistication, AI systems may not fully grasp the complexity and emotional depth of human experiences. They might miss cultural nuances or misinterpret context, leading to narratives that, while factually accurate, miss the essence of the human experience.
So, how do we navigate this ethical maze? The answer lies in a multi-faceted approach. First, there must be a concerted effort to develop AI technologies sensitive to cultural and contextual nuances. This requires diverse teams that bring various perspectives and understandings to the development process. Second, there needs to be a robust framework for consent and privacy, ensuring that the stories AI tells are shared with the permission of those whose stories are being told. Finally, and perhaps most importantly, there must be an ongoing dialogue between technologists, ethicists, storytellers, and the communities affected by crises. This dialogue can ensure that AI-woven narratives are not just technologically sophisticated but also ethically sound and authentically human.
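To make the second point slightly more concrete, the sketch below shows one minimal, hypothetical form a consent check could take: items are admitted to a training or analysis corpus only if they carry explicit, unexpired consent metadata. The schema and field names are invented for illustration, not drawn from any existing standard; a real framework would also need to handle revocation, purpose limitation, and auditing.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical consent schema: field names are illustrative, not a real standard.
@dataclass
class MediaItem:
    item_id: str
    source: str
    consent_for_training: bool       # explicit opt-in for AI training use
    consent_expires: Optional[str]   # ISO date string, None = no expiry recorded

def filter_for_training(items: List[MediaItem], today: str) -> List[MediaItem]:
    """Keep only items with explicit, unexpired consent for training use."""
    usable = []
    for item in items:
        if not item.consent_for_training:
            continue
        if item.consent_expires is not None and item.consent_expires < today:
            continue
        usable.append(item)
    return usable

if __name__ == "__main__":
    corpus = [
        MediaItem("img-001", "social_media", consent_for_training=False, consent_expires=None),
        MediaItem("img-002", "ngo_archive", consent_for_training=True, consent_expires="2026-01-01"),
        MediaItem("img-003", "ngo_archive", consent_for_training=True, consent_expires="2023-01-01"),
    ]
    kept = filter_for_training(corpus, today="2025-06-01")
    print([item.item_id for item in kept])  # only img-002 passes the consent gate
```

The technical part is trivial; the hard part is everything around it: obtaining meaningful consent in the first place, recording it honestly, and respecting it when the incentives point the other way.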