“Decolonizing AI” has become a mantra echoed across institutions, from academia to museums and cultural venues worldwide. As AI boosterism dominates mainstream media, shaping global public debates with either excessive praise for the technology’s capabilities or outright terror over its potentially catastrophic consequences, concerns have emerged about its tendency to reproduce colonial dynamics of exploitation and extraction. These concerns focus both on the labor force behind AI, often made up of poorly paid workers, many of them in the Global South, and on the natural resources—from water to rare metals—and energy required to build and maintain datasets and to train and operate machine learning systems.
Critical decolonial scholars Nick Couldry and Ulises Mejias have coined the term ‘data colonialism’ to describe the process by which Big Tech grabs all sorts of personal information, including ‘affective’ data like reactions to friends’ posts and socially shared pictures and captions, and uses it without consent to train and deploy technologies that can track movements, profile biometrics, and discriminate against minorities and disadvantaged groups.
However, while the mechanisms through which data-powered technologies reactivate a new form of digital colonialism have been widely exposed and denounced, the strategies for counteracting them remain less clear.
What does ‘decolonizing AI’ mean in concrete terms?
I posed this question to Ameera Kawash, a Palestinian-Iraqi-American artist and researcher whose interdisciplinary projects powerfully situate her artistic practice within critical AI studies, exposing and challenging discriminatory and repressive practices in today’s tech sector. Ameera’s recent works include Rescripting Data Bodies: Black Body Radiation, a collaboration with Ghanaian-American artist Ama BE that rethinks the relationship between data and embodiment through data-driven performances inspired by West African masquerade traditions; and Future Archives, an archival and artistic intervention focusing on the impact of generative AI on Palestinian lives and narratives.
What does decolonizing AI really mean? What does it entail, and how can we implement it as a practice in real-world terms?
Decolonizing AI is a multilayered endeavor, requiring a reaction against the philosophy of ‘universal computing’—an approach that is broad, universalistic, and often overrides the local. We must counteract this with varied and localized approaches, focusing on labor, ecological impact, bodies and embodiment, feminist frameworks of consent, and the inherent violence of the digital divide. This holistic thinking should connect the military use of AI-powered technologies with their seemingly innocent, everyday applications in apps and platforms. By exploring and unveiling the inner bond between these uses, we can understand how the normalization of day-to-day AI applications sometimes legitimizes more extreme and military employment of these technologies.
There are normalized, routine pathways to violence embedded in the very infrastructure of AI, such as the way prompts (text inputs, ed.) are rendered into actual imagery. This process can contribute to dehumanizing people, turning them into legitimate targets by rendering them invisible.
Take Palestine as an example: when I experimented with simple prompts like “Palestinian child in a city” or “Palestinian woman walking”, the AI-generated imagery often depicted scenarios that normalize violence against Palestinians. The child is shown running from a collapsing building, with utter urban devastation in the background. Destruction is ubiquitous, yet the perpetrator of this violence, Israel, is never visually held accountable. These AI-generated images help shape a default narrative in which, without context or reason, Palestinians are portrayed as living in perpetual devastation. This kind of imagery perpetuates a biased and harmful narrative, further entrenching their dehumanization and, with it, the normalization of violence against them.
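As a rough illustration of how such a prompt audit could be reproduced, here is a minimal sketch using an open text-to-image pipeline; the model checkpoint, seed, and output paths are assumptions for illustration, not the setup used in the work discussed here.

```python
# Illustrative prompt-audit sketch (not the artist's actual setup):
# generate images for a small set of prompts and save them for
# side-by-side review of what the model renders "by default".
import torch
from diffusers import StableDiffusionPipeline  # assumes the 'diffusers' library is installed

# Hypothetical open checkpoint; any text-to-image model could be substituted.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompts = [
    "Palestinian child in a city",
    "Palestinian woman walking",
]

for i, prompt in enumerate(prompts):
    # A fixed seed keeps the comparison repeatable across runs.
    generator = torch.Generator("cuda").manual_seed(42)
    image = pipe(prompt, generator=generator).images[0]
    image.save(f"audit_{i:02d}.png")  # review outputs manually for recurring tropes
```

The point of such a loop is simply to document, in a repeatable way, what the model returns for neutral prompts, so that recurring tropes can be examined rather than anecdotally described.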
What I call the ‘futuricide’ of the Palestinian people stems from a complex interplay between how models are trained—by scraping the Internet at scale and absorbing all the stereotypical representations already circulating on the web—and how that data is then generalized, made sort of ‘universal.’ As AI generates patterns and models, it crystallizes categories. The Palestinian city resulting from my prompts risks becoming ‘the’ Palestinian city—a quintessential, solidified entity where suffering is turned into a purely visual item that gets infinitized and commodified through generative AI in all its forms and aspects. These traumatic aftereffects occur without a visible perpetrator, resulting in an occupation without an occupier. It mirrors a horror film: pure devastation without cause or reason, just senseless violence and trauma.
If we were to dismantle the colonial foundations embedded in the creation and default structure of AI as conceived today, where should we start?
I believe we should start from very small, local instances. For example, I am working to involve real-world cultural institutions in the creation of datasets, thereby developing highly curated and customized models to train AI without scraping the internet. This approach helps resist the exploitation that typically underpins the making and training of these technologies, which is also where most biases are introduced.
Decolonizing AI means eliminating this exploitative aspect and turning towards more curated, artisanal labor and practices of care.
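A minimal sketch of what one record in such a curated, consent-based dataset might look like is below; the field names, file paths, and partner institution are hypothetical assumptions, not a description of any existing project.

```python
# Illustrative sketch of a curated dataset record built with a partner
# institution instead of web scraping; all field names are hypothetical.
from dataclasses import dataclass, asdict
import json

@dataclass
class CuratedRecord:
    item_id: str            # identifier assigned by the contributing institution
    media_path: str         # file held by the institution, not scraped from the web
    description: str        # caption written or approved by the contributor
    contributor: str        # named source, so provenance stays visible
    consent_granted: bool   # explicit, revocable consent recorded per item
    usage_terms: str        # what a model trained on this item may be used for

def add_record(dataset: list, record: CuratedRecord) -> None:
    """Only items with explicit consent ever enter the training set."""
    if not record.consent_granted:
        raise ValueError(f"{record.item_id}: no consent recorded, item excluded")
    dataset.append(asdict(record))

dataset: list = []
add_record(dataset, CuratedRecord(
    item_id="arch-0001",
    media_path="archive/performance_documentation.jpg",
    description="Documentation of a data-driven performance, described by the artist.",
    contributor="partner cultural institution",
    consent_granted=True,
    usage_terms="research and non-commercial model training only",
))

with open("curated_dataset.json", "w") as f:
    json.dump(dataset, f, indent=2)
```

The consent check is the small technical expression of the practices of care described above: nothing enters the training set without an explicit, revocable agreement attached to it.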
Of course, this approach is not scalable, and perhaps that is part of the problem. Conceiving the digital as quintessentially scalable makes it colonial, commercial, and commodified by default. It might be that decolonizing AI, as a project, is inherently unworkable—machine learning, in its current structure and conception, offers little room for decolonial practices.
However, by collaborating with real-world institutions such as universities and cultural centers to create training datasets, we can address at least one layer of the problem: data collection. There are many layers involved in making AI work, all of which should be considered when attempting to ‘decolonize’ it.
Starting with data collection is a meaningful first step, but we need to acknowledge that a comprehensive approach will require addressing each layer of the process. For example, even if the information is collected fairly, meticulously curated, and gathered with consent, the training model might be exploitative in itself. The act of turning data into labels and categories and universalizing them is inherently problematic and very much part of the colonial legacy. It can perpetuate biases and reinforce harmful structures, regardless of how fair the initial data collection was.
All these questions should be addressed critically, with a holistic approach. For me, it would be useful to think about AI within the framework of critical archival practices. It is very rare to situate AI within archival practices, as we do not typically see data as an archive. Yet, it is. Data is a precious resource from the past upon which future knowledge is built. Understanding AI as an extension of archival practice allows us to critically assess how we collect, categorize, and utilize data, ensuring that we approach it with the same care, consent, and contextual awareness that we would bring to any other archival material. Furthermore, thinking of AI as an archive reveals that there are always selection criteria and an organizing principle driven by choice.
To create a decolonial or anticolonial archive, we must adopt feminist perspectives of consent and care and include other forms of knowledge beyond the traditional, language-based ones. As an artist, this is integral to my daily practice—I engage with non-traditional forms of knowing and learning that are embodied and ephemeral, thus less likely to be datafied and commodified. By embracing these alternative forms of knowledge, perhaps we can resist the commodification and universalization inherent in traditional AI systems. And yet, if we were to truly decolonize AI, would it remain the same object, or would it be something entirely different?
What about the role of generative AI in spreading awareness about the genocide in Gaza? Why did the ‘All Eyes on Rafah’ synthetic picture go viral, while so many evidence-based images offering proof of the massacre have faded from public attention?
Many elements contributed to the virality of the AI-generated image ‘All Eyes on Rafah.’ Firstly, the readable text embedded within the image allowed it to bypass contemporary platform censorship, facilitating exponential sharing. Secondly, people likely perceived it as a ‘safe’ image—it is sanitized and free from explicit violence, making it more palatable for widespread dissemination.
The visuals inhabit a safe space, which is the space of AI, not Palestine. Removing the specific context creates a comfortable distance for viewers. From a Palestinian perspective, this is highly problematic as it contributes to the colonial process of dehumanizing and erasing the local population. Palestinians are redacted from the image, as if their lived experiences are not credible or do not count at all.
The messaging is also problematic: “All Eyes on Rafah”—what does it really mean? It doesn’t suggest actions or call on personal agency. It doesn’t urge you to protest, contact your MP, or demand sanctions on Israel. It doesn’t push you to do anything concrete; it’s very passive. The whole world is looking, witnessing genocide in real time, and sharing the image may amount to little more than a sophisticated form of clicktivism. Doing the absolute minimum—just sharing an image—gives a false sense of having contributed, of having ‘done something.’
Of course, the positive aspect is that 50 million people have shared it across platforms. However, Palestinians do not want to go viral and be invisible at the same time. We need virality to work for us, to bring an end to the violence.
What would happen if these AI-powered technologies were used to affirm Palestinian futures instead of contributing to their annihilation? This question guides my practice. Technology is integral to the discourse on the future, and we Palestinians need to be part of the future. We must be involved in shaping it, not cut out from it.