Israeli public relations (PR) campaigns and influence operations are nothing new, but this time, they seem to be more effective. This is what Alessandro Accorsi, Senior Analyst at the International Crisis Group, told UntoldMag.
His observation became even more relevant after a large Israeli influence operation targeting the US and Canada was revealed at the beginning of June. STOIC, a political marketing firm headquartered in Tel Aviv, was hired by the Israeli Ministry of Diaspora Affairs to boost an account called “United Citizens for Canada” by creating fake accounts and lobbying journalists.
The account spread disinformation about Muslims in the country. Marc Owen Jones, a professor at Hamad bin Khalifa University in Qatar, and the Digital Forensic Research Lab had already exposed the fake accounts and their activities in February and March 2024. At the beginning of June, Meta and OpenAI published reports identifying those accounts and terminated them.
Are these campaigns a new phenomenon? And in what ways do they differ from similar PR and propaganda efforts in previous conflicts?
These campaigns are not totally new. Israel started investing in information warfare in the aftermath of the 2006 Lebanon war. During the 2008 Gaza war, for example, the IDF opened a YouTube channel to post drone footage in order to justify targeting civilian buildings. The ‘Pallywood’ narrative (a portmanteau of ‘Palestine’ and ‘Hollywood’, referring to the alleged exaggeration of Palestinian suffering and supposedly fabricated casualty figures) has been circulating since at least 2012.
In 2014, Instagram was used to promote what has been dubbed ‘Digital militarism’. However, in previous conflicts Israel’s PR campaigns struggled to justify the violence on the ground documented by Palestinian journalists and citizen journalists. Israel lost the digital battle and was forced to cease hostilities.
This time, particularly during the first three months of the conflict, Israel’s information warfare has been a tactical success in supporting its military campaign.
Two main factors contributed to this. First, the structure of social media has changed. Today’s social media spaces are highly polarized, and users heavily rely on their confirmation biases. Israel was able to segment audiences and make every fact contested. For every investigation into an Israeli airstrike, for every protest or political event, there is an alternative framing of events. There is no pretense of competing for the political center. Israel forced audiences to take sides – each in their polarized echo chamber – or to be confused by conflicting narratives and stop paying attention altogether.
Pro-Palestinian and pro-Israeli audiences effectively live in two alternate realities that don’t understand each other. This is also reflected in the traditional media’s struggle to cover the war.
Second, the 7 October attacks served as the cornerstone on which to build every other narrative. By amplifying the violence of 7 October, Israel was able to tell a one-dimensional story of a multidimensional conflict, one that left out its decades of occupation, blockade of Gaza, and systematic violence against the Palestinians. The combination of these two factors allowed Israel to control much of the conversation.
Is hiring commercial firms for such campaigns a common practice? Did Israel engage in these activities only out of desperation after Gaza, or is the practice older? And does Israel in general host many such companies? If so, why?
Previous Netanyahu governments worked closely with NGOs and PR companies to decentralize the campaign against the Boycott Divestment and Sanctions (BDS) movement. The details and extent of these deals are largely shrouded in secrecy, but it’s well established that the now-defunct Ministry of Strategic Affairs contracted organizations to produce allegations against Palestinian NGOs, and networks of influencers to amplify these claims on social media.
We also know that the Israeli for-hire disinformation industry has been booming in the past few years, and that its services might have been contracted by individuals, pressure groups, or foreign governments for operations in the Horn of Africa and sub-Saharan Africa.
Since 7 October, some Israeli tech companies have been enlisted in the war effort, either voluntarily or under contract. For example, a company called Akooda uses generative AI to run astroturfing campaigns. These campaigns are designed to look grassroots and authentic, and they do not formally breach social media platforms’ rules.
Pro-Israeli users on social media can sign up and the Akooda software generates talking points or replies to specific posts for them. The IDF also admitted to running campaigns targeting the Israeli public.
However, the case of STOIC marks the first documented example of the Israeli government contracting a for-hire disinformation company that used inauthentic accounts and websites. The campaign itself wasn’t particularly sophisticated, which may be why it was detected so quickly. It was probably contracted out to avoid attribution, given the far-right tones of the campaign and the fact that it targeted policymakers.
In a recent article you wrote for Foreign Policy, you describe the elaborate and diverse nature of the Israeli propaganda efforts put into action after October 7. Do you think we have reached another level of propaganda strategy, or are we simply witnessing old strategies pursued with more effort? Is there anything really new?
The context is different, but the techniques remain largely the same. In the past, Israel’s pursuit of a military doctrine of maximum damage was at odds with their information warfare strategy. It is the different context – the structure of social media, the October 7 attacks – that made the strategy click this time.
Israel could do things with more intensity than in the past, in an environment that did not effectively punish violations. For instance, X (formerly Twitter) removed checks on misinformation, with its algorithm favoring content that triggers strong emotions. Meta struggled to implement its own policies, overmoderating pro-Palestinian content, while failing to detect and moderate content in Hebrew. Israel leveraged this lack of checks, even turning misinformation directed against Israel itself to its advantage. Social media today works less as a marketplace of ideas and more as a space where groups can lock down their base and discredit opponents. You don’t need to win over TikTokers if users tend to be more pro-Palestinian. You can just discredit and isolate that demographic bubble.
What about new elements that are emerging, like OSINT (Open Source Intelligence), or AI?
Israel has been efficient in reducing the flow of information coming in and out of Gaza. Palestinian journalists have faced a significant delegitimization campaign, while international journalists are not allowed independent access to the Strip. Moreover, internet shutdowns, restrictions on fuel, and the physical targeting of communication infrastructure have reduced the amount of evidence available from the ground. In this context, the work of OSINT researchers has become both more complicated and more crucial. Visual investigations have been the primary means of ensuring some accountability and shedding light on the worst episodes of this war. However, there has been a proliferation of ‘disinfluencer’ accounts that spread disinformation. By mimicking the OSINT look and feel, these accounts aim to undermine confidence in the visual investigations conducted by organizations like Bellingcat, Forensic Architecture, or the visual investigations teams at large traditional media outlets such as the New York Times and the Washington Post.
Disinfluencer accounts have been instrumental in reinforcing confirmation biases and presenting alternative framings of events. Some of their claims have informed the reporting of media outlets that lack their own visual investigation teams.
Regarding AI, its impact has been limited, but the fear of AI has been weaponized. Generative AI excels at depicting things that either don’t exist or can’t be visualized. Despite all the limitations on the flow of information, there were still plenty of images coming out of Gaza. The relatively few AI-generated images that circulated were very obviously manipulated. Their purpose was not to deceive, but to generate disbelief. In effect, they reinforced the ‘Pallywood’ narrative and the claim that ‘the October 7 attacks were a false flag operation’.
For example, a doctored photo of a Palestinian man with three arms holding a child was used to claim that all images coming out of Gaza were fake and manipulated. While generative text might have been more effective in fueling astroturfing or disinformation campaigns similar to those run by STOIC, there is little evidence so far on which to assess its impact.
Did Israel win or lose the information war?
It depends on one’s definition of victory. From a traditional hasbara point of view, Israel is losing the information war. It is evident that Israel is hemorrhaging political capital in this war, with long-term consequences. Public discourse has never been so critical of Israel’s behavior and actions. Students are mobilized in protests, and new solidarity links are emerging. Israel is losing support among large demographics – including some that have traditionally been more pro-Israeli. A generation of journalists and online personalities is challenging established narratives. Visual investigation journalists are asserting themselves in newsrooms that, when they had no direct access, have traditionally relied on press releases and statements from the IDF.
However, if we look at the Israeli government’s objectives from a more limited, short-term perspective – one in which political survivability was prioritized over long-term security – they achieved a tactical victory.
Israel has largely managed to withstand the international public opinion backlash and mobilize enough core support to navigate difficult political turning points. The Israeli government did not try to win over the political center. Instead, it tried to eliminate the political center, reinforcing a binary vision of the conflict that allows them to stay in power and continue military operations against Hamas, whether in an open conflict or a prolonged shadow war.
The information warfare served – and partly still serves – the goal of creating enough space to delay a ceasefire, both domestically and internationally.