A macabre video that spread across social media platforms last week showed a young girl being set on fire by a mob, which those sharing it claimed was the work of Hamas. But the footage was, in fact, from 2015 in Guatemala, long before the Palestinian group’s attack on Israel.
It was just one falsehood during a week in which often violent misinformation flooded social apps, causing confusion and stoking outrage as the conflict unfolded. Platforms such as Elon Musk’s X, Telegram and TikTok drew the ire of regulators for failing to stop the deluge of misleading information, which quickly spilled into mainstream media and real-world politics.
Many widely shared posts in this new information battleground, including viral claims that Qatar threatened to cut off gas exports, are provably false. But others inhabit a grey area alongside evidence of proven atrocities.
Grisly allegations that Hamas “beheaded babies”, for instance, made their way on to tabloid front pages and even into a Joe Biden speech. But the White House later acknowledged that the president had no independent corroboration of the claim. Israel released images of babies killed and burnt by Hamas, but this included no evidence of infant beheadings.
Jean-Claude Goldenstein, chief executive of CREOpoint, a business intelligence group focused on disinformation, said his research had found a “100x explosion” since the weekend before last in the number of viral claims about the Israel-Hamas conflict that fact-checkers found to be false, compared with the rest of 2023.
“Online lies are skyrocketing, leading to intense emotions across multiple time zones, with huge global and social implications,” he said. “The size and spread is without precedent.”
The falsehoods are not just shaping public opinion, but potentially the calculations of protagonists in the war. One Hamas official who spoke to the Financial Times approvingly cited a report on mass IDF desertions attributed to Israel’s Channel 10. The report was not just fake; Channel 10 has not broadcast since 2019.
X, formerly Twitter, now faces an EU investigation into how it is handling illegal content and misinformation. China-owned TikTok and Mark Zuckerberg’s Meta have received warnings from Brussels.
On top of this, officials have raised concerns about the use of the platforms to encourage violence and threats. On Friday, New York attorney-general Letitia James sent letters to Google, Meta, X, TikTok, Reddit and video platform Rumble, asking what steps they had taken to “stop the spread of hateful content encouraging violence against Jewish and Muslim people and institutions”.
TikTok on Sunday said it was removing content that mocks victims of attacks or incites violence and adding restrictions on its live broadcasts.
For years, social media platforms have wrangled over how to tackle fake news and misleading information, which proliferates after the outbreak of conflicts such as Russia’s full-scale invasion of Ukraine.
But researchers say a unique landscape has been created for information warfare, one in which out-of-context or doctored imagery of wartime horrors goes instantly viral. This has been supercharged by users’ hunger for real-time updates and the fraught nature of the Israel-Palestine conflict.
Algorithms often promote the most provocative content. A lack of moderation guardrails on platforms such as X and Telegram, along with other changes, has made it harder than ever for academics and analysts to gather data and track the flow of information.
“It’s a perfect storm,” said Gordon Pennycook, associate professor of psychology at Cornell University, who studies misinformation. He cited “the loadedness of the issue” and “vested interests” as contributing factors.
Amid growing distrust of mainstream media and social pressure to declare a stance or show solidarity, some users have unwittingly shared misinformation. Pop star Justin Bieber posted, then deleted, an Instagram photograph of a destroyed city with the words: “Praying for Israel”. The picture actually showed Gaza. In other cases, footage of military combat is taken from entirely different conflicts or even from video games.
Messaging app Telegram has emerged as an information hub on the ground, and a key communications tool for Iran-backed militant groups such as Lebanon’s Hizbollah. According to Arieh Kovler, a Jerusalem-based political analyst and independent researcher, many Israelis follow Telegram channels with official-sounding names that are quick to share videos without context, along with speculation and rumours that are not vetted for accuracy.
A report by the Atlantic Council’s Digital Forensic Research Lab found that Hamas relied on Telegram as its “primary means of communication” for disseminating statements to supporters. The Telegram channel for the group’s military wing, Al-Qassam Brigades, had tripled in size from prewar levels, surpassing 619,000 subscribers, the report said. Abu Obaida, the Brigades’ spokesperson, has more than 400,000 subscribers to his channel.
Pro-Hamas accounts have seeded misinformation to stoke fears. Shortly after the attacks began, they disseminated videos that falsely claimed to show the Israeli army evacuating bases near Gaza and Israeli generals being captured, according to Goldenstein.
“It’s disinformation on every side,” said Kathleen Carley, a researcher in Carnegie Mellon University’s CyLab Security and Privacy Institute. “There’s third party agendas as well. In some ways it’s being used by some countries in the Middle East to promote their country or [criticise] their adversaries.”
Andrew Borene, executive director at Flashpoint National Security Solutions, a cybersecurity company, said he expected a “real escalation” in disinformation. He said his analysts had tracked chatter in dark web forums among cyber groups and hacktivists indicating they plan to join the fray. He noted that while Iran, one of the biggest cyber players, had not been directly linked to the attacks, it was expected to continue its support of Hamas.
Meta, which has been criticised for failing to adequately police its content, said on Friday that Hamas remained banned from its platform under its “dangerous organisations and individuals” policy, as did “praise or substantive support” for the group. It added that it had established a special operations centre and removed hundreds of thousands of pieces of content that breached its rules.
For the platforms with a free speech bent, X and Telegram, ideals have been tested and the threat of regulatory penalties now looms. After the EU announced its investigation, X, a leaner operation in the wake of Musk’s takeover, moved to take down content and suspend bad actors, including removing “newly created Hamas-affiliated accounts”.
Kovler questioned whether Telegram would take action. He noted that the Dubai-headquartered group had eventually closed channels used by the terrorist group Isis, as well as channels for far-right extremists in the wake of the January 6 2021 riots in Washington.
Telegram said in a statement that it was “evaluating the best approaches and . . . soliciting input from a wide range of third parties”, adding that it wanted to be “careful not to exacerbate the already dire situation by any rush actions”.
As technologies such as artificial intelligence make it quicker and easier for misinformation to spread, the platforms need to invest in more moderation resources, including labelling and fact-checking as well as language capabilities, some experts say.
For now, researchers at the fact-checking and disinformation hubs set up to track fake information say their efforts are being hampered by platforms’ moves to charge researchers more for access to their data or to impose other restrictions.
“We could have told you last year how much was being spread by bots on X, but this year we can’t afford to do it,” said Carley. “Any of the NGOs or think-tanks [in the space], they’ve all been crippled.”
Additional reporting by Raya Jalabi in Beirut and Samer Al-Atrush in Dubai