By Sheila Dang and Riniki Sanyal
(Reuters) – The rapid spread of misleading claims and doctored images in the aftermath of a deadly rampage by Hamas gunmen in Israel has put the focus on Elon Musk’s X platform, which has drawn the ire of the European Union.
Part of the challenge for those combating fake information online is that changes made by Musk earlier this year have made it more difficult to track the full scale of deception on X, the site formerly known as Twitter, social media researchers told Reuters.
Researchers studying the origins and proliferation of misinformation said they have lost the ability to automatically track keywords, hashtags and other information about real-time events, as X eliminated access to a data tool that was free to academics before Musk’s acquisition of the platform in October last year.
Without the tool, researchers now need to manually analyze thousands of links, said Ruslan Trad, a resident fellow at the Atlantic Council’s Digital Forensic Research Lab (DFRLab).
Asked for comment, an X representative said more than 500 unique Community Notes, a feature that lets users add context to potentially misleading content, have been posted about the Israeli-Palestinian conflict.
In a post on the social media platform on Monday, X said it removed newly created accounts affiliated with the Islamist group Hamas and had “actioned tens of thousands of posts for sharing graphic media, violent speech, and hateful conduct.” X did not disclose the actions it took on the posts, which can be removed or have their distribution reduced by the company.
One false claim that spread on X and Meta Platforms' Facebook showed a U.S. government document edited to look like approval for $8 billion in military funds to Israel, according to a report by the Reuters Fact Check team.
A Meta spokesperson said a team of experts, including Hebrew and Arabic speakers, was monitoring the “rapidly evolving situation in real-time.”
Other examples include a video falsely labeled as showing Hamas militants with a kidnapped child, and video from a concert by American singer Bruno Mars miscaptioned as footage from an Israeli music festival that was attacked by Hamas, according to Reuters Fact Check.
In a surprise attack on Saturday, Hamas gunmen rampaged through towns, taking captives and killing hundreds of people in the deadliest Palestinian militant attack in Israel’s history.
While disinformation has spread on all major social media platforms including Facebook and TikTok, X appeared to be the most recent to draw scrutiny from regulators.
On Tuesday, European Union Commissioner Thierry Breton warned Musk that X was spreading “illegal content and disinformation,” according to a letter Breton posted on X. The EU is home to some of the strictest internet laws in the world, which require platforms to fight fake content.
Musk challenged Breton’s post, responding: “Please list the violations you allude to on X, so that the public can see them.”
Under Musk, X has allowed users to pay to verify their accounts and lets certain users earn a portion of ad sales under a revenue share program. The changes now offer paid accounts the incentive to spread provocative or false claims to rack up followers, said Renee DiResta, a research manager at Stanford Internet Observatory.
“Some of these accounts (on X) appeared to have been set up recently to gain virality … and spread popular misinformation about the war,” said Jack Brewster, enterprise editor at Newsguard, which creates reliability ratings for news websites.
Musk himself recommended that X users follow two accounts that had previously spread false claims for “real-time” updates on the conflict, the Washington Post reported. The billionaire owner of the platform posted the recommendation on Sunday and later deleted the post, according to the Washington Post.
Misinformation appeared to be most prevalent on X, according to Brewster and Tamara Kharroub, deputy executive director at Arab Center Washington DC, a nonpartisan research center.
False information has also spread on messaging app Telegram and short-form video app TikTok, said DFRLab’s Trad.
A Telegram spokesperson said the company does not have the “power to verify information.” TikTok did not respond to a request for comment.
Social media platforms face the challenge of walking a line between moderating content to protect users and allowing information to spread in real time, something that has also helped the news media and investigators track civilian deaths.
Walking that line is difficult even when platforms have months to prepare for scheduled events like elections, said Solomon Messing, a professor at New York University’s Center for Social Media and Politics who previously worked at Twitter and Facebook.
“It’s much more difficult when there’s a surprise terrorist attack, particularly one with this much video footage,” said Messing.
Some Community Notes on X have appeared after misleading narratives were viewed by thousands of users, Kharroub said, making them less effective at correcting false information.
X said in its post on Monday that Community Notes typically appear within minutes of content being posted. The company said while it may be “incredibly difficult” to see certain content, it was in the public interest for information to be available in real time.
A YouTube spokesperson said some violent or graphic content may be allowed if it provides sufficient news or documentary value about the conflict, adding the company prohibits content that promotes violent organizations, including video filmed by Hamas. Like other online platforms, YouTube has moderation employees and technology to remove content that violates its rules.
Snap, owner of messaging app Snapchat, said its map feature, which lets users view public posts from anywhere in the world, will remain available in the region with teams monitoring for misinformation and content that incites violence.
(Reporting by Sheila Dang in Dallas and Riniki Sanyal in Bangalore, editing by Deepa Babington)