People Are Disinformation’s Biggest Problem, Not AI, Experts Say

The public’s distrust of institutions and lack of literacy in spotting fake images, videos and audio complicates efforts to combat AI disinformation

(Bloomberg) — Lawmakers, fact-checking organizations and some tech companies are working to combat the threat of a new wave of AI-generated disinformation online, but experts say these efforts are undermined by the public’s distrust of institutions and a general lack of literacy in spotting fake images, videos and audio clips online.

“Social media and human beings have made it so that even when we come in, fact check and say, ‘nope, this is fake,’ people say, ‘I don’t care what you say, this conforms to my worldview,’” said Hany Farid, an expert in deepfake analysis and a professor at the University of California, Berkeley.

“Why are we living in that world where reality seems to be so hard to grip?” he said. “It’s because our politicians, our media outlets and the internet have stoked distrust.”

Farid was speaking on the first episode of a new season of the Bloomberg Originals series AI IRL.

For years, experts have warned of the potential for artificial intelligence to accelerate the spread of disinformation. But the pressure to do something about it increased notably this year after the introduction of a new crop of powerful generative AI tools that make it cheap and easy to produce visuals and text. In the US, there are fears that AI-generated disinformation could impact the 2024 presidential election. Meanwhile, in Europe, the biggest social media platforms are required under a new law to fight the spread of disinformation on their platforms.

So far, the reach and influence of AI-generated disinformation remains unclear, but there is cause for concern. Bloomberg reported last week that misleading AI-generated deepfake voices of politicians were being circulated online days ahead of a narrowly contested vote in Slovakia. Some politicians in the US and Germany have also shared AI-generated images. 

Rumman Chowdhury, a fellow at the Berkman Klein Center for Internet & Society at Harvard University and previously a director at X, the company formerly known as Twitter, agreed that human fallibility is part of the problem in combating disinformation.

“You can have bots, you can have malicious actors,” she said, “but actually a very big percent of the information online that’s fake is often shared by people who didn’t know any better.”

Chowdhury said internet users are generally savvier at spotting fake text posts thanks to years of being confronted with suspicious emails and social media posts. But as AI makes more realistic fake images, audio and video possible, “there is this level of education that people need.” 

“If we see a video that looks real — for example, a bomb hitting the Pentagon — most of us will believe it,” she said. “If we were to see a post and someone said, ‘Hey, a bomb just hit the Pentagon,’ we are actually more likely to be skeptical of that because we’ve been trained more on text than video and images.”

Watch the full episode of AI IRL now or catch up on all previous episodes.

©2023 Bloomberg L.P.