Has Online Disinformation Splintered and Become More Intractable?
Not long ago, the fight against disinformation focused on the major social media platforms, like Facebook and Twitter. When pressed, they often removed troubling content, including misinformation and intentional disinformation about the Covid-19 pandemic. Today, however, there are dozens of new platforms, including some that pride themselves on not moderating — censoring, as they put it — untrue statements in the name of free speech….
The purveyors of disinformation have also become increasingly sophisticated at sidestepping the major platforms’ rules, while the use of video to spread false claims on YouTube, TikTok and Instagram has made such claims harder for automated systems to track than text…. A report last month by NewsGuard, an organization that tracks the problem online, showed that nearly 20 percent of videos presented as search results on TikTok contained false or misleading information on topics such as school shootings and Russia’s war in Ukraine. “People who do this know how to exploit the loopholes,” said Katie Harbath, a former director of public policy at Facebook who now leads Anchor Change, a strategic consultancy.
With the [U.S.] midterm elections only weeks away, the major platforms have all pledged to block, label or marginalize anything that violates company policies, including disinformation, hate speech or calls to violence. Still, the cottage industry of experts dedicated to countering disinformation — think tanks, universities and nongovernmental organizations — says the platforms are not doing enough. The Stern Center for Business and Human Rights at New York University warned last month, for example, that the major platforms continued to amplify “election denialism” in ways that undermined trust in the democratic system.