Resources

Has Online Disinformation Splintered and Become More Intractable?

Not long ago, the fight against disinformation focused on the major social media platforms, like Facebook and Twitter. When pressed, they often removed troubling content, including misinformation and intentional disinformation about the Covid-19 pandemic. Today, however, there are dozens of new platforms, including some that pride themselves on not moderating — censoring, as they put it — untrue statements in the name of free speech….

The purveyors of disinformation have also become increasingly sophisticated at sidestepping the major platforms’ rules, while the use of video to spread false claims on YouTube, TikTok and Instagram has made them harder for automated systems to track than text…. A report last month by NewsGuard, an organization that tracks the problem online, showed that nearly 20 percent of videos presented as search results on TikTok contained false or misleading information on topics such as school shootings and Russia’s war in Ukraine. “People who do this know how to exploit the loopholes,” said Katie Harbath, a former director of public policy at Facebook who now leads Anchor Change, a strategic consultancy.

With the [U.S.] midterm elections only weeks away, the major platforms have all pledged to block, label or marginalize anything that violates company policies, including disinformation, hate speech or calls to violence. Still, the cottage industry of experts dedicated to countering disinformation — think tanks, universities and nongovernmental organizations — says the platforms are not doing enough. The Stern Center for Business and Human Rights at New York University warned last month, for example, that the major platforms continued to amplify “election denialism” in ways that undermined trust in the democratic system.


Meta’s New Text-to-Video AI Generator is Like DALL-E for Video

A team of machine learning engineers from Facebook’s parent company Meta has unveiled a new system called Make-A-Video. As the name suggests, this AI model allows users to type in a rough description of a scene, and it will generate a short video matching their text. The videos are clearly artificial, with blurred subjects and distorted animation, but still represent a significant development in the field of AI content generation.

“Generative AI research is pushing creative expression forward by giving people tools to quickly and easily create new content,” said Meta in a blog post announcing the work. “With just a few words or lines of text, Make-A-Video can bring imagination to life and create one-of-a-kind videos full of vivid colors and landscapes.” In a Facebook post, Meta CEO Mark Zuckerberg described the work as “amazing progress,” adding: “It’s much harder to generate video than photos because beyond correctly generating each pixel, the system also has to predict how they’ll change over time.”


How Facebook and Google fund global misinformation

MIT’s Technology Review shares data from a Facebook-run tool called CrowdTangle. It shows that by 2018 in the nation of Myanmar (population: 53 million), “all the engagement had instead gone to fake news and clickbait websites.”

“In a country where Facebook is synonymous with the internet, the low-grade content overwhelmed other information sources.”
[T]he sheer volume of fake news and clickbait acted like fuel on the flames of already dangerously high ethnic and religious tensions. It shifted public opinion and escalated the conflict, which ultimately led to the death of 10,000 Rohingya, by conservative estimates, and the displacement of 700,000 more. In 2018, a United Nations investigation determined that the violence against the Rohingya constituted a genocide and that Facebook had played a “determining role” in the atrocities. Months later, Facebook admitted it hadn’t done enough “to help prevent our platform from being used to foment division and incite offline violence.” Over the last few weeks, the revelations from the Facebook Papers, a collection of internal documents provided to Congress and a consortium of news organizations by whistleblower Frances Haugen, have reaffirmed what civil society groups have been saying for years: Facebook’s algorithmic amplification of inflammatory content, combined with its failure to prioritize content moderation outside the US and Europe, has fueled the spread of hate speech and misinformation, dangerously destabilizing countries around the world.

But there’s a crucial piece missing from the story. Facebook isn’t just amplifying misinformation.

The company is also funding it.

An MIT Technology Review investigation, based on expert interviews, data analyses, and documents that were not included in the Facebook Papers, has found that Facebook and Google are paying millions of ad dollars to bankroll clickbait actors, fueling the deterioration of information ecosystems around the world.

Facebook pays these publishers for permission to open their content within Facebook’s app (where Facebook controls the advertising) rather than having users click through to the publisher’s own website, reports Technology Review:
Early on, Facebook performed little quality control on the types of publishers joining the program. The platform’s design also didn’t sufficiently penalize users for posting identical content across Facebook pages — in fact, it rewarded the behavior. Posting the same article on multiple pages could as much as double the number of users who clicked on it and generated ad revenue. Clickbait farms around the world seized on this flaw as a strategy — one they still use today… Clickbait actors cropped up in Myanmar overnight. With the right recipe for producing engaging and evocative content, they could generate thousands of U.S. dollars a month in ad revenue, or 10 times the average monthly salary — paid to them directly by Facebook. An internal company document, first reported by MIT Technology Review in October, shows that Facebook was aware of the problem as early as 2019… At one point, as many as 60% of the domains enrolled in Instant Articles were using the spammy writing tactics employed by clickbait farms, the report said…

75% of users who were exposed to clickbait content from farms run in Macedonia and Kosovo had never followed any of the pages. Facebook’s content-recommendation system had instead pushed it into their news feeds.

Technology Review notes that Facebook now pays billions of dollars to the publishers in their program. It’s a long and detailed article, which ultimately concludes that the problem “is now happening on a global scale.”
Thousands of clickbait operations have sprung up, primarily in countries where Facebook’s payouts provide a larger and steadier source of income than other forms of available work. Some are teams of people while others are individuals, abetted by cheap automated tools that help them create and distribute articles at mass scale…

Google is also culpable. Its AdSense program fueled the Macedonia- and Kosovo-based farms that targeted American audiences in the lead-up to the 2016 presidential election. And it’s AdSense that is incentivizing new clickbait actors on YouTube to post outrageous content and viral misinformation.

Reached for comment, a Facebook spokesperson told Technology Review that the publication had misunderstood the issue. The spokesperson also said, “we’ve invested in building new expert-driven and scalable solutions to these complex issues for many years, and will continue doing so.”

Google’s spokesperson confirmed that examples cited in the article violated the company’s policies, and Google removed the content. The spokesperson added: “We work hard to protect viewers from clickbait or misleading content across our platforms and have invested heavily in systems that are designed to elevate authoritative information.”


Seeing no longer means believing

Manipulated images, whether for entertainment or disinformation, are common on social media. But with millions of images and thousands of hours of video uploaded every day, how to sort the real from the fake?

If you use social media, the chances are you see (and forward) some of the more than 3.2 billion images and 720,000 hours of video shared daily. When faced with such a glut of content, how can we know what’s real and what’s not? While one part of the solution is an increased use of content verification tools, it’s equally important we all boost our digital media literacy. Ultimately, one of the best lines of defence — and the only one you can control — is you.

Misinformation (when you accidentally share false content) and disinformation (when you intentionally share it) in any medium can erode trust in civil institutions such as news organisations, coalitions and social movements. However, fake photos and videos are often the most potent.

For those with a vested political interest, creating, sharing and/or editing false images can distract, confuse and manipulate viewers to sow discord and uncertainty (especially in already polarised environments). Posters and platforms can also make money from the sharing of fake, sensationalist content.

Only 11-25% of journalists globally use social media content verification tools, according to the International Centre for Journalists.
Could you spot a doctored image?

Consider this photo of Martin Luther King Jr (pic.twitter.com/5W38DRaLHr). This altered image clones part of the background over King Jr’s finger, so it looks like he’s flipping off the camera. It has been shared as genuine on Twitter, Reddit and white supremacist websites.

In the original 1964 photo, King flashed the “V for victory” sign after learning the US Senate had passed the civil rights bill.

Beyond adding or removing elements, there’s a whole category of photo manipulation in which images are fused together.

Earlier this year, a photo of an armed man was photoshopped by Fox News, which overlaid the man onto other scenes without disclosing the edits, the Seattle Times reported.

Similarly, the image below was shared thousands of times on social media in January, during Australia’s Black Summer bushfires. The AFP’s fact check confirmed it is not authentic and is actually a combination of several separate photos.

Fully and partially synthetic content

Online, you’ll also find sophisticated “deepfake” videos showing (usually famous) people saying or doing things they never did. Less advanced versions can be created using apps such as Zao and Reface.

A team from the Massachusetts Institute of Technology created this fake video showing US President Richard Nixon reading lines from a speech crafted in case the 1969 moon landing failed. (YouTube)

Or, if you don’t want to use your photo for a profile picture, you can default to one of several websites offering hundreds of thousands of AI-generated, photorealistic images of people.
AI-generated faces: these people don’t exist; they’re images generated by artificial intelligence. (Generated Photos, CC BY)
Editing pixel values and the (not so) simple crop

Cropping can greatly alter the context of a photo, too.

We saw this in 2017, when a US government employee edited official pictures of Donald Trump’s inauguration to make the crowd appear bigger, according to The Guardian. The staffer cropped out the empty space “where the crowd ended” for a set of pictures for Trump.
Views of the crowds at the inaugurations of former US President Barack Obama in 2009 (left) and President Donald Trump in 2017 (right). (AP)

But what about edits that only alter pixel values such as colour, saturation or contrast?

One historical example illustrates the consequences of this. In 1994, Time magazine considerably “darkened” OJ Simpson’s police mugshot on its cover. This added fuel to a case already plagued by racial tension, to which the magazine responded:

No racial implication was intended, by Time or by the artist.

Tools for debunking digital fakery

For those of us who don’t want to be duped by visual mis/disinformation, there are tools available — although each comes with its own limitations (something we discuss in our recent paper).

Invisible digital watermarking has been proposed as a solution. However, it isn’t widespread and requires buy-in from both content publishers and distributors.
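
The article doesn’t explain how invisible watermarking works, but the basic idea can be sketched in a few lines. The example below is a deliberately naive least-significant-bit scheme in Python (using Pillow and NumPy; the function names and file handling are illustrative assumptions, not any publisher’s actual system): it hides a short message in the lowest bit of each pixel, which only survives lossless formats.

```python
# Naive least-significant-bit (LSB) watermark sketch -- illustrative only.
# Real invisible watermarks use far more robust schemes that survive
# compression, resizing and screenshots.
import numpy as np
from PIL import Image

def embed_watermark(image_path: str, message: str, out_path: str) -> None:
    """Hide a short ASCII message in the least significant bit of the red channel."""
    pixels = np.array(Image.open(image_path).convert("RGB"))
    bits = np.array([int(b) for byte in message.encode("ascii") for b in f"{byte:08b}"],
                    dtype=np.uint8)
    red = pixels[..., 0].flatten()
    if bits.size > red.size:
        raise ValueError("message too long for this image")
    red[: bits.size] = (red[: bits.size] & 0xFE) | bits   # overwrite the lowest bit
    pixels[..., 0] = red.reshape(pixels.shape[:2])
    Image.fromarray(pixels).save(out_path, format="PNG")  # must be lossless, or the bits are lost

def extract_watermark(image_path: str, length: int) -> str:
    """Read back `length` ASCII characters embedded by embed_watermark."""
    red = np.array(Image.open(image_path).convert("RGB"))[..., 0].flatten()
    bits = red[: length * 8] & 1
    chars = [int("".join(map(str, bits[i : i + 8])), 2) for i in range(0, bits.size, 8)]
    return bytes(chars).decode("ascii")
```

Even this toy mark disappears the moment someone re-saves the image as a JPEG, which is one reason the approach “requires buy-in from both content publishers and distributors.”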

Reverse image search (such as Google’s) is often free and can be helpful for identifying earlier, potentially more authentic copies of images online. That said, it’s not foolproof because it:

relies on unedited copies of the media already being online
doesn’t search the entire web
doesn’t always allow filtering by publication time. Some reverse image search services such as TinEye support this function, but Google’s doesn’t.
returns only exact matches or near-matches, so it’s not thorough. For instance, editing an image and then flipping its orientation can fool Google into thinking it’s an entirely different one (the sketch below illustrates why).
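
To see why a simple flip can defeat near-duplicate matching, here is a minimal perceptual-hashing sketch. This is not Google’s actual matching pipeline; it uses the open-source ImageHash library, and the file names are hypothetical. Perceptual hashes place visually similar images close together in Hamming distance, but a mirrored copy usually lands far away.

```python
# Perceptual-hash comparison sketch -- illustrative only, not how Google's own
# matching works. File names are hypothetical.
import imagehash                      # pip install ImageHash
from PIL import Image, ImageOps

original = Image.open("original.jpg")
suspect = Image.open("suspect.jpg")

h_orig = imagehash.phash(original)                    # 64-bit perceptual hash
h_susp = imagehash.phash(suspect)
h_flip = imagehash.phash(ImageOps.mirror(original))   # same photo, mirrored

# Subtracting two hashes gives the Hamming distance; small values suggest near-duplicates.
print("suspect vs original:", h_orig - h_susp)
print("mirrored vs original:", h_orig - h_flip)       # usually large: a flip defeats naive matching
```

Commercial services use far more sophisticated matching, but the same basic trade-off (tolerant matching versus easy evasion) applies.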

Most reliable tools are sophisticated

Meanwhile, manual forensic detection methods for visual mis/disinformation focus mostly on edits visible to the naked eye, or rely on examining features that aren’t included in every image (such as shadows). They’re also time-consuming, expensive and need specialised expertise.
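
As a concrete example of what manual forensic detection can look like, below is a minimal error level analysis (ELA) sketch in Python with Pillow. ELA is not a method the article names; it is one classic technique: re-save the image as JPEG at a known quality and amplify the per-pixel difference, since regions pasted in after the last save often recompress with a different error level. The file name is hypothetical, and the output still has to be interpreted by a trained eye, which is exactly the article’s point about expertise.

```python
# Error level analysis (ELA) sketch -- one classic manual forensic technique,
# shown only to illustrate why interpretation requires expertise.
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path: str, quality: int = 90, scale: float = 15.0) -> Image.Image:
    """Re-save the image as JPEG and amplify the per-pixel difference.
    Regions edited after the last save often stand out with a different error level."""
    original = Image.open(path).convert("RGB")
    original.save("_ela_tmp.jpg", "JPEG", quality=quality)
    resaved = Image.open("_ela_tmp.jpg")
    diff = ImageChops.difference(original, resaved)
    return ImageEnhance.Brightness(diff).enhance(scale)

# Usage: error_level_analysis("suspect.jpg").show()
```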

Still, you can access work in this field by visiting sites such as Snopes.com — which has a growing repository of “fauxtography”.

Computer vision and machine learning also offer relatively advanced detection capabilities for images and videos. But they too require technical expertise to operate and understand.

Moreover, improving them involves using large volumes of “training data”, but the image repositories used for this usually don’t contain the real-world images seen in the news.

If you use an image verification tool such as the REVEAL project’s image verification assistant, you might need an expert to help interpret the results.

The good news, however, is that before turning to any of the above tools, there are some simple questions you can ask yourself to potentially figure out whether a photo or video on social media is fake. Think:

was it originally made for social media?
how widely and for how long was it circulated?
what responses did it receive?
who were the intended audiences?


Why Teens Are Falling for TikTok Conspiracy Theories

TikTok skews young (reportedly one-third of its daily users in the US are 14 or younger), and celebrity gossip has long been the lingua franca of social media for people of all ages. Right-wing conspiracy groups like QAnon have been spreading made-up stories about those in power on networks like Facebook for years. Now those ideas have jumped to TikTok, where they’re being metabolized by much younger consumers. Those things all scan. What doesn’t, however, is why teens believe them.

The short answer? TikTok is full of crazy ideas—conspiracies are no different. They’ve been normalized by the platform where many young people spend most of their time. “Many of these conspiracy sites and stories are entertaining. They are social gathering spots. They are exciting,” says Nancy Rosenblum, Senator Joseph S. Clark Professor of Ethics in Politics and Government at Harvard University and co-author of A Lot of People Are Saying: The New Conspiracism and the Assault on Democracy. “It’s small wonder that teenagers who ‘live on the screen’ would be drawn to the drama.”

Easy access to social media’s redistribution tools worsens this problem. With every like, share, send, and retweet, teenagers are popularizing this content worldwide. “On social media, repetition substitutes for validation,” says Russell Muirhead, a professor of democracy and politics at Dartmouth College and Rosenblum’s co-author. “Repetition is what breathes air into conspiracy theories, and social media is all about repetition. But repeating something that’s false does not make it more true! Teenagers are just as vulnerable to this as grown-ups.”

This wouldn’t be such a problem if teenagers weren’t so attached to social media. So fond, in fact, that some 54 percent of teens get the bulk of their news from it. If this sounds concerning, that’s because it is. With teenagers relying on TikTok as their sole source of information, it makes sense for my generation to become absorbed in webs of falsities and to live as largely uninformed citizens.


Long Before Cambridge Analytica, Simulmatics Linked Data and Politics

NPR reporter Shannon Bond reports on a little-known — and now nearly entirely forgotten — company called Simulmatics, which had technology that used vast amounts of data to profile voters and ultimately help John F. Kennedy win the 1960 election. From the report:
The […] company was called Simulmatics, the subject of Harvard historian and New Yorker writer Jill Lepore’s timely new book, If Then: How the Simulmatics Corporation Invented the Future. Before Cambridge Analytica, before Facebook, before the Internet, there was Simulmatics’ “People Machine,” in Lepore’s telling: “A computer program designed to predict and manipulate human behavior, all sorts of human behavior, from buying a dishwasher to countering an insurgency to casting a vote.”

Lepore unearths Simulmatics’ story and makes the argument that, amid a broader proliferation of behavioral science research across academia and government in the 1960s, the company paved the way for our 21st-century obsession with data and prediction. Simulmatics, she argues, is “a missing link in the history of technology,” the antecedent to Facebook, Google and Amazon and to algorithms that attempt to forecast who will commit crimes or get good grades. “It lurks behind the screen of every device,” she writes.

If Then presents Simulmatics as both ahead of its time and, more often than not, overpromising and under-delivering. The company was the brainchild of Ed Greenfield, an advertising executive straight out of Mad Men, who believed computers could help Democrats recapture the White House. He wanted to create a model of the voting population that could tell you how voters would respond to whatever a candidate did or said. The name Simulmatics was a contraction of “simulation” and “automation.” As Greenfield explained it to investors, Lepore writes: “The Company proposes to engage principally in estimating probable human behavior by the use of computer technology.” The People Machine was originally built to analyze huge amounts of data ahead of the 1960 election, in what Lepore describes as, at the time, “the largest political science research project in American history.”


US Teens Are Being Paid to Spread Disinformation on Social Media

The Washington Post covered “a sprawling yet secretive campaign that experts say evades the guardrails put in place by social media companies to limit online disinformation of the sort used by Russia” during the 2016 U.S. presidential campaign.

According to four people with knowledge of the effort, “Teenagers, some of them minors, are being paid to pump out the messages…”
The campaign draws on the spam-like behavior of bots and trolls, with the same or similar language posted repeatedly across social media. But it is carried out, at least in part, by humans paid to use their own accounts, though nowhere disclosing their relationship with Turning Point Action or the digital firm brought in to oversee the day-to-day activity. One user included a link to Turning Point USA’s website in his Twitter profile until The Washington Post began asking questions about the activity. In response to questions from The Post, Twitter on Tuesday suspended at least 20 accounts involved in the activity for “platform manipulation and spam.” Facebook also removed a number of accounts as part of what the company said is an ongoing investigation…

The months-long effort by the tax-exempt nonprofit is among the most ambitious domestic influence campaigns uncovered this election cycle, said experts tracking the evolution of deceptive online tactics. “In 2016, there were Macedonian teenagers interfering in the election by running a troll farm and writing salacious articles for money,” said Graham Brookie, director of the Atlantic Council’s Digital Forensic Research Lab. “In this election, the troll farm is in Phoenix….”

The messages — some of them false and some simply partisan — were parceled out in precise increments as directed by the effort’s leaders, according to the people with knowledge of the highly coordinated activity, most of whom spoke on the condition of anonymity to protect the privacy of minors carrying out the work… The messages have appeared mainly as replies to news articles about politics and public health posted on social media. They seek to cast doubt on the integrity of the electoral process, asserting that Democrats are using mail balloting to steal the election — “thwarting the will of the American people,” they alleged. The posts also play down the threat from covid-19, which claimed the life of Turning Point’s co-founder Bill Montgomery in July…

By seeking to rebut mainstream news articles, the operation illustrates the extent to which some online political activism is designed to discredit the media. While Facebook and Twitter have pledged to crack down on what they have labeled coordinated inauthentic behavior, in Facebook’s case, and platform manipulation and spam, as Twitter defines its rules, their efforts falter in the face of organizations willing to pay users to post on their own accounts, maintaining the appearance of independence and authenticity.

One parent even said their two teenagers had been posting the messages since June as “independent contractors” — while being paid less than minimum wage.


From Climate Change to the Dangers of Smoking: How Powerful Interests Made Us Doubt Everything

BBC News reports:
In 1991, the trade body that represents electrical companies in the U.S., the Edison Electric Institute, created a campaign called the Information Council for the Environment which aimed to “Reposition global warming as theory (not fact)”. Some details of the campaign were leaked to the New York Times. “They ran advertising campaigns designed to undermine public support, cherry picking the data to say, ‘Well if the world is warming up, why is Kentucky getting colder?’ They asked rhetorical questions designed to create confusion, to create doubt,” argued Naomi Oreskes, professor of the history of science at Harvard University and co-author of Merchants of Doubt. But back in the 1990s there were many campaigns like this…

Most of the organisations opposing or denying climate change science were right-wing think tanks, who tended to be passionately anti-regulation. These groups made convenient allies for the oil industry, as they would argue against action on climate change on ideological grounds. Jerry Taylor spent 23 years with the Cato Institute — one of those right wing think tanks — latterly as vice president. Before he left in 2014, he would regularly appear on TV and radio, insisting that the science of climate change was uncertain and there was no need to act.

Now, he realises his arguments were based on a misinterpretation of the science, and he regrets the impact he’s had on the debate.

Harvard historian Naomi Oreskes discovered leading climate-change skeptics had also been prominent skeptics on the dangers of cigarette smoking. “That was a Eureka moment,” Oreskes tells BBC News. “We realised this was not a scientific debate.”

Decades before the energy industry tried to undermine the case for climate change, tobacco companies had used the same techniques to challenge the emerging links between smoking and lung cancer in the 1950s… As a later document by tobacco company Brown and Williamson summarised the approach: “Doubt is our product, since it is the best means of competing with the ‘body of fact’ that exists in the minds of the general public.” Naomi Oreskes says this understanding of the power of doubt is vital. “They realise they can’t win this battle by making a false claim that sooner or later would be exposed. But if they can create doubt, that would be sufficient — because if people are confused about the issue, there’s a good chance they’ll just keep smoking….”

Academics like David Michaels, author of The Triumph of Doubt, fear the use of uncertainty in the past to confuse the public and undermine science has contributed to a dangerous erosion of trust in facts and experts across the globe today, far beyond climate science or the dangers of tobacco. He cites public attitudes to modern issues like the safety of 5G, vaccinations — and coronavirus.

“By cynically manipulating and distorting scientific evidence, the manufacturers of doubt have seeded in much of the public a cynicism about science, making it far more difficult to convince people that science provides useful — in some cases, vitally important — information.”


Facebook and Google Serve As Vectors For Misinformation While Hobbling Local Journalism and Collecting Taxpayer Subsidies, Group Says

Facebook and Google are hollowing out local communities by serving as vectors for misinformation while hobbling local journalism and collecting taxpayer subsidies, a new paper from progressive think tank the American Economic Liberties Project charges. Both companies cite benefits their platforms offer small businesses as a key defense against critiques of their size and power. The paper, dated Aug. 30, is sure to presage further scrutiny of the impact they’ve had on local communities.

The brief, by Pat Garofalo, the group’s director of state and local policy, argues that:

Google doesn’t do enough to protect against fraud, allowing scammers to get their own numbers and websites listed on Google to the detriment of legitimate businesses.
Facebook, by design, boosts shoddy and sensationalist content, crowding out legitimate local news and information, all as it and Google have come to dominate the local advertising market that was long the lifeblood of community journalism.
Both have sucked up potentially billions in local taxpayer dollars via tax breaks as well as subsidies and discounts on utilities they’ve gotten in exchange for building data centers.

Garofalo recommends remedies including more antitrust enforcement at the federal and state levels and an end to preferential treatment by states and localities, either voluntarily or under force of law.


Study: US Adults Who Mostly Rely On Social Media For News Are Less Informed, Exposed To More Conspiracy Theories

According to a new report from Pew Research, U.S. adults who get their news largely from social media platforms tend to follow the news less closely and end up less informed on several key subjects when compared to those who use other sources, like TV, radio, and news publications.

The firm first asked people how they most commonly get their news. About one-in-five (18%) said they mostly use social media to stay current. That’s close to the percentages of those who say they use local TV (16%) or cable TV (16%) news, but fewer than those who say they go directly to a news website or app (25%). Another 13% said they use network TV and only 3% said they read a newspaper. To be clear, any study that asks users to self-report how they do something isn’t going to be as useful as those that collect hard data on what the consumers actually do. In other words, people who think they’re getting most of their news from TV may be, in reality, undercounting the time they spent on social media, or vice versa.

That said, among this group of “primarily” social media news consumers, only 8% said they were following the key news story of the 2020 U.S. election “very closely,” compared with 37% of cable TV viewers who said the same, or the 33% of print users who also said this. The social media group, on this topic, was closer to the local TV group (11%). On the topic of the Coronavirus outbreak, only around a quarter (23%) of the primarily social media news consumers said they were following news of COVID-19 “very closely.” All other groups again reported a higher percentage, including those who primarily used cable TV (50%), national network TV (50%), news websites and apps (44%), and local TV (32%) for news.

Related to this finding, the survey respondents were also asked 29 different fact-based questions about news topics from recent days, including those on Trump’s impeachment, the COVID-19 outbreak, and others. Those who scored the lowest on these topics were the consumers who said they primarily used social media to get their news. Across 9 questions related to foundational political knowledge, only 17% of primarily social media news consumers scored “high political knowledge,” meaning they got 8 to 9 of the questions right. 27% scored “middle political knowledge” (6-7 right) and 57% scored “low political knowledge” (5 or fewer right). The only group that did worse were those who primarily relied on local TV. Meanwhile, 45% of those who got their news primarily from websites and apps had “high political knowledge,” compared with 42% for radio, 41% for print, 35% for cable TV, and 29% for network TV. The social media group of news consumers was also more exposed to fringe conspiracies, like the idea that the pandemic was intentionally planned.


Renée DiResta: The Lunatics are Running the Asylum
