Resources

YouTube Stars Were Offered Money to Spread Vaccine Misinformation

“A mysterious marketing agency secretly offered to pay social media stars to spread disinformation about Covid-19 vaccines,” reports the BBC.

“Their plan failed when the influencers went public about the attempt to recruit them.”

An influencer marketing agency called Fazze offered to pay [Mirko Drotschmann, a German YouTuber and journalist] to promote what it said was leaked information suggesting the death rate among people who had the Pfizer vaccine was almost three times that of the AstraZeneca jab. The information provided wasn’t true. It quickly became apparent to Mirko that he was being asked to spread disinformation to undermine public confidence in vaccines in the middle of a pandemic. “I was shocked,” says Mirko, “then I was curious, what’s behind all that?” In France, science YouTuber Léo Grasset received a similar offer. The agency offered him 2,000 euros if he would take part.

Fazze said it was acting for a client who wished to remain anonymous…

Both Léo and Mirko were appalled by the false claims. They pretended to be interested in order to try to find out more and were provided with detailed instructions about what they should say in their videos. In stilted English, the brief instructed them to “Act like you have the passion and interest in this topic.” It told them not to mention that the video had a sponsor, and instead to pretend they were spontaneously giving advice out of concern for their viewers… Since Léo and Mirko blew the whistle, at least four other influencers in France and Germany have gone public to reveal that they also rejected Fazze’s attempts to recruit them.

But German journalist Daniel Laufer has identified two influencers who may have taken up the offer.

But who’s behind the mysterious influencer marketing agency?

Fazze is part of AdNow, a digital marketing company registered in both Russia and the UK. The BBC has made multiple attempts to contact AdNow by phone, email and even a letter couriered to their Moscow headquarters, but they have not responded. Eventually we managed to contact Ewan Tolladay, one of two directors of the British arm of AdNow, who lives in Durham. Mr. Tolladay said he had very little to do with Fazze, which he said was a joint venture between his fellow director, a Russian man called Stanislav Fesenko, and another person whose identity he didn’t know… Both the French and German authorities have launched investigations into Fazze’s approaches to influencers. But the identity of the agency’s mystery client remains unclear.

There has been speculation about the Russian connections to this scandal and the interests of the Russian state in promoting its own vaccine — Sputnik V.

French YouTuber Léo Grasset believes we’ll see more attempts to manipulate public opinion, especially among young people, apparently because it’s incredibly easy.

“Just spend the same money on TikTok creators, YouTube creators,” he tells the BBC. “The whole ecosystem is perfectly built for maximum efficiency of disinformation right now.”

Police Are Telling ShotSpotter To Alter Evidence From Gunshot-Detecting AI

On May 31 last year, 25-year-old Safarain Herring was shot in the head and dropped off at St. Bernard Hospital in Chicago by a man named Michael Williams. He died two days later. Chicago police eventually arrested the 64-year-old Williams and charged him with murder (Williams maintains that Herring was hit in a drive-by shooting). A key piece of evidence in the case is video surveillance footage showing Williams’ car stopped on the 6300 block of South Stony Island Avenue at 11:46 p.m. – the time and location where police say they know Herring was shot. How did they know that’s where the shooting happened? Police said ShotSpotter, a surveillance system that uses hidden microphone sensors to detect the sound and location of gunshots, generated an alert for that time and place. Except that’s not entirely true, according to recent court filings.

That night, 19 ShotSpotter sensors detected a percussive sound at 11:46 p.m. and determined the location to be 5700 South Lake Shore Drive – a mile away from the site where prosecutors say Williams committed the murder, according to a motion filed by Williams’ public defender. The company’s algorithms initially classified the sound as a firework. That weekend had seen widespread protests in Chicago in response to George Floyd’s murder, and some of those protesting lit fireworks. But after the 11:46 p.m. alert came in, a ShotSpotter analyst manually overrode the algorithms and “reclassified” the sound as a gunshot. Then, months later and after “post-processing,” another ShotSpotter analyst changed the alert’s coordinates to a location on South Stony Island Avenue near where Williams’ car was seen on camera. “Through this human-involved method, the ShotSpotter output in this case was dramatically transformed from data that did not support criminal charges of any kind to data that now forms the centerpiece of the prosecution’s murder case against Mr. Williams,” the public defender wrote in the motion.

The document is what’s known as a Frye motion – a request for a judge to examine and rule on whether a particular forensic method is scientifically valid enough to be entered as evidence. Rather than defend ShotSpotter’s technology and its employees’ actions in a Frye hearing, the prosecutors withdrew all ShotSpotter evidence against Williams. The case isn’t an anomaly, and the pattern it represents could have huge ramifications for ShotSpotter in Chicago, where the technology generates an average of 21,000 alerts each year. The technology is also currently in use in more than 100 cities. Motherboard’s review of court documents from the Williams case and other trials in Chicago and New York State, including testimony from ShotSpotter’s favored expert witness, suggests that the company’s analysts frequently modify alerts at the request of police departments – some of which appear to be grasping for evidence that supports their narrative of events.

Seeing no longer means believing

Manipulated images, whether for entertainment or disinformation, are common on social media. But with millions of images and thousands of hours of video uploaded every day, how can we sort the real from the fake?

If you use social media, the chances are you see (and forward) some of the more than 3.2 billion images and 720,000 hours of video shared daily. When faced with such a glut of content, how can we know what’s real and what’s not? While one part of the solution is an increased use of content verification tools, it’s equally important we all boost our digital media literacy. Ultimately, one of the best lines of defence — and the only one you can control — is you.

Misinformation (when you accidentally share false content) and disinformation (when you intentionally share it) in any medium can erode trust in civil institutions such as news organisations, coalitions and social movements. However, fake photos and videos are often the most potent.

For those with a vested political interest, creating, sharing and/or editing false images can distract, confuse and manipulate viewers to sow discord and uncertainty (especially in already polarised environments). Posters and platforms can also make money from the sharing of fake, sensationalist content.

Only 11-25% of journalists globally use social media content verification tools, according to the International Center for Journalists.

Could you spot a doctored image?

Consider this photo of Martin Luther King Jr (pic.twitter.com/5W38DRaLHr). This altered image clones part of the background over King’s finger, so it looks like he’s flipping off the camera. It has been shared as genuine on Twitter, Reddit and white supremacist websites.

In the original 1964 photo, King flashed the “V for victory” sign after learning the US Senate had passed the civil rights bill.

Beyond adding or removing elements, there’s a whole category of photo manipulation in which images are fused together.

Earlier this year, a photo of an armed man was photoshopped by Fox News, which overlaid the man onto other scenes without disclosing the edits, the Seattle Times reported.

Similarly, the image below was shared thousands of times on social media in January, during Australia’s Black Summer bushfires. The AFP’s fact check confirmed it is not authentic and is actually a combination of several separate photos.

Fully and partially synthetic content

Online, you’ll also find sophisticated “deepfake” videos showing (usually famous) people saying or doing things they never did. Less advanced versions can be created using apps such as Zao and Reface.

A team from the Massachusetts Institute of Technology created this fake video showing US President Richard Nixon reading lines from a speech crafted in case the 1969 moon landing failed. (YouTube)

Or, if you don’t want to use your photo for a profile picture, you can default to one of several websites offering hundreds of thousands of AI-generated, photorealistic images of people.
[Image: AI-generated faces. These people don’t exist; the images were generated by artificial intelligence. Credit: Generated Photos, CC BY]

Editing pixel values and the (not so) simple crop

Cropping can greatly alter the context of a photo, too.

We saw this in 2017, when a US government employee edited official pictures of Donald Trump’s inauguration to make the crowd appear bigger, according to The Guardian. The staffer cropped out the empty space “where the crowd ended” for a set of pictures for Trump.
[Image: Views of the crowds at the inaugurations of former US President Barack Obama in 2009 (left) and President Donald Trump in 2017 (right). Credit: AP]

But what about edits that only alter pixel values such as colour, saturation or contrast?

One historical example illustrates the consequences of this. In 1994, Time magazine’s cover considerably “darkened” OJ Simpson’s police mugshot. This added fuel to a case already plagued by racial tension, to which the magazine responded:

No racial implication was intended, by Time or by the artist.
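A pixel-value edit like that darkening takes only a couple of lines of code. Here is a minimal sketch using the Pillow imaging library; the file names are placeholders:

```python
from PIL import Image, ImageEnhance

# Load a photo and darken it: brightness at 60% of the original.
# This changes only pixel values; nothing is added or removed.
img = Image.open("cover_photo.jpg")  # placeholder file name
darkened = ImageEnhance.Brightness(img).enhance(0.6)

# Slightly desaturate as well, another pixel-value-only edit.
desaturated = ImageEnhance.Color(darkened).enhance(0.8)
desaturated.save("cover_photo_edited.jpg")
```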

Tools for debunking digital fakery

For those of us who don’t want to be duped by visual mis/disinformation, there are tools available — although each comes with its own limitations (something we discuss in our recent paper).

Invisible digital watermarking has been proposed as a solution. However, it isn’t widespread and requires buy-in from both content publishers and distributors.
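As an illustration of the concept only, here is a toy sketch of one possible invisible-watermarking scheme, least-significant-bit (LSB) embedding, using Pillow and NumPy. Production watermarks are designed to survive re-compression and editing, which this naive scheme does not:

```python
import numpy as np
from PIL import Image

def embed_bit(src: str, dst: str, bit: int) -> None:
    """Toy watermark: write one bit into the least significant bit
    of every red-channel pixel. The change is invisible to the eye."""
    pixels = np.array(Image.open(src).convert("RGB"))
    pixels[..., 0] = (pixels[..., 0] & 0xFE) | bit  # clear LSB, set ours
    Image.fromarray(pixels).save(dst, format="PNG")  # lossless format

def read_bit(src: str) -> int:
    """Recover the bit by majority vote over red-channel LSBs.
    Re-saving the image as JPEG would destroy the mark, one reason
    naive LSB schemes aren't used for real content provenance."""
    pixels = np.array(Image.open(src).convert("RGB"))
    return int(round(float((pixels[..., 0] & 1).mean())))
```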

Reverse image search (such as Google’s) is often free and can be helpful for identifying earlier, potentially more authentic copies of images online. That said, it’s not foolproof because it:

relies on unedited copies of the media already being online
doesn’t search the entire web
doesn’t always allow filtering by publication time (some reverse image search services, such as TinEye, support this function, but Google’s doesn’t)
returns only exact matches or near-matches, so it’s not thorough. For instance, editing an image and then flipping its orientation can fool Google into thinking it’s an entirely different one, as the sketch below demonstrates.
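That brittleness is easy to demonstrate with a perceptual hash, a common fingerprinting technique for near-duplicate image detection (the exact methods search engines use are proprietary). A minimal sketch using the Pillow and imagehash libraries; the file name is a placeholder:

```python
from PIL import Image
import imagehash

original = Image.open("photo.jpg")  # placeholder file name
mirrored = original.transpose(Image.Transpose.FLIP_LEFT_RIGHT)

# Perceptual hashes of an image and its mirror usually differ
# substantially, so a near-match index treats them as unrelated photos.
distance = imagehash.phash(original) - imagehash.phash(mirrored)
print(f"Hamming distance between fingerprints: {distance}")
```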

Most reliable tools are sophisticated

Meanwhile, manual forensic detection methods for visual mis/disinformation focus mostly on edits visible to the naked eye, or rely on examining features that aren’t included in every image (such as shadows). They’re also time-consuming, expensive and need specialised expertise.

Still, you can access work in this field by visiting sites such as Snopes.com — which has a growing repository of “fauxtography”.

Computer vision and machine learning also offer relatively advanced detection capabilities for images and videos. But they too require technical expertise to operate and understand.

Moreover, improving them involves using large volumes of “training data”, but the image repositories used for this usually don’t contain the real-world images seen in the news.

If you use an image verification tool such as the REVEAL project’s image verification assistant, you might need an expert to help interpret the results.

The good news, however, is that before turning to any of the above tools, there are some simple questions you can ask yourself to potentially figure out whether a photo or video on social media is fake. Think:

was it originally made for social media?
how widely and for how long was it circulated?
what responses did it receive?
who were the intended audiences?

Study: US Adults Who Mostly Rely On Social Media For News Are Less Informed, Exposed To More Conspiracy Theories

According to a new report from Pew Research, U.S. adults who get their news largely from social media platforms tend to follow the news less closely and end up less informed on several key subjects when compared to those who use other sources, like TV, radio, and news publications.

The firm first asked people how they most commonly get their news. About one-in-five (18%) said they mostly use social media to stay current. That’s close to the percentages of those who say they use local TV (16%) or cable TV (16%) news, but fewer than those who say they go directly to a news website or app (25%). Another 13% said they use network TV and only 3% said they read a newspaper. To be clear, any study that asks users to self-report how they do something isn’t going to be as useful as those that collect hard data on what consumers actually do. In other words, people who think they’re getting most of their news from TV may be, in reality, undercounting the time they spend on social media, or vice versa.

That said, among this group of “primarily” social media news consumers, only 8% said they were following the key news story of the 2020 U.S. election “very closely,” compared with 37% of cable TV viewers who said the same, or the 33% of print users who also said this. The social media group, on this topic, was closer to the local TV group (11%). On the topic of the Coronavirus outbreak, only around a quarter (23%) of the primarily social media news consumers said they were following news of COVID-19 “very closely.” All other groups again reported a higher percentage, including those who primarily used cable TV (50%), national network TV (50%), news websites and apps (44%), and local TV (32%) for news.

Related to this finding, the survey respondents were also asked 29 different fact-based questions about news topics from recent days, including those on Trump’s impeachment, the COVID-19 outbreak, and others. Those who scored the lowest on these topics were the consumers who said they primarily used social media to get their news. Across 9 questions related to foundational political knowledge, only 17% of primarily social media news consumers scored “high political knowledge,” meaning they got 8 to 9 of the questions right. 27% scored “middle political knowledge” (6-7 right) and 57% scored “low political knowledge” (5 or fewer right). The only group that did worse was those who primarily relied on local TV. Meanwhile, 45% of those who got their news primarily from news websites and apps had “high political knowledge,” compared with 42% for radio, 41% for print, 35% for cable TV, and 29% for network TV. The social media group of news consumers was also more exposed to fringe conspiracy theories, like the idea that the pandemic was intentionally planned.

How Google Ruined the Internet

Remember that story about the Polish dentist who pulled out all of her ex-boyfriend’s teeth in an act of revenge? It was complete and utter bullshit. 100% fabricated. No one knows who wrote it. Nevertheless, it was picked up by Fox News, the Los Angeles Times and many other publishers. That was eight years ago, yet when I search now for “dentist pulled ex boyfriends teeth,” I get a featured snippet that quotes ABC News’ original, uncorrected story.

Who invented the fidget spinner? Ask Google Assistant and it will tell you that Catherine Hettinger did: a conclusion based on poorly reported stories from The Guardian, The New York Times and other major news outlets. Bloomberg’s Joshua Brustein clearly demonstrated that Ms. Hettinger did not invent the low-friction toy. Nevertheless, ask Google Assistant “who really invented the fidget spinner?” and you’ll get the same answer: Catherine Hettinger.

In 1998, the velocity of information was slow and the cost of publishing it was high (even on the web). Google leveraged those realities to make the best information retrieval system in the world. Today, information is free, plentiful and fast moving; somewhat by design, Google has become a card catalog that is constantly being reordered by an angry, misinformed mob. The web was supposed to forcefully challenge our opinions and push back, like a personal trainer who doesn’t care how tired you say you are. Instead, Google has become like the pampering robots in WALL-E, giving us what we want at the expense of what we need. But it’s not our bodies that are turning into mush: it’s our minds.

40% of Anti-Vaccine Group’s Funding Came From Wealthy ‘Alternative Health’ Vendor

The nation’s oldest anti-vaccine advocacy group often emphasizes that it is supported primarily by small donations and concerned parents, describing its founder as the leader of a “national, grass roots movement.” But over the past decade a single donor has contributed more than $2.9 million to the National Vaccine Information Center, accounting for about 40 percent of the organization’s funding, according to the most recent available tax records.

That donor, osteopathic physician Joseph Mercola, has amassed a fortune selling natural health products, court records show, including vitamin supplements, some of which he claims are alternatives to vaccines.

In recent years, the center has been at the forefront of a movement that has led some parents to forgo or delay immunizing their children against vaccine-preventable diseases such as measles. Health officials say falling vaccination rates contributed to the infectious virus sickening more than 1,200 people in the United States this year, the largest number in more than 25 years. Measles outbreaks are surging worldwide, including in Samoa — where nearly 80 people have died since mid-October, the great majority of them young children and infants… The group claimed credit this year for helping to defeat legislation in a dozen states that would have made it harder for parents to opt out of vaccinating their children…

Mercola, whose claims about other products have drawn warnings from regulators, has also given at least $4 million to several groups that echo the anti-vaccine message. His net worth, derived largely from his network of private companies, has grown to “in excess of $100 million,” he said in a 2017 affidavit.

In 2010, Mercola’s site and the anti-vaccination group “launched a website that tracks vaccine-related legislation in every state. The site provides activists with detailed information, including how to sign up for public comment to support or oppose legislation in their state, where to park to attend a public hearing and what color T-shirt to wear to rallies…”

“In 2016, in response to a complaint from the Federal Trade Commission, Mercola refunded nearly $2.6 million to more than 1,300 people who bought tanning beds that he claimed could reduce the risk of skin cancer.”

Renée DiResta: The Lunatics are Running the Asylum