Resources

In Fast-Moving Pandemic, Sources of Falsehoods Spread by Text, Email, WhatsApp and TikTok

Misleading text messages claiming that President Trump was going to announce a national quarantine buzzed into cellphones across the country over the weekend, underscoring how rapidly false claims are spreading — and how often it is happening beyond the familiar misinformation vehicles of Facebook and Twitter. The false texts spread so widely that on Sunday night the White House’s National Security Council, fearing the texts were an attempt to spook the stock market as it opened Monday, decided to directly debunk the misleading claims in a Twitter post: “Text message rumors of a national #quarantine are FAKE. There is no national lockdown.” But by then the messages already had spread widely, as had similar ones both in the United States and Europe in recent days. Text messages, encrypted communication apps such as WhatsApp and some social media platforms have carried similarly alarming misinformation, much of it with the apparent goal of spurring people to overrun stores to buy basic items ahead of a new wave of government restrictions.

The one claiming that Trump was going to impose a national quarantine included the advice: “Stock up on whatever you guys need to make sure you have a two week supply of everything. Please forward to your network.” In fact, authorities have warned against aggressive buying that could disrupt supply chains and fuel panic. Trump addressed the misleading text messages at an afternoon news conference Monday, saying, “It could be that you have some foreign groups that are playing games.” On the possibility of a national quarantine, Trump said: “We haven’t determined to do that at all. … Hopefully we won’t have to.”


Companies Are Using AI-Generated People To Appear More “Diverse”

AI startups are selling images of computer-generated faces that look like the real thing, offering companies a chance to create imaginary models and “increase diversity” in their ads without needing human beings. One firm is offering to sell diverse photos for marketing brochures and has already signed up clients, including a dating app that intends to use the images in a chatbot. Another company says it’s moving past AI-generated headshots and into the generation of full, fake human bodies as early as this month. The AI software used to create such faces is freely available and improving rapidly, allowing small start-ups to easily create fakes that are so convincing they can fool the human eye. The systems train on massive databases of actual faces, then attempt to replicate their features in new designs. But AI experts worry that the fakes will empower a new generation of scammers, bots and spies, who could use the photos to build imaginary online personas, mask bias in hiring and damage efforts to bring diversity to industries. The fact that such software now has a business model could also fuel a greater erosion of trust across an Internet already under assault by disinformation campaigns, “deepfake” videos and other deceptive techniques.
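The generation step described above can be pictured as sampling: a trained model maps random latent vectors to image outputs, so every draw is a novel face rather than a copy from the training set. A toy sketch of just that sampling step — the linear "generator" here is a stand-in assumption, not a real trained network like StyleGAN:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "trained" generator: a fixed linear map from a
# 64-dimensional latent space to a flattened 32x32 grayscale image.
# In a real system this would be a deep network trained adversarially
# on a massive database of actual faces.
LATENT_DIM, IMG_PIXELS = 64, 32 * 32
G = rng.normal(size=(LATENT_DIM, IMG_PIXELS))

def generate_faces(n):
    """Sample n random latent vectors and map each to an image vector."""
    z = rng.normal(size=(n, LATENT_DIM))  # random latent codes
    imgs = np.tanh(z @ G)                 # squash to [-1, 1] pixel range
    return imgs

faces = generate_faces(5)
assert faces.shape == (5, IMG_PIXELS)
# Each latent draw yields a distinct output: no two of the five
# "faces" are identical, and none existed before.
assert len({f.tobytes() for f in faces}) == 5
```

Because outputs are sampled rather than retrieved, a seller can produce unlimited "new people" on demand — which is exactly what makes the scam and fake-persona scenarios above cheap.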


The Rise of the Deepfake and the Threat to Democracy

Deepfakes posted on the internet in the past two years have alarmed many observers, who believe the technology could be used to disgrace politicians and even swing elections. Democracies appear to be gravely threatened by the speed at which disinformation can be created and spread via social media, where the incentive to share the most sensationalist content outweighs the incentive to perform the tiresome work of verification.

Last month, a digitally altered video showing Nancy Pelosi, the speaker of the US House of Representatives, appearing to slur drunkenly through a speech was widely shared on Facebook and YouTube. Trump then posted the clip on Twitter with the caption: “PELOSI STAMMERS THROUGH NEWS CONFERENCE”. The video was quickly debunked, but not before it had been viewed millions of times; the president did not delete his tweet, which at the time of writing has received nearly 98,000 likes. Facebook declined to take down the clip, qualifying its decision with the statement: “Once the video was fact-checked as false, we dramatically reduced its distribution.”

In response, a team including the artists Bill Posters and Daniel Howe two weeks ago posted a video on Instagram, in which Facebook founder Mark Zuckerberg boasts that he has “total control of billions of people’s stolen data, all their secrets, their lives, their futures”.

In May 2018, a Flemish socialist party called sp.a posted a deepfake video to its Twitter and Facebook pages showing Trump appearing to taunt Belgium for remaining in the Paris climate agreement. The video, which remains on the party’s social media, is a poor forgery: Trump’s hair is curiously soft-focus, while his mouth moves with a Muppet-like elasticity. Indeed, the video concludes with Trump saying: “We all know that climate change is fake, just like this video,” although this sentence alone is not subtitled in Flemish Dutch. (The party declined to comment, but a spokesperson previously told the site Politico that it commissioned the video to “draw attention to the necessity to act on climate change”.)

But James [founder of the YouTube channel ‘derpfakes’, which publishes deepfake videos] believes forgeries may have gone undetected. “The idea that deepfakes have already been used politically isn’t so farfetched,” he says. “It could be the case that deepfakes have already been widely used for propaganda.”


Facebook CEO Mark Zuckerberg Says He Fears ‘Erosion of Truth’ But Defends Allowing Politicians To Lie in Ads

Facebook chief executive Mark Zuckerberg said in an interview he worries “about an erosion of truth” online but defended the policy that allows politicians to peddle ads containing misrepresentations and lies on his social network, a stance that has sparked an outcry during the 2020 presidential campaign. From a report:

“People worry, and I worry deeply, too, about an erosion of truth,” Zuckerberg told The Washington Post ahead of a speech Thursday at Georgetown University. “At the same time, I don’t think people want to live in a world where you can only say things that tech companies decide are 100 percent true. And I think that those tensions are something we have to live with.” Zuckerberg’s approach to political speech has come under fire in recent weeks. Democrats have taken particular issue with Facebook’s decision to allow an ad from President Trump’s 2020 campaign that included falsehoods about former vice president Joe Biden and his son, Hunter. Sen. Elizabeth Warren responded to Facebook’s decision by running her own campaign ad, satirically stating that Zuckerberg supports Trump for re-election.

Zuckerberg framed the issue as part of a broader debate over free expression, warning about the dangers of social networks, including Facebook, “potentially cracking down too much.” He called on the U.S. to set an example for tailored regulation in contrast to other countries, including China, that censor political speech online. And Zuckerberg stressed Facebook must stand strong against governments that seek to “pull back” on free speech in the face of heightened social and political tensions. Zuckerberg’s appearance in Washington marks his most forceful attempt to articulate his vision for how governments and tech giants should approach the Web’s most intractable problems. The scale of Facebook and its affiliated apps, Instagram and WhatsApp, which make up a virtual community of billions of users, poses challenges for Zuckerberg and regulators around the world as they struggle to contain hate speech, falsehoods, violent imagery and terrorist propaganda on social media.


Fake fingerprints can imitate real ones in biometric systems

Researchers have used a neural network to generate artificial fingerprints that work as a “master key” for biometric identification systems, proving that convincing fake fingerprints can be created.

According to a paper presented at a security conference in Los Angeles, the artificially generated fingerprints, dubbed “DeepMasterPrints” by the researchers from New York University, were able to imitate more than one in five fingerprints in a biometric system that should only have an error rate of one in a thousand.

The researchers, led by NYU’s Philip Bontrager, say that “the underlying method is likely to have broad applications in fingerprint security as well as fingerprint synthesis.” As with much security research, demonstrating flaws in existing authentication systems is considered to be an important part of developing more secure replacements in the future.

In order to work, the DeepMasterPrints take advantage of two properties of fingerprint-based authentication systems. The first is that, for ergonomic reasons, most fingerprint readers do not read the entire finger at once, instead imaging whichever part of the finger touches the scanner.

Crucially, such systems do not blend all the partial images in order to compare the full finger against a full record; instead, they simply compare the partial scan against the partial records. That means that an attacker has to match just one of tens or hundreds of saved partial fingerprints in order to be granted access.

The second is that some features of fingerprints are more common than others. That means that a fake print that contains a lot of very common features is more likely to match with other fingerprints than pure chance would suggest.

Based on those insights, the researchers used a common machine learning technique, called a generative adversarial network, to artificially create new fingerprints that matched as many partial fingerprints as possible.

The neural network not only allowed them to create multiple fingerprint images, it also created fakes which look convincingly like a real fingerprint to a human eye – an improvement on a previous technique, which created jagged, right-angled fingerprints that would fool a scanner but not a visual inspection.

They compare the method to a “dictionary attack” against passwords, where a hacker runs a pre-generated list of common passwords against a security system.

Such attacks may not be able to break into any specific account, but when used against accounts at scale, they generate enough successes to be worth the effort.
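The arithmetic behind the two properties above can be sketched: if one comparison has false-match probability p, and the reader accepts a match against any of n stored partial records, the per-finger success probability is 1 − (1 − p)^n, and trying k different "master prints" compounds it further. The 0.1% error rate comes from the article; n and k are hypothetical, and this independence assumption gives only the pure-chance baseline — DeepMasterPrints beat it precisely by packing in common features that this model ignores:

```python
def match_probability(p, n):
    """Probability a fake matches at least one of n stored partial
    records, given per-comparison false-match rate p (independence
    assumed)."""
    return 1 - (1 - p) ** n

# Per-comparison false-match rate of 1 in 1,000 (the security level
# cited in the article).
p = 0.001

# Hypothetical: a reader stores 10 partial templates per enrolled finger.
per_finger = match_probability(p, 10)
print(f"one fake vs. one finger: {per_finger:.4f}")   # ~0.01, 10x the nominal rate

# Dictionary attack: try k distinct master prints against the same finger.
k = 20
dictionary = 1 - (1 - per_finger) ** k
print(f"{k} fakes vs. one finger: {dictionary:.4f}")  # ~0.18
```

Even at pure chance, partial matching multiplies the attacker's odds well past the advertised one-in-a-thousand; engineering the fakes to favour common features pushes them higher still, toward the one-in-five figure the researchers report.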


Efforts grow to help students evaluate what they see online

Alarmed by the proliferation of false content online, state lawmakers [in the United States] are pushing schools to put more emphasis on teaching students how to tell fact from fiction.

Lawmakers in several states have introduced or passed bills calling on public school systems to do more to teach media literacy skills that they say are critical to democracy. The effort has been bipartisan but has received little attention despite successful legislation in Washington state, Connecticut, Rhode Island and New Mexico.

Advocates say the K-12 curriculum has not kept pace with rapid changes in technology. Studies show many children spend hours every day online but struggle to comprehend the content that comes at them.

For years, they have pushed schools to incorporate media literacy — including the ability to evaluate and analyze sources of information — into lesson plans in civics, language arts, science and other subjects.


An AI-Powered App Has Resulted in an Explosion of Convincing Face-Swap Porn

In December, Motherboard discovered a Redditor named ‘deepfakes’ quietly enjoying his hobby: Face-swapping celebrity faces onto porn performers’ bodies. He made several convincing porn videos of celebrities — including Gal Gadot, Maisie Williams, and Taylor Swift — using a machine learning algorithm, his home computer, publicly available videos, and some spare time. Since we first wrote about deepfakes, the practice of producing AI-assisted fake porn has exploded. More people are creating fake celebrity porn using machine learning, and the results have become increasingly convincing. A Redditor even created an app specifically designed to allow users without a computer science background to create AI-assisted fake porn. All the tools one needs to make these videos are free, readily available, and accompanied by instructions that walk novices through the process.

An incredibly easy-to-use application for DIY fake videos—of sex and revenge porn, but also political speeches and whatever else you want—that moves and improves at this pace could have society-changing impacts in the ways we consume media. The combination of powerful, open-source neural network research, our rapidly eroding ability to discern truth from fake news, and the way we spread news through social media has set us up for serious consequences.


Adobe is working on ‘Photoshop for audio’ that will let you add words someone never said to recordings

“Adobe is working on a new piece of software that would act like a Photoshop for audio, according to Adobe developer Zeyu Jin, who spoke at the Adobe MAX conference in San Diego, California today. The software is codenamed Project VoCo, and it’s not clear at this time when it will materialize as a commercial product.

Like Photoshop, Project VoCo is designed to be a state-of-the-art audio editing application. The standout feature, however, is the ability to add words not originally found in the audio file. Beyond your standard speech editing and noise cancellation features, Project VoCo can also apparently generate new words using a speaker’s recorded voice. Essentially, the software can understand the makeup of a person’s voice and replicate it, so long as there’s about 20 minutes of recorded speech.

In Jin’s demo, the developer showcased how Project VoCo let him add a word to a sentence in a near-perfect replication of the speaker, according to Creative Bloq. So similar to how Photoshop ushered in a new era of editing and image creation, this tool could transform how audio engineers work with sound, polish clips, and clean up recordings and podcasts.”

“When recording voiceovers, dialog, and narration, people would often like to change or insert a word or a few words due to either a mistake they made or simply because they would like to change part of the narrative,” reads an official Adobe statement. “We have developed a technology called Project VoCo in which you can simply type in the word or words that you would like to change or insert into the voiceover. The algorithm does the rest and makes it sound like the original speaker said those words.”
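A crude way to picture the editing step — emphatically not Adobe's algorithm, which synthesises words the speaker never recorded from a learned voice model — is to treat a recording as word-aligned audio snippets and splice a replacement in. The snippet store and alignment below are entirely hypothetical:

```python
import numpy as np

# Hypothetical word-aligned "recording": each word maps to its audio
# samples. VoCo goes further and generates *unseen* words from ~20
# minutes of a speaker's voice; this toy can only splice words that
# were actually recorded.
snippets = {
    "i":     np.full(100, 0.1),
    "never": np.full(160, 0.2),
    "said":  np.full(120, 0.3),
    "that":  np.full(110, 0.4),
}

def render(words):
    """Concatenate per-word audio snippets into one waveform."""
    return np.concatenate([snippets[w] for w in words])

original = render(["i", "said", "that"])
edited   = render(["i", "never", "said", "that"])  # type a word, splice it in
assert len(edited) == len(original) + len(snippets["never"])
```

The leap VoCo represents is removing the "only words you already have" restriction: once the voice itself is modelled, the edit is just typing.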

Imagine this technology coupled with video manipulation, which also already exists as a working proof of concept. One could create convincing, entirely fabricated audio and video of a person’s likeness…


An alarming number of people rely *solely* on a Social Media network for news

Note the stats from the Pew Research Center for Journalism and Media: 64% of users surveyed rely on just one social media source for news content (i.e. Facebook, Twitter, YouTube, etc.), while 26% check only two sources, and just 10% three or more. A staggeringly concerning trend, given the rampant personalisation of these screen environments and what we know about the functioning and reinforcement of the Filter Bubble. This is a centralisation of power, and a lack of diversity, that the “old media” perhaps could only dream of…

From The Huffington Post:

“It’s easy to believe you’re getting diverse perspectives when you see stories on Facebook. You’re connected not just to many of your friends, but also to friends of friends, interesting celebrities and publications you “like.”

But Facebook shows you what it thinks you’ll be interested in. The social network pays attention to what you interact with, what your friends share and comment on, and overall reactions to a piece of content, lumping all of these factors into an algorithm that serves you items you’re likely to engage with. It’s a simple matter of business: Facebook wants you coming back, so it wants to show you things you’ll enjoy.”
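The ranking logic that paragraph describes can be caricatured as a weighted engagement score: the signals are the ones named (your interactions, friends' shares and comments, overall reactions), but the weights and structure here are invented for illustration — the real model is vastly more complex:

```python
# Hypothetical engagement-ranking sketch. Weights are invented.
WEIGHTS = {"user_affinity": 3.0, "friend_shares": 2.0,
           "friend_comments": 2.5, "total_reactions": 0.01}

def score(story):
    """Predicted engagement: a weighted sum of interaction signals."""
    return sum(WEIGHTS[k] * story[k] for k in WEIGHTS)

def rank_feed(stories):
    """Show the stories you're most likely to engage with first."""
    return sorted(stories, key=score, reverse=True)

feed = rank_feed([
    {"id": "local-news", "user_affinity": 0.2, "friend_shares": 1,
     "friend_comments": 0, "total_reactions": 500},
    {"id": "viral-meme", "user_affinity": 0.9, "friend_shares": 5,
     "friend_comments": 8, "total_reactions": 12000},
])
assert feed[0]["id"] == "viral-meme"
```

Note what is absent from the objective: accuracy, diversity of viewpoint, public interest. Optimising purely for predicted engagement is the mechanism behind the filter bubble the surrounding text describes.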

BBC also reported earlier this year that Social Media networks outstripped television as the news source for young people (emphasis added):

“Of the 18-to-24-year-olds surveyed, 28% cited social media as their main news source, compared with 24% for TV.

The Reuters Institute for the Study of Journalism research also suggests 51% of people with online access use social media as a news source. Facebook and other social media outlets have moved beyond being “places of news discovery” to become the place people consume their news, it suggests.

The study found Facebook was the most common source—used by 44% of all those surveyed—to watch, share and comment on news. Next came YouTube on 19%, with Twitter on 10%. Apple News accounted for 4% in the US and 3% in the UK, while messaging app Snapchat was used by just 1% or less in most countries.

According to the survey, consumers are happy to have their news selected by algorithms, with 36% saying they would like news chosen based on what they had read before and 22% happy for their news agenda to be based on what their friends had read. But 30% still wanted the human oversight of editors and other journalists in picking the news agenda and many had fears about algorithms creating news “bubbles” where people only see news from like-minded viewpoints.

Most of those surveyed said they used a smartphone to access news, with the highest levels in Sweden (69%), Korea (66%) and Switzerland (61%), and they were more likely to use social media rather than going directly to a news website or app.

The report also suggests users are noticing the original news brand behind social media content less than half of the time, something that is likely to worry traditional media outlets.”

And to exemplify the issue, these words from Slashdot: “Over the past few months, we have seen how Facebook’s Trending Topics feature is often biased, and moreover, how sometimes fake news slips through its filter.”

“The Washington Post monitored the website for over three weeks and found that Facebook is still struggling to get its algorithm right. In the six weeks since Facebook revamped its Trending system, the site has repeatedly promoted “news” stories that are actually works of fiction. As part of a larger audit of Facebook’s Trending topics, the Intersect logged every news story that trended across four accounts during the workdays from Aug. 31 to Sept. 22. During that time, we uncovered five trending stories that were indisputably fake and three that were profoundly inaccurate. On top of that, we found that news releases, blog posts from sites such as Medium and links to online stores such as iTunes regularly trended.”

UPDATE 9/11/16 — US President Barack Obama criticises Facebook for spreading fake stories: “The way campaigns have unfolded, we just start accepting crazy stuff as normal,” Obama said. “As long as it’s on Facebook, and people can see it, as long as it’s on social media, people start believing it, and it creates this dust cloud of nonsense.”
