Resources

Deep Fake Scams Growing in Global Frequency and Sophistication, Victim Warns

In an elaborate scam in January, “a finance worker was duped into attending a video call with people he believed were the chief financial officer and other members of staff,” reports CNN. But Hong Kong police later said that all of them turned out to be deepfake re-creations, which duped the employee into transferring $25 million.
According to police, the worker had initially suspected he had received a phishing email from the company’s UK office, as it specified the need for a secret transaction to be carried out. However, the worker put aside his doubts after the video call because other people in attendance had looked and sounded just like colleagues he recognized.

Now the targeted company has been revealed: a major engineering consulting firm with 18,500 employees across 34 offices:
A spokesperson for London-based Arup told CNN on Friday that it notified Hong Kong police in January about the fraud incident, and confirmed that fake voices and images were used. “Unfortunately, we can’t go into details at this stage as the incident is still the subject of an ongoing investigation. However, we can confirm that fake voices and images were used,” the spokesperson said in an emailed statement. “Our financial stability and business operations were not affected and none of our internal systems were compromised,” the person added…

Authorities around the world are growing increasingly concerned about the sophistication of deepfake technology and the nefarious uses it can be put to. In an internal memo seen by CNN, Arup’s East Asia regional chairman, Michael Kwok, said the “frequency and sophistication of these attacks are rapidly increasing globally, and we all have a duty to stay informed and alert about how to spot different techniques used by scammers.”

The company’s global CIO emailed CNN this statement: “Like many other businesses around the globe, our operations are subject to regular attacks, including invoice fraud, phishing scams, WhatsApp voice spoofing, and deepfakes.

“What we have seen is that the number and sophistication of these attacks has been rising sharply in recent months.”

UK Opposition Leader Targeted By AI-Generated Fake Audio Smear

An audio clip posted to social media on Sunday, purporting to show Britain’s opposition leader Keir Starmer verbally abusing his staff, has been debunked: both private-sector and British government analysis found it to be AI-generated. The audio of Keir Starmer was posted on X (formerly Twitter) by a pseudonymous account on Sunday morning, the opening day of the Labour Party conference in Liverpool. The account asserted that the clip, which has now been viewed more than 1.4 million times, was genuine, and that its authenticity had been corroborated by a sound engineer.

Ben Colman, the co-founder and CEO of Reality Defender — a deepfake detection business — disputed this assessment when contacted by Recorded Future News: “We found the audio to be 75% likely manipulated based on a copy of a copy that’s been going around (a transcoding). As we don’t have the ground truth, we give a probability score (in this case 75%) and never a definitive score (‘this is fake’ or ‘this is real’), leaning much more towards ‘this is likely manipulated’ than not,” said Colman. “It is also our opinion that the creator of this file added background noise to attempt evasion of detection, but our system accounts for this as well,” he said.

Microsoft’s new AI can simulate anyone’s voice with 3 seconds of audio

Microsoft researchers announced a new text-to-speech AI model called VALL-E that can closely simulate a person’s voice when given a three-second audio sample. Once it learns a specific voice, VALL-E can synthesize audio of that person saying anything — and do it in a way that attempts to preserve the speaker’s emotional tone. Its creators speculate that VALL-E could be used for high-quality text-to-speech applications, speech editing where a recording of a person could be edited and changed from a text transcript (making them say something they originally didn’t), and audio content creation when combined with other generative AI models like GPT-3.

Microsoft calls VALL-E a “neural codec language model,” and it builds off of a technology called EnCodec, which Meta announced in October 2022. Unlike other text-to-speech methods that typically synthesize speech by manipulating waveforms, VALL-E generates discrete audio codec codes from text and acoustic prompts. It basically analyzes how a person sounds, breaks that information into discrete components (called “tokens”) thanks to EnCodec, and uses training data to match what it “knows” about how that voice would sound if it spoke other phrases outside of the three-second sample. Or, as Microsoft puts it in the VALL-E paper (PDF): “To synthesize personalized speech (e.g., zero-shot TTS), VALL-E generates the corresponding acoustic tokens conditioned on the acoustic tokens of the 3-second enrolled recording and the phoneme prompt, which constrain the speaker and content information respectively. Finally, the generated acoustic tokens are used to synthesize the final waveform with the corresponding neural codec decoder.”
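
Microsoft has not released VALL-E, but the codec it builds on, EnCodec, is open source. Below is a minimal sketch, assuming Meta’s encodec package and a placeholder audio file, of just the tokenization stage: turning a waveform into the discrete acoustic tokens the paper describes and decoding them back. It performs no voice cloning.

```python
# Sketch of the EnCodec tokenization stage only (pip install encodec torchaudio).
# VALL-E's language model, which predicts new tokens from text plus a 3-second
# prompt, is not public; nothing here clones a voice.
import torch
import torchaudio
from encodec import EncodecModel
from encodec.utils import convert_audio

# Load the pretrained 24 kHz codec; bandwidth controls how many codebooks are used.
model = EncodecModel.encodec_model_24khz()
model.set_target_bandwidth(6.0)

# "sample.wav" is a placeholder for any short speech clip.
wav, sr = torchaudio.load("sample.wav")
wav = convert_audio(wav, sr, model.sample_rate, model.channels).unsqueeze(0)

with torch.no_grad():
    frames = model.encode(wav)  # list of (codes, scale) tuples

# codes has shape [batch, n_codebooks, time]: the integer "tokens" the article
# mentions, a handful of small numbers per short slice of audio.
codes = torch.cat([c for c, _ in frames], dim=-1)
print(codes.shape)

# The matching decoder reconstructs a waveform from tokens, which is the role
# the paper assigns to the "neural codec decoder".
with torch.no_grad():
    reconstructed = model.decode(frames)
```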

[…] To generate those results, the researchers fed only the three-second “Speaker Prompt” sample and a text string (what they wanted the voice to say) into VALL-E. Compare the “Ground Truth” sample to the “VALL-E” sample: in some cases, the two are very close. Some VALL-E results seem computer-generated, but others could potentially be mistaken for human speech, which is the goal of the model. In addition to preserving a speaker’s vocal timbre and emotional tone, VALL-E can also imitate the “acoustic environment” of the sample audio. For example, if the sample came from a telephone call, the audio output will simulate the acoustic and frequency properties of a telephone call in its synthesized output (that’s a fancy way of saying it will sound like a telephone call, too). And Microsoft’s samples (in the “Synthesis of Diversity” section) demonstrate that VALL-E can generate variations in voice tone by changing the random seed used in the generation process.

Microsoft has not provided VALL-E code for others to experiment with, likely to avoid fueling misinformation and deception.

‘Deepfakes’ of Celebrities Have Begun Appearing in Ads, With or Without Their Permission

Celebrity deepfakes are coming to advertising. Among the recent entries: Last year, Russian telecommunications company MegaFon released a commercial in which a simulacrum of Hollywood legend Bruce Willis helps defuse a bomb. Just last week, Elon Musk seemed to star in a marketing video from real-estate investment startup reAlpha Tech. And last month a promotional video for machine-learning firm Paperspace showed talking semblances of the actors Tom Cruise and Leonardo DiCaprio. None of these celebrities ever spent a moment filming these campaigns. In the cases of Messrs. Musk, Cruise and DiCaprio, they never even agreed to endorse the companies in question. All the videos of digital simulations were created with so-called deepfake technology, which uses computer-generated renditions to make the Hollywood and business notables say and do things they never actually said or did.

Some of the ads are broad parodies, and the meshing of the digital to the analog in the best of cases might not fool an alert viewer. Even so, the growing adoption of deepfake software could eventually shape the industry in profound ways while creating new legal and ethical questions, experts said. Authorized deepfakes could allow marketers to feature huge stars in ads without requiring them to actually appear on-set or before cameras, bringing down costs and opening new creative possibilities. But unauthorized, they create a legal gray area: Celebrities could struggle to contain a proliferation of unauthorized digital reproductions of themselves and the manipulation of their brand and reputation, experts said.

Meta’s New Text-to-Video AI Generator is Like DALL-E for Video

A team of machine learning engineers from Facebook’s parent company Meta has unveiled a new system called Make-A-Video. As the name suggests, this AI model allows users to type in a rough description of a scene, and it will generate a short video matching their text. The videos are clearly artificial, with blurred subjects and distorted animation, but still represent a significant development in the field of AI content generation.

“Generative AI research is pushing creative expression forward by giving people tools to quickly and easily create new content,” said Meta in a blog post announcing the work. “With just a few words or lines of text, Make-A-Video can bring imagination to life and create one-of-a-kind videos full of vivid colors and landscapes.” In a Facebook post, Meta CEO Mark Zuckerberg described the work as “amazing progress,” adding: “It’s much harder to generate video than photos because beyond correctly generating each pixel, the system also has to predict how they’ll change over time.”

Seeing no longer means believing

Manipulated images, whether for entertainment or disinformation, are common on social media. But with millions of images and thousands of hours of video uploaded every day, how do we sort the real from the fake?

If you use social media, the chances are you see (and forward) some of the more than 3.2 billion images and 720,000 hours of video shared daily. When faced with such a glut of content, how can we know what’s real and what’s not? While one part of the solution is an increased use of content verification tools, it’s equally important we all boost our digital media literacy. Ultimately, one of the best lines of defence — and the only one you can control — is you.

Misinformation (when you accidentally share false content) and disinformation (when you intentionally share it) in any medium can erode trust in civil institutions such as news organisations, coalitions and social movements. However, fake photos and videos are often the most potent.

For those with a vested political interest, creating, sharing and/or editing false images can distract, confuse and manipulate viewers to sow discord and uncertainty (especially in already polarised environments). Posters and platforms can also make money from the sharing of fake, sensationalist content.

Only 11-25% of journalists globally use social media content verification tools, according to the International Centre for Journalists.

Could you spot a doctored image?

Consider this photo of Martin Luther King Jr (pic.twitter.com/5W38DRaLHr). This altered image clones part of the background over King Jr’s finger, so it looks like he’s flipping off the camera. It has been shared as genuine on Twitter, Reddit and white supremacist websites.

In the original 1964 photo, King flashed the “V for victory” sign after learning the US Senate had passed the civil rights bill.

Beyond adding or removing elements, there’s a whole category of photo manipulation in which images are fused together.

Earlier this year, a photo of an armed man was photoshopped by Fox News, which overlaid the man onto other scenes without disclosing the edits, the Seattle Times reported.

Similarly, the image below was shared thousands of times on social media in January, during Australia’s Black Summer bushfires. The AFP’s fact check confirmed it is not authentic and is actually a combination of several separate photos.

Fully and partially synthetic content

Online, you’ll also find sophisticated “deepfake” videos showing (usually famous) people saying or doing things they never did. Less advanced versions can be created using apps such as Zao and Reface.

A team from the Massachusetts Institute of Technology created this fake video showing US President Richard Nixon reading lines from a speech crafted in case the 1969 moon landing failed. (YouTube)

Or, if you don’t want to use your photo for a profile picture, you can default to one of several websites offering hundreds of thousands of AI-generated, photorealistic images of people.
[Image: AI-generated faces. These people don’t exist; they are images generated by artificial intelligence. Source: Generated Photos, CC BY]
Editing pixel values and the (not so) simple crop

Cropping can greatly alter the context of a photo, too.

We saw this in 2017, when a US government employee edited official pictures of Donald Trump’s inauguration to make the crowd appear bigger, according to The Guardian. The staffer cropped out the empty space “where the crowd ended” for a set of pictures for Trump.
[Image: Views of the crowds at the inaugurations of former US President Barack Obama in 2009 (left) and President Donald Trump in 2017 (right). Source: AP]

But what about edits that only alter pixel values such as colour, saturation or contrast?

One historical example illustrates the consequences of this. In 1994, Time magazine’s cover considerably “darkened” OJ Simpson’s police mugshot. This added fuel to a case already plagued by racial tension, to which the magazine responded:

No racial implication was intended, by Time or by the artist.

Tools for debunking digital fakery

For those of us who don’t want to be duped by visual mis/disinformation, there are tools available — although each comes with its own limitations (something we discuss in our recent paper).

Invisible digital watermarking has been proposed as a solution. However, it isn’t widespread and requires buy-in from both content publishers and distributors.
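
To make the idea concrete, here is a deliberately naive sketch of one invisible watermark, assuming Python’s Pillow library and placeholder file names: a short tag hidden in the least-significant bits of pixel values. Real schemes are far more robust; this toy version doesn’t survive lossy re-compression, which hints at why deployment needs buy-in across the whole publishing chain.

```python
# Naive invisible watermark: hide a short ASCII tag in the least-significant
# bit (LSB) of each pixel's red channel. Illustration only; production schemes
# must survive re-compression, resizing and cropping, which this does not.
from PIL import Image

def embed(path_in: str, path_out: str, tag: str) -> None:
    img = Image.open(path_in).convert("RGB")
    bits = "".join(f"{byte:08b}" for byte in tag.encode("ascii"))
    px, width = img.load(), img.size[0]
    for i, bit in enumerate(bits):
        x, y = i % width, i // width
        r, g, b = px[x, y]
        px[x, y] = ((r & ~1) | int(bit), g, b)  # overwrite red LSB; invisible to the eye
    img.save(path_out, "PNG")  # must be lossless, or the hidden bits are destroyed

def extract(path: str, n_chars: int) -> str:
    img = Image.open(path).convert("RGB")
    px, width = img.load(), img.size[0]
    bits = "".join(str(px[i % width, i // width][0] & 1) for i in range(n_chars * 8))
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)).decode("ascii")

# Placeholder paths; any RGB image with enough pixels for the tag works.
embed("photo.png", "tagged.png", "newsroom-2020")
print(extract("tagged.png", len("newsroom-2020")))
```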

Reverse image search (such as Google’s) is often free and can be helpful for identifying earlier, potentially more authentic copies of images online. That said, it’s not foolproof because it:

relies on unedited copies of the media already being online
doesn’t search the entire web
doesn’t always allow filtering by publication time. Some reverse image search services such as TinEye support this function, but Google’s doesn’t.
returns only exact matches or near-matches, so it’s not thorough. For instance, editing an image and then flipping its orientation can fool Google into thinking it’s an entirely different one (see the sketch after this list).
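
The flipping weakness is easy to reproduce with perceptual hashing, a common first-pass signature for duplicate-image detection (commercial engines’ internals aren’t public, so this is a simplified stand-in). The sketch assumes the Pillow and imagehash Python packages and a placeholder file name:

```python
# Why a mirror flip can evade hash-based matching: perceptual hashes map
# similar images to nearby bit strings, but flipping rearranges the image's
# frequency signature, so the hashes land far apart.
# pip install pillow imagehash
from PIL import Image
import imagehash

original = Image.open("photo.jpg")  # placeholder path
flipped = original.transpose(Image.Transpose.FLIP_LEFT_RIGHT)

h_orig = imagehash.phash(original)
h_flip = imagehash.phash(flipped)

# Subtracting two hashes gives the Hamming distance between them:
# 0 means identical; for the default 64-bit hash, unrelated images
# typically land around 30 or more. A flip usually scores high, so a
# naive matcher treats the flipped copy as a different image.
print("original:", h_orig)
print("flipped: ", h_flip)
print("distance:", h_orig - h_flip)
```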

Most reliable tools are sophisticated

Meanwhile, manual forensic detection methods for visual mis/disinformation focus mostly on edits visible to the naked eye, or rely on examining features that aren’t included in every image (such as shadows). They’re also time-consuming, expensive and need specialised expertise.

Still, you can access work in this field by visiting sites such as Snopes.com — which has a growing repository of “fauxtography”.

Computer vision and machine learning also offer relatively advanced detection capabilities for images and videos. But they too require technical expertise to operate and understand.

Moreover, improving them involves using large volumes of “training data”, but the image repositories used for this usually don’t contain the real-world images seen in the news.

If you use an image verification tool such as the REVEAL project’s image verification assistant, you might need an expert to help interpret the results.

The good news, however, is that before turning to any of the above tools, there are some simple questions you can ask yourself to potentially figure out whether a photo or video on social media is fake. Think:

was it originally made for social media?
how widely and for how long was it circulated?
what responses did it receive?
who were the intended audiences?

A Deepfake Porn Bot Is Being Used to Abuse Thousands of Women

An AI tool that “removes” items of clothing from photos has targeted more than 100,000 women, some of whom appear to be under the age of 18.

The still images of nude women are generated by an AI that “removes” items of clothing from a non-nude photo. Every day the bot sends out a gallery of new images to an associated Telegram channel which has almost 25,000 subscribers. The sets of images are frequently viewed more than 3,000 times. A separate Telegram channel that promotes the bot has more than 50,000 subscribers.

Some of the images produced by the bot are glitchy, but many could pass for genuine. “It is maybe the first time that we are seeing these at a massive scale,” says Giorgio Patrini, CEO and chief scientist at deepfake detection company Sensity, which conducted the research. The company is publicizing its findings in a bid to pressure services hosting the content to remove it, but it is not publicly naming the Telegram channels involved.

The actual number of women targeted by the deepfake bot is likely much higher than 104,000. Sensity was only able to count images shared publicly, and the bot gives people the option to generate photos privately. “Most of the interest for the attack is on private individuals,” Patrini says. “The very large majority of those are for people that we cannot even recognize.”

As a result, it is likely very few of the women who have been targeted know that the images exist. The bot and a number of Telegram channels linked to it are primarily Russian-language but also offer English-language translations. In a number of cases, the images created appear to contain girls who are under the age of 18, Sensity adds, saying it has no way to verify this but has informed law enforcement of their existence.

Unlike other nonconsensual explicit deepfake videos, which have racked up millions of views on porn websites, these images require no technical knowledge to create. The process is automated and can be used by anyone—it’s as simple as uploading an image to any messaging service.

Ad Firms Are Exploring Deepfaked Commercials

“With the pandemic having shut down production, companies are asking ad agencies to create commercials made up of digitally altered footage,” reports the New York Times, citing a State Farm commercial that aired during an ESPN documentary starring the anchor of “SportsCenter,” Kenny Mayne:

The producers made the commercial by layering video of Mr. Mayne’s 60-year-old mouth onto footage of his 38-year-old face. To many viewers, the stunt provided a welcome moment of levity in depressing times. Others were made uneasy by the smoothness of the patch, describing it as a type of deepfake. “We tried to make the joke clear enough so that we weren’t tricking anyone,” said Carrie Brzezinski-Hsu, the head of ESPN CreativeWorks, which created the commercial with the ad agencies Optimum Sports and Translation.

Ms. Brzezinski-Hsu said manipulated footage was likely to appear in future ESPN ads. And executives at several major advertising agencies said they had discussed making similar commercials with their clients in recent weeks. “We’re so restricted in how we can generate content,” said Kerry Hill, the production director for the ad agency FCB in North America. “Anything that can be computer generated is something we’re going to explore.”

Husani Oakley, the chief technology officer of the ad firm Deutsch, said digitally altered ads should somehow clue viewers into the fact that what they are seeing is not completely real. “The technology is here, and it’s only going to get better and better, and we have to get used to it,” he added. “We’re exploring ways to have fun with it.”

First Use of Deepfakes In an Indian Election Campaign

The Delhi Bharatiya Janata Party (BJP) has partnered with political communications firm The Ideaz Factory to create “positive campaigns” using deepfakes to reach different linguistic voter bases, reports Nilesh Christopher via Motherboard. It marks the debut of deepfakes in election campaigns in India.

On February 7, a day ahead of the Legislative Assembly elections in Delhi, two videos of the Bharatiya Janata Party (BJP) President Manoj Tiwari criticizing the incumbent Delhi government of Arvind Kejriwal went viral on WhatsApp. In one video Tiwari speaks English; in the other he speaks the Hindi dialect Haryanvi. “[Kejriwal] cheated us on the basis of promises. But now Delhi has a chance to change it all. Press the lotus button on February 8 to form the Modi-led government,” he said. One may think this 44-second monologue is standard political outreach, but one thing about it isn’t standard: these videos were not real.

“Deepfake technology has helped us scale campaign efforts like never before,” Neelkant Bakshi, co-incharge of social media and IT for BJP Delhi, tells VICE. “The Haryanvi videos let us convincingly approach the target audience even if the candidate didn’t speak the language of the voter.” Tiwari’s fabricated video was used widely to dissuade the large Haryanvi-speaking migrant worker population in Delhi from voting for the rival political party. According to Bakshi, these deepfakes were distributed across 5,800 WhatsApp groups in the Delhi and NCR region, reaching approximately 15 million people.

The Rise of the Deepfake and the Threat to Democracy

Deepfakes posted on the internet in the past two years have alarmed many observers, who believe the technology could be used to disgrace politicians and even swing elections. Democracies appear gravely threatened by the speed at which disinformation can be created and spread via social media, where the incentive to share the most sensationalist content outweighs the incentive to perform the tiresome work of verification.

Last month, a digitally altered video showing Nancy Pelosi, the speaker of the US House of Representatives, appearing to slur drunkenly through a speech was widely shared on Facebook and YouTube. Trump then posted the clip on Twitter with the caption: “PELOSI STAMMERS THROUGH NEWS CONFERENCE”. The video was quickly debunked, but not before it had been viewed millions of times; the president did not delete his tweet, which at the time of writing has received nearly 98,000 likes. Facebook declined to take down the clip, qualifying its decision with the statement: “Once the video was fact-checked as false, we dramatically reduced its distribution.”

In response, a team including the artists Bill Posters and Daniel Howe two weeks ago posted a video on Instagram, in which Facebook founder Mark Zuckerberg boasts that he has “total control of billions of people’s stolen data, all their secrets, their lives, their futures”.

In May 2018, a Flemish socialist party called sp.a posted a deepfake video to its Twitter and Facebook pages showing Trump appearing to taunt Belgium for remaining in the Paris climate agreement. The video, which remains on the party’s social media, is a poor forgery: Trump’s hair is curiously soft-focus, while his mouth moves with a Muppet-like elasticity. Indeed, the video concludes with Trump saying: “We all know that climate change is fake, just like this video,” although this sentence alone is not subtitled in Flemish Dutch. (The party declined to comment, but a spokesperson previously told the site Politico that it commissioned the video to “draw attention to the necessity to act on climate change”.)

But James [founder of the YouTube channel “derpfakes”, which publishes deepfake videos] believes forgeries may have gone undetected. “The idea that deepfakes have already been used politically isn’t so farfetched,” he says. “It could be the case that deepfakes have already been widely used for propaganda.”

Women, Not Democracy, Are the Main Victims of Deepfakes

While the 2020 U.S. presidential elections have lawmakers on edge over AI-generated fake videos, a new study by Netherlands-based deepfake-detection outfit Deeptrace shows that the main victims today are women. According to Deeptrace, deepfake videos have exploded in the past year, rising from 8,000 in December 2018 to 14,678 today. And not surprisingly for the internet, nearly all of the material is pornography, which accounts for 96% of the deepfake videos it’s found online. The fake videos have been viewed 134 million times.

The numbers suggest deepfake porn is still niche but also growing quickly. Additionally, 90% of the fake content depicted women from the U.S., UK, and Canada, while 2% represented women from South Korea and 2% depicted women from Taiwan. “Deepfake pornography is a phenomenon that exclusively targets and harms women,” the company notes. The small number of non-pornographic deepfake videos it analyzed on YouTube mostly (61%) contained synthesized male subjects. According to Henry Ajder, a researcher at Deeptrace, most deepfake porn currently involves famous women, but he reckons the threat to all women is likely to increase as it becomes less computationally expensive to create deepfakes. As for the political threat, there actually aren’t that many cases where deepfakes have changed a political outcome.

New Deepfake Algorithm Allows You To Text-Edit the Words of a Speaker In a Video

It is now possible to take a talking-head style video, and add, delete or edit the speaker’s words as simply as you’d edit text in a word processor. A new deepfake algorithm can process the audio and video into a new file in which the speaker says more or less whatever you want them to. New Atlas reports:

It’s the work of a collaborative team from Stanford University, Max Planck Institute for Informatics, Princeton University and Adobe Research, who say that in a perfect world the technology would be used to cut down on expensive re-shoots when an actor gets something wrong, or a script needs to be changed. In order to learn the face movements of a speaker, the algorithm requires about 40 minutes of training video, and a transcript of what’s being said, so it’s not something that can be thrown onto a short video snippet and run if you want good results. That 40 minutes of video gives the algorithm the chance to work out exactly what face shapes the subject is making for each phonetic syllable in the original script.

From there, once you edit the script, the algorithm can then create a 3D model of the face making the new shapes required. And from there, a machine learning technique called Neural Rendering can paint the 3D model over with photo-realistic textures to make it look basically indistinguishable from the real thing. Other software such as VoCo can be used if you wish to generate the speaker’s audio as well as video, and it takes the same approach, by breaking down a heap of training audio into phonemes and then using that dataset to generate new words in a familiar voice.
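
The research code is not public, but the transcript-to-phoneme step it relies on can be sketched with an off-the-shelf grapheme-to-phoneme library such as g2p_en. The phoneme-to-viseme table below is a hypothetical simplification for illustration; the published system instead learns each speaker’s actual mouth shapes from its roughly 40 minutes of training video.

```python
# Sketch of the transcript-to-phoneme step (pip install g2p-en). The viseme
# lookup is a toy, hypothetical mapping: real pipelines group the ~40 ARPAbet
# phonemes into a dozen or so mouth shapes learned per speaker.
from g2p_en import G2p

g2p = G2p()
# Break an edited line of script into ARPAbet phonemes, dropping word gaps.
phonemes = [p for p in g2p("Edit the script and the face follows.") if p.strip()]
print(phonemes)  # e.g. ['EH1', 'D', 'AH0', 'T', ...]

# Hypothetical phoneme-to-viseme lookup for a few sounds; everything else
# falls back to a neutral mouth shape.
VISEMES = {
    "P": "lips_closed", "B": "lips_closed", "M": "lips_closed",
    "F": "lip_teeth", "V": "lip_teeth",
    "AA1": "open_wide", "EH1": "mid_open",
}
mouth_shapes = [VISEMES.get(p, "neutral") for p in phonemes]
print(mouth_shapes)
```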

It’s Getting Harder to Spot a Deep Fake Video

Fake videos and audio keep getting better, faster and easier to make, increasing the mind-blowing technology’s potential for harm if put in the wrong hands. Bloomberg QuickTake explains how good deep fakes have gotten in the last few months, and what’s being done to counter them.

Actors Are Digitally Preserving Themselves To Continue Their Careers Beyond the Grave

Improvements in CGI mean neither age nor death need stop some performers from working. From a report:

From Carrie Fisher in Rogue One: A Star Wars Story to Paul Walker in the Fast & Furious movies, dead and magically “de-aged” actors are appearing more frequently on movie screens. Sometimes they even appear on stage: next year, an Amy Winehouse hologram will be going on tour to raise money for a charity established in the late singer’s memory. Some actors and movie studios are buckling down and preparing for an inevitable future when using scanning technology to preserve 3-D digital replicas of performers is routine. Just because your star is inconveniently dead doesn’t mean your generation-spanning blockbuster franchise can’t continue to rake in the dough. Get the tech right and you can cash in on superstars and iconic characters forever.

[…]

For celebrities, these scans are a chance to make money for their families post mortem, extend their legacy — and even, in some strange way, preserve their youth. Visual-effects company Digital Domain — which has worked on major pictures like Avengers: Infinity War and Ready Player One — has also taken on individual celebrities as clients, though it hasn’t publicized the service. “We haven’t, you know, taken out any ads in newspapers to ‘Save your likeness,'” says Darren Hendler, director of the firm’s Digital Humans Group. The suite of services that the company offers actors includes a range of different scans to capture their famous faces from every conceivable angle — making it simpler to re-create them in the future. Using hundreds of custom LED lights arranged in a sphere, numerous images can be recorded in seconds capturing what the person’s face looks like lit from every angle — and right down to the pores.
