Resources

Ad Firms Are Exploring Deepfaked Commercials

“With the pandemic having shut down production, companies are asking ad agencies to create commercials made up of digitally altered footage,” reports the New York Times, citing a State Farm commercial that aired during an ESPN documentary starring the anchor of “SportsCenter,” Kenny Mayne:

The producers made the commercial by layering video of Mr. Mayne’s 60-year-old mouth onto footage of his 38-year-old face. To many viewers, the stunt provided a welcome moment of levity in depressing times. Others were made uneasy by the smoothness of the patch, describing it as a type of deepfake. “We tried to make the joke clear enough so that we weren’t tricking anyone,” said Carrie Brzezinski-Hsu, the head of ESPN CreativeWorks, which created the commercial with the ad agencies Optimum Sports and Translation.

Ms. Brzezinski-Hsu said manipulated footage was likely to appear in future ESPN ads. And executives at several major advertising agencies said they had discussed making similar commercials with their clients in recent weeks. “We’re so restricted in how we can generate content,” said Kerry Hill, the production director for the ad agency FCB in North America. “Anything that can be computer generated is something we’re going to explore.”

Husani Oakley, the chief technology officer of the ad firm Deutsch, said digitally altered ads should somehow clue viewers into the fact that what they are seeing is not completely real. “The technology is here, and it’s only going to get better and better, and we have to get used to it,” he added. “We’re exploring ways to have fun with it.”

First Use of Deepfakes In an Indian Election Campaign

The Delhi Bharatiya Janata Party (BJP) has partnered with political communications firm The Ideaz Factory to create “positive campaigns” using deepfakes to reach different linguistic voter bases, Nilesh Christopher reports for Motherboard. It marks the debut of deepfakes in Indian election campaigns.

On February 7, a day ahead of the Legislative Assembly elections in Delhi, two videos of Bharatiya Janata Party (BJP) president Manoj Tiwari criticizing the incumbent Delhi government of Arvind Kejriwal went viral on WhatsApp. In one video Tiwari spoke English; in the other, he spoke the Haryanvi dialect of Hindi. “[Kejriwal] cheated us on the basis of promises. But now Delhi has a chance to change it all. Press the lotus button on February 8 to form the Modi-led government,” he said. One might take this 44-second monologue for standard political outreach, but one thing about it was not standard: the videos were not real.

“Deepfake technology has helped us scale campaign efforts like never before,” Neelkant Bakshi, co-incharge of social media and IT for BJP Delhi, tells VICE. “The Haryanvi videos let us convincingly approach the target audience even if the candidate didn’t speak the language of the voter.” Tiwari’s fabricated video was used widely to dissuade the large Haryanvi-speaking migrant worker population in Delhi from voting for the rival political party. According to Bakshi, these deepfakes were distributed across 5,800 WhatsApp groups in the Delhi and NCR region, reaching approximately 15 million people.

The Rise of the Deepfake and the Threat to Democracy

Deepfakes posted on the internet in the past two years have alarmed many observers, who believe the technology could be used to disgrace politicians and even swing elections. Democracies appear gravely threatened by the speed at which disinformation can be created and spread via social media, where the incentive to share the most sensationalist content outweighs the incentive to perform the tiresome work of verification.

Last month, a digitally altered video showing Nancy Pelosi, the speaker of the US House of Representatives, appearing to slur drunkenly through a speech was widely shared on Facebook and YouTube. Trump then posted the clip on Twitter with the caption: “PELOSI STAMMERS THROUGH NEWS CONFERENCE”. The video was quickly debunked, but not before it had been viewed millions of times; the president did not delete his tweet, which at the time of writing has received nearly 98,000 likes. Facebook declined to take down the clip, qualifying its decision with the statement: “Once the video was fact-checked as false, we dramatically reduced its distribution.”

In response, a team including the artists Bill Posters and Daniel Howe two weeks ago posted a video on Instagram, in which Facebook founder Mark Zuckerberg boasts that he has “total control of billions of people’s stolen data, all their secrets, their lives, their futures”.

In May 2018, a Flemish socialist party called sp.a posted a deepfake video to its Twitter and Facebook pages showing Trump appearing to taunt Belgium for remaining in the Paris climate agreement. The video, which remains on the party’s social media, is a poor forgery: Trump’s hair is curiously soft-focus, while his mouth moves with a Muppet-like elasticity. Indeed, the video concludes with Trump saying: “We all know that climate change is fake, just like this video,” although this sentence alone is not subtitled in Flemish Dutch. (The party declined to comment, but a spokesperson previously told the site Politico that it commissioned the video to “draw attention to the necessity to act on climate change”.)

But James [founder of the YouTube channel “derpfakes”, which publishes deepfake videos] believes forgeries may have gone undetected. “The idea that deepfakes have already been used politically isn’t so farfetched,” he says. “It could be the case that deepfakes have already been widely used for propaganda.”

Women, Not Democracy, Are the Main Victims of Deepfakes

While the 2020 U.S. presidential elections have lawmakers on edge over AI-generated fake videos, a new study by Netherlands-based deepfake-detection outfit Deeptrace shows that the main victims today are women. According to Deeptrace, deepfake videos have exploded in the past year, rising from about 8,000 in December 2018 to 14,678 today. And, unsurprisingly for the internet, nearly all of the material is pornography, which accounts for 96% of the deepfake videos the company has found online. The fake videos have been viewed 134 million times.

The numbers suggest deepfake porn is still niche but growing quickly. Additionally, 90% of the fake content depicted women from the U.S., UK, and Canada, while 2% depicted women from South Korea and 2% depicted women from Taiwan. “Deepfake pornography is a phenomenon that exclusively targets and harms women,” the company notes. The small number of non-pornographic deepfake videos it analyzed on YouTube mostly (61%) contained synthesized male subjects. According to Henry Ajder, a researcher at Deeptrace, most deepfake porn currently involves famous women. But he reckons the threat to all women is likely to increase as creating deepfakes becomes less computationally expensive. As for the political threat, there are actually few cases where deepfakes have changed a political outcome.

New Deepfake Algorithm Allows You To Text-Edit the Words of a Speaker In a Video

It is now possible to take a talking-head-style video and add, delete or edit the speaker’s words as simply as you’d edit text in a word processor. A new deepfake algorithm can process the audio and video into a new file in which the speaker says more or less whatever you want them to. New Atlas reports:

It’s the work of a collaborative team from Stanford University, Max Planck Institute for Informatics, Princeton University and Adobe Research, who say that in a perfect world the technology would be used to cut down on expensive re-shoots when an actor gets something wrong, or a script needs to be changed. In order to learn the face movements of a speaker, the algorithm requires about 40 minutes of training video, and a transcript of what’s being said, so it’s not something that can be thrown onto a short video snippet and run if you want good results. That 40 minutes of video gives the algorithm the chance to work out exactly what face shapes the subject is making for each phonetic syllable in the original script.

Once you edit the script, the algorithm creates a 3D model of the face making the new shapes required. A machine-learning technique called neural rendering then paints the 3D model with photo-realistic textures, making it look essentially indistinguishable from the real thing. If you want to generate the speaker’s audio as well as the video, other software such as VoCo can be used; it takes the same approach, breaking a heap of training audio down into phonemes and then using that dataset to generate new words in a familiar voice.
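The core idea described above — align a transcript with training footage to learn which face shape the subject makes for each phoneme, then look those shapes up for an edited script — can be sketched in a few lines. This is a hypothetical illustration only: the class names, data, and the naive last-example-wins lookup are assumptions for clarity, not the researchers’ actual implementation, which searches the training video for matching snippets and handles coarticulation between neighboring phonemes.

```python
from dataclasses import dataclass


@dataclass
class Viseme:
    """Face-shape parameters observed for one phoneme in the training video."""
    phoneme: str
    mouth_params: tuple  # e.g. blendshape weights captured during training


def build_viseme_table(training_segments):
    """Learn which face shape the subject makes for each phoneme.

    training_segments: iterable of (phoneme, mouth_params) pairs produced by
    aligning the transcript with the ~40 minutes of training footage.
    """
    table = {}
    for phoneme, params in training_segments:
        # Keep the latest example per phoneme; a real system would average
        # many examples and model transitions between adjacent phonemes.
        table[phoneme] = Viseme(phoneme, params)
    return table


def synthesize_sequence(table, edited_phonemes):
    """Assemble the face-shape sequence for an edited script.

    Phonemes never seen in training fall back to a neutral pose here; the
    real pipeline instead finds the closest matching training snippet.
    """
    neutral = Viseme("sil", (0.0, 0.0))
    return [table.get(p, neutral) for p in edited_phonemes]


# Toy example: four phonemes observed in training, then an edit that
# introduces one phoneme ("AY") the subject was never recorded saying.
segments = [("HH", (0.2, 0.1)), ("EH", (0.6, 0.3)),
            ("L", (0.3, 0.2)), ("OW", (0.7, 0.5))]
table = build_viseme_table(segments)
sequence = synthesize_sequence(table, ["HH", "AY"])
```

The resulting `sequence` is what a renderer would consume frame by frame; in the published system, neural rendering then textures the corresponding 3D face model so the new mouth shapes blend into the original footage.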

It’s Getting Harder to Spot a Deep Fake Video

Fake videos and audio keep getting better, faster and easier to make, increasing the mind-blowing technology’s potential for harm if put in the wrong hands. Bloomberg QuickTake explains how good deep fakes have gotten in the last few months, and what’s being done to counter them.

Actors Are Digitally Preserving Themselves To Continue Their Careers Beyond the Grave

Improvements in CGI mean neither age nor death need stop some performers from working. From a report:

From Carrie Fisher in Rogue One: A Star Wars Story to Paul Walker in the Fast & Furious movies, dead and magically “de-aged” actors are appearing more frequently on movie screens. Sometimes they even appear on stage: next year, an Amy Winehouse hologram will be going on tour to raise money for a charity established in the late singer’s memory. Some actors and movie studios are buckling down and preparing for an inevitable future when using scanning technology to preserve 3-D digital replicas of performers is routine. Just because your star is inconveniently dead doesn’t mean your generation-spanning blockbuster franchise can’t continue to rake in the dough. Get the tech right and you can cash in on superstars and iconic characters forever.

[…]

For celebrities, these scans are a chance to make money for their families post mortem, extend their legacy — and even, in some strange way, preserve their youth. Visual-effects company Digital Domain — which has worked on major pictures like Avengers: Infinity War and Ready Player One — has also taken on individual celebrities as clients, though it hasn’t publicized the service. “We haven’t, you know, taken out any ads in newspapers to ‘Save your likeness,'” says Darren Hendler, director of the firm’s Digital Humans Group. The suite of services that the company offers actors includes a range of different scans to capture their famous faces from every conceivable angle — making it simpler to re-create them in the future. Using hundreds of custom LED lights arranged in a sphere, numerous images can be recorded in seconds capturing what the person’s face looks like lit from every angle — and right down to the pores.