Resources

New Deepfake Algorithm Allows You To Text-Edit the Words of a Speaker In a Video

It is now possible to take a talking-head-style video and add, delete or edit the speaker’s words as simply as you’d edit text in a word processor. A new deepfake algorithm can process the audio and video into a new file in which the speaker says more or less whatever you want them to. New Atlas reports:

It’s the work of a collaborative team from Stanford University, the Max Planck Institute for Informatics, Princeton University and Adobe Research, who say that in a perfect world the technology would be used to cut down on expensive re-shoots when an actor gets something wrong or a script needs to be changed. To learn the face movements of a speaker, the algorithm requires about 40 minutes of training video and a transcript of what’s being said, so it’s not something that can be thrown at a short video snippet if you want good results. Those 40 minutes of video give the algorithm the chance to work out exactly what face shape the subject makes for each phoneme in the original script.

From there, once you edit the script, the algorithm can create a 3D model of the face making the new shapes required. A machine-learning technique called neural rendering then paints the 3D model with photo-realistic textures, making it look all but indistinguishable from the real thing. If you want to generate the speaker’s audio as well as the video, software such as VoCo takes the same approach: it breaks a heap of training audio down into phonemes, then uses that dataset to generate new words in a familiar voice.
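To make the data flow concrete, here is a minimal, purely illustrative sketch of the two stages described above: a training pass that records a face shape per phoneme, and an editing pass that drives a face model from the edited phoneme sequence and hands it to a renderer. Every name here (Viseme, learn_visemes, synthesize_edit) is a hypothetical stand-in; the published system uses learned alignment, parameter blending and a neural renderer, not the toy lookup table shown here.

```python
# Illustrative sketch only -- not the authors' implementation.
from dataclasses import dataclass


@dataclass
class Viseme:
    """Face-shape parameters observed for one phoneme during training."""
    phoneme: str
    face_params: list  # e.g. blendshape weights driving a 3D head model


def learn_visemes(frames, transcript_phonemes):
    """Training pass: walk ~40 minutes of phoneme-aligned video and record
    what face shape the speaker makes for each phoneme in the transcript."""
    table = {}
    for params, phoneme in zip(frames, transcript_phonemes):
        # The real system learns a model per phoneme; this toy version
        # simply keeps the last observed shape.
        table[phoneme] = Viseme(phoneme, params)
    return table


def synthesize_edit(edited_phonemes, viseme_table, render):
    """Editing pass: for each phoneme of the edited script, pose the 3D
    face model and hand it to a renderer for photo-realistic texturing."""
    return [render(viseme_table[p].face_params) for p in edited_phonemes]


# Toy usage: one face-parameter vector per phoneme-aligned frame.
frames = [[0.1], [0.9], [0.4]]
phonemes = ["HH", "AH", "L"]
table = learn_visemes(frames, phonemes)
# "render" stands in for the neural-rendering step.
print(synthesize_edit(["L", "AH"], table, render=lambda p: f"frame{p}"))
```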

Google Is Teaching Children How to Act Online. Is It the Best Role Model?

Google is on a mission to teach children how to be safe online. That is the message behind “Be Internet Awesome,” a so-called digital-citizenship education program that the technology giant developed for schools.

The lessons include a cartoon game branded with Google’s logo and its blue, red, yellow and green color palette. The game is meant to help students from third through sixth grade guard against schemers, hackers and other bad actors.

But critics say the company’s recent woes — including revelations that it was developing a censored version of its search engine for the Chinese market and had tracked the whereabouts of users who had explicitly turned off their location history — should disqualify Google from promoting itself in schools as a model of proper digital conduct.

Among other things, these critics argue, the company’s lessons give children the mistaken impression that the main threat they face online is from malicious hackers and bullies, glossing over the privacy concerns that arise when tech giants like Google itself collect users’ personal information and track their actions online.

American corporate giants are no strangers to the country’s schools.

In the 1970s, General Motors circulated a free booklet in public schools that featured cartoon characters like Harry Hydrocarbon, who played down concerns about the health risks of industrial pollution and suggested that air pollution would soon not be a problem, according to a 1979 report, “Hucksters in the Classroom: A Review of Industry Propaganda in Schools.”

In the 1990s, Procter & Gamble promoted its own curriculum, “Decision: Earth,” in schools. Among other things, it instructed children that synthetic diapers were no more harmful to the environment than cloth diapers.

Around the same time, Campbell Soup sponsored a classroom kit called the “Prego Thickness Experiment.” According to a 1997 article in The New York Times, “Corporate Classrooms and Commercialism,” the kit was supposed to teach children the scientific method — by having them “prove” that Prego pasta sauce was thicker than rival Ragu.

Critics see a similar self-serving agenda with “Be Internet Awesome,” which presents malicious third parties as the primary online threat to children, while failing to teach them how to navigate corporate data-mining practices.

“Social Media” Has Destroyed Discourse

Hossein Derakhshan, an Iranian-Canadian author, media analyst, and performance artist, writes in MIT Technology Review:

“Like TV, social media now increasingly entertains us, and even more so than television it amplifies our existing beliefs and habits. It makes us feel more than think, and it comforts more than challenges. The result is a deeply fragmented society, driven by emotions, and radicalized by lack of contact and challenge from outside. This is why Oxford Dictionaries designated “post-truth” as the word of 2016: an adjective “relating to circumstances in which objective facts are less influential in shaping public opinion than emotional appeals.”

[…]

Traditional television still entails some degree of surprise. What you see on television news is still picked by human curators, and even though it must be entertaining to qualify as worthy of expensive production, it is still likely to challenge some of our opinions (emotions, that is).

Social media, in contrast, uses algorithms to encourage comfort and complaisance, since its entire business model is built upon maximizing the time users spend inside it. Who would want to hang around in a place where everyone seems negative, mean, and disapproving? The outcome is a proliferation of emotions, a radicalization of those emotions, and a fragmented society. This is far more dangerous for the idea of a democracy founded on the notion of informed participation.

This means we should write and read more, link more often, and watch less television and fewer videos — and spend less time on Facebook, Instagram, and YouTube.

Our habits and our emotions are killing us and our planet. Let’s resist their lethal appeal.”

Data surveillance is all around us, and it’s going to change our behaviour

“More and more aspects of our lives are now recorded as digital data that are systematically stored, aggregated, analysed, and sold. Despite the promise of big data to improve our lives, all-encompassing data surveillance constitutes a new form of power that poses a risk not only to our privacy, but to our free will.

A more worrying trend is the use of big data to manipulate human behaviour at scale by incentivising “appropriate” activities and penalising “inappropriate” ones. In recent years, governments in the UK, US, and Australia have been experimenting with attempts to “correct” the behaviour of their citizens through “nudge units”.”

Nudge units: “In ways you don’t detect, [corporations and governments are] subtly influencing your decisions, pushing you towards what they believe are your (or their) best interests, exploiting the biases and tics of the human brain uncovered by research into behavioural psychology. And they are trying this in many different ways on many different people, running constant trials of different unconscious pokes and prods to work out which is the most effective, which improves the most lives, or saves the most money. Preferably, both.”

“In his new book Inside the Nudge Unit, published this week in Britain, David Halpern, head of the UK government’s Behavioural Insights Team (BI), explains his fascination with behavioural psychology.

“Our brains weren’t made for the day-to-day financial judgments that are the foundation of modern economies: from mortgages to pensions to the best buy in a supermarket. Our thinking and decisions are fused with emotion.”

There’s a window of opportunity for governments, Halpern believes: to exploit the gaps between perception, reason, emotion and reality, and push us the “right” way.

He gives me a recent example of BI’s work – they were looking at police recruitment, and how to get a wider ethnic mix.

Just before applicants took an online recruitment test, in the email sending them the link, BI added a line saying: “Before you do this, take a moment to think about why joining the police is important to you and your community.”

There was no effect on white applicants. But the pass rate for black and minority ethnic applicants moved from 40 to 60 per cent.

“It entirely closes the gap,” Halpern says. “Absolutely amazing. We thought we had good grounds in the [scientific research] literature that such a prompt might make a difference, but the scale of the difference was extraordinary.”

Halpern taught social psychology at Cambridge but spent six years in the Blair government’s strategy unit. An early think piece on behavioural policy-making was leaked to the media and caused a small storm – Blair publicly disowned it and that was that. Halpern returned to academia, but was lured back after similar ideas started propagating through the Obama administration, and Cameron was persuaded to give it a go.

Ministers tend not to like it – one once snapped, “I didn’t spend a decade in opposition to come into government to run a pilot” – but the technique is rife in the digital commercial world, where companies like Amazon or Google will try 20 different versions of a web page.

Governments and public services should do it too, Halpern says. His favourite example is Britain’s organ donor register. They tested eight alternative online messages prompting people to join, including a simple request, different pictures, statistics or conscience-tweaking statements like “if you needed an organ transplant would you have one? If so please help others”.

It’s not obvious which messages work best, even to an expert. The only way to find out is to test them. They were surprised to find that the picture (of a group of people) actually put people off, Halpern says.
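As an aside on the mechanics: trials like the organ-donor test come down to comparing sign-up rates across randomly assigned message variants. Below is a minimal sketch of that comparison using a standard two-proportion z-test; the counts are invented for illustration and are not BI’s data or tooling.

```python
# Hedged sketch: comparing two message variants from a randomized trial.
from math import erf, sqrt


def two_proportion_z_test(signups_a, n_a, signups_b, n_b):
    """Two-sided z-test for a difference between two sign-up rates."""
    p_a, p_b = signups_a / n_a, signups_b / n_b
    pooled = (signups_a + signups_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal-approximation p-value via the error function.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, z, p_value


# Invented numbers: plain request vs. the conscience-tweaking message.
p_a, p_b, z, p = two_proportion_z_test(
    signups_a=1200, n_a=100_000,   # variant A: simple request
    signups_b=1390, n_b=100_000,   # variant B: "if you needed an organ..."
)
print(f"rates: {p_a:.3%} vs {p_b:.3%}, z = {z:.2f}, p = {p:.4f}")
```

With these made-up counts the test flags a real difference between variants; the point, as Halpern says, is that you cannot know which message wins without running the trial.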

In future they want to use demographic data to personalise nudges, Halpern says. On tax reminder notices, they had great success putting the phrase “most people pay their tax on time” at the top. But a stubborn top 5 per cent, with the biggest tax debts, saw this reminder and thought, “Well, I’m not most people”.

This whole approach raises ethical issues. Often you can’t tell people they’re being experimented on – it’s impractical, or ruins the experiment, or both.

“If we’re trying to find the best way of saying ‘don’t drop your litter’ with a sign saying ‘most people don’t drop litter’, are you supposed to have a sign before it saying ‘caution: you are about to participate in a trial’?

“Where should we draw the line between effective communication and unacceptable ‘PsyOps’ or propaganda?”