Resources

CIA covert influence operation against China

Donald Trump authorized the Central Intelligence Agency to launch a clandestine campaign on Chinese social media aimed at turning public opinion in China against its government, according to former U.S. officials with direct knowledge of the highly classified operation.

Three former officials told Reuters that the CIA created a small team of operatives who used bogus internet identities to spread negative narratives about Xi Jinping’s government while leaking disparaging intelligence to overseas news outlets. The effort, which began in 2019, has not been previously reported.

Although the U.S. officials declined to provide specific details of these operations, they said the disparaging narratives were based in fact despite being secretly released by intelligence operatives under false cover. The efforts within China were intended to foment paranoia among top leaders there, forcing the government to expend resources chasing intrusions into Beijing’s tightly controlled internet, two former officials said. “We wanted them chasing ghosts,” one of these former officials said.

Reuters was unable to determine the impact of the secret operations or whether the administration of President Joe Biden has maintained the CIA program.

Covert messaging allows the United States to implant ideas in countries where censorship might prevent that information from coming to light, or in areas where audiences wouldn’t give much credence to U.S. government statements, said Loch Johnson, a University of Georgia political scientist who studies the use of such tactics.

Covert propaganda campaigns were common during the Cold War, when the CIA planted 80 to 90 articles a day in an effort to undermine the Soviet Union, Johnson said. In the 1950s, for example, the CIA created an astrological magazine in East Germany to publish foreboding predictions about communist leaders, according to declassified records.

The covert propaganda campaign against Beijing could backfire, said Paul Heer, a former CIA analyst. China could use evidence of a CIA influence program to bolster its decades-old accusations of shadowy Western subversion, helping Beijing “proselytize” in a developing world already deeply suspicious of Washington.

How China Uses Western Influencers As Pawns In Its Propaganda War

China is recruiting YouTubers to report on the country in a positive light and counter the West’s increasingly negative perceptions. “The videos have a casual, homespun feel. But on the other side of the camera often stands a large apparatus of government organizers, state-controlled news media and other official amplifiers — all part of the Chinese government’s widening attempts to spread pro-Beijing messages around the planet,” the report says. “State-run news outlets and local governments have organized and funded pro-Beijing influencers’ travel, according to government documents and the creators themselves. They have paid or offered to pay the creators. They have generated lucrative traffic for the influencers by sharing videos with millions of followers on YouTube, Twitter and Facebook.”

Typically, the Chinese government support comes in the form of free organized trips around China, particularly in Xinjiang. By showing the influencers a carefully sanitized image of life in the country, the authorities don’t need to worry about negative stories. They simply make it easy for the YouTubers to present images of jolly peasants and happy city-dwellers, because that’s all they are allowed to see. One of the authors of the New York Times piece, Paul Mozur, noted on Twitter another important way that the authorities are able to help their influencer guests. Once produced, the China-friendly videos are boosted massively by state media and diplomatic Facebook and Twitter accounts: “One video by Israeli influencer Raz Gal-Or portraying Xinjiang as ‘totally normal’ was shared by 35 government connected accounts with a total of 400 million followers. Many were Chinese embassy Facebook accounts, which posted about the video in numerous languages.”

A new report from the Australian Strategic Policy Institute, “Borrowing mouths to speak on Xinjiang,” has some more statistics on this practice: “Our data collection has found that, between January 2020 and August 2021, 156 Chinese state-controlled accounts on US-based social media platforms have published at least 546 Facebook posts, Twitter posts and shared articles from [China Global Television Network], Global Times, Xinhua or China Daily websites that have amplified Xinjiang-related social media content from 13 influencer accounts. More than 50% of that activity occurred on Facebook.” Mozur says that the use of Western influencers in this way also allows employees of Beijing-controlled media, like the journalist Li Jingjing, to present themselves as independent YouTubers. On Twitter, however, she is labeled as “China state-affiliated media.” The Australian Strategic Policy Institute sees this as part of a larger problem (pdf): “labelling schemes adopted by some video-sharing and social media platforms to identify state-affiliated accounts are inconsistently applied to media outlets and journalists working for those outlets. In addition, few platforms appear to have clear policies on content from online influencers or vloggers whose content may be facilitated by state-affiliated media, through sponsored trips, for example.”

According to Mozur, China’s state broadcaster is actively looking for more influencers, offering bonuses and publicity for those who sign up. In the US, China’s consulate general is paying $300,000 to a firm to recruit influencers for the Winter Olympics, ranging from Celebrity Influencers with millions of Instagram or TikTok followers, to Nano Influencers, with merely a few thousand. The ultimate goal of deploying these alternative voices is not to disprove negative stories appearing in Western media, but something arguably worse, as the New York Times report explains: “China is the new super-abuser that has arrived in global social media,” said Eric Liu, a former content moderator for Chinese social media. “The goal is not to win, but to cause chaos and suspicion until there is no real truth.”

Twitter Bots Are a Major Source of Climate Disinformation

Twitter accounts run by machines are a major source of climate change disinformation that might drain support from policies to address rising temperatures. In the weeks surrounding former President Trump’s announcement about withdrawing from the Paris Agreement, accounts suspected of being bots accounted for roughly a quarter of all tweets about climate change, according to new research. “If we are to effectively address the existential crisis of climate change, bot presence in the online discourse is a reality that scientists, social movements and those concerned about democracy have to better grapple with,” wrote Thomas Marlow, a postdoctoral researcher at New York University’s Abu Dhabi campus, and his co-authors. Their paper, published last week in the journal Climate Policy, is part of an expanding body of research about the role of bots in online climate discourse.

The new focus on automated accounts is driven partly by the way they can distort the climate conversation online. Marlow’s team measured the influence of bots on Twitter’s climate conversation by analyzing 6.8 million tweets sent by 1.6 million users between May and June 2017. Trump made his decision to ditch the climate accord on June 1 of that year. President Biden reversed the decision this week. From that dataset, the team ran a random sample of 184,767 users through the Botometer, a tool created by Indiana University’s Observatory on Social Media, which analyzes accounts and determines the likelihood that they are run by machines.

Researchers also categorized the 885,164 tweets those users had sent about climate change during the two-month study period. The most popular categories were tweets about climate research and news. Marlow and the other researchers determined that nearly 9.5% of the users in their sample were likely bots. But those bots accounted for 25% of the total tweets about climate change on most days. […] The researchers weren’t able to determine who deployed the bots. But they suspect the seemingly fake accounts could have been created by “fossil-fuel companies, petro-states or their surrogates,” all of which have a vested interest in preventing or delaying action on climate change.
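
The study’s data and code aren’t reproduced here, but the shape of the analysis can be sketched with the public botometer Python package (the client for the Botometer service named above). The API credentials, the 0.5 score cutoff, and the per-user tweet counts below are illustrative assumptions, not values from the paper:

    # Sketch: estimate the share of climate tweets sent by likely bots.
    # Requires Botometer/RapidAPI and Twitter API credentials; the 0.5
    # cutoff and the sample data are assumptions for illustration.
    import botometer

    twitter_app_auth = {
        "consumer_key": "...",
        "consumer_secret": "...",
        "access_token": "...",
        "access_token_secret": "...",
    }
    bom = botometer.Botometer(wait_on_ratelimit=True,
                              rapidapi_key="...",
                              **twitter_app_auth)

    # Hypothetical input: {user_handle: climate tweets sent by that user}
    tweets_by_user = {"@user_a": 120, "@user_b": 3, "@user_c": 47}

    bot_tweets = total_tweets = 0
    for handle, n_tweets in tweets_by_user.items():
        result = bom.check_account(handle)
        # 'cap' is the complete-automation probability Botometer returns;
        # treating > 0.5 as "likely bot" is an arbitrary illustrative cutoff.
        if result["cap"]["universal"] > 0.5:
            bot_tweets += n_tweets
        total_tweets += n_tweets

    print(f"Likely-bot share of tweets: {bot_tweets / total_tweets:.1%}")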

How Big Oil Misled The Public Into Believing Plastic Would Be Recycled

NPR and PBS Frontline spent months digging into internal industry documents and interviewing top former officials. We found that the industry sold the public on an idea it knew wouldn’t work — that the majority of plastic could be, and would be, recycled — all while making billions of dollars selling the world new plastic. NPR:
The industry’s awareness that recycling wouldn’t keep plastic out of landfills and the environment dates to the program’s earliest days, we found. “There is serious doubt that [recycling plastic] can ever be made viable on an economic basis,” one industry insider wrote in a 1974 speech. Yet the industry spent millions telling people to recycle, because, as one former top industry insider told NPR, selling recycling sold plastic, even if it wasn’t true. “If the public thinks that recycling is working, then they are not going to be as concerned about the environment,” Larry Thomas, former president of the Society of the Plastics Industry, known today as the Plastics Industry Association and one of the industry’s most powerful trade groups in Washington, D.C., told NPR.

In response, industry representative Steve Russell, until recently the vice president of plastics for the trade group the American Chemistry Council, said the industry has never intentionally misled the public about recycling and is committed to ensuring all plastic is recycled. […] Here’s the basic problem: All used plastic can be turned into new things, but picking it up, sorting it out and melting it down is expensive. Plastic also degrades each time it is reused, meaning it can’t be reused more than once or twice. On the other hand, new plastic is cheap. It’s made from oil and gas, and it’s almost always less expensive and of better quality to just start fresh. All of these problems have existed for decades, no matter what new recycling technology or expensive machinery has been developed. In all that time, less than 10 percent of plastic has ever been recycled. But the public has known little about these difficulties.

Long Before Cambridge Analytica, Simulmatics Linked Data and Politics

NPR reporter Shannon Bond reports on a little-known — and now nearly entirely forgotten — company called Simulmatics, which had technology that used vast amounts of data to profile voters and ultimately help John F. Kennedy win the 1960 election. From the report:
The […] company was called Simulmatics, the subject of Harvard historian and New Yorker writer Jill Lepore’s timely new book, If Then: How the Simulmatics Corporation Invented the Future. Before Cambridge Analytica, before Facebook, before the Internet, there was Simulmatics’ “People Machine,” in Lepore’s telling: “A computer program designed to predict and manipulate human behavior, all sorts of human behavior, from buying a dishwasher to countering an insurgency to casting a vote.”

Lepore unearths Simulmatics’ story and makes the argument that, amid a broader proliferation of behavioral science research across academia and government in the 1960s, the company paved the way for our 21st-century obsession with data and prediction. Simulmatics, she argues, is “a missing link in the history of technology,” the antecedent to Facebook, Google and Amazon and to algorithms that attempt to forecast who will commit crimes or get good grades. “It lurks behind the screen of every device,” she writes.

If Then presents Simulmatics as both ahead of its time and, more often than not, overpromising and under-delivering. The company was the brainchild of Ed Greenfield, an advertising executive straight out of Mad Men, who believed computers could help Democrats recapture the White House. He wanted to create a model of the voting population that could tell you how voters would respond to whatever a candidate did or said. The name Simulmatics was a contraction of “simulation” and “automation.” As Greenfield explained it to investors, Lepore writes: “The Company proposes to engage principally in estimating probable human behavior by the use of computer technology.” The People Machine was originally built to analyze huge amounts of data ahead of the 1960 election, in what Lepore describes as, at the time, “the largest political science research project in American history.”

From Climate Change to the Dangers of Smoking: How Powerful Interests Made Us Doubt Everything

BBC News reports:
In 1991, the trade body that represents electrical companies in the U.S., the Edison Electric Institute, created a campaign called the Information Council for the Environment, which aimed to “Reposition global warming as theory (not fact)”. Some details of the campaign were leaked to the New York Times. “They ran advertising campaigns designed to undermine public support, cherry picking the data to say, ‘Well if the world is warming up, why is Kentucky getting colder?’ They asked rhetorical questions designed to create confusion, to create doubt,” argued Naomi Oreskes, professor of the history of science at Harvard University and co-author of Merchants of Doubt. But back in the 1990s there were many campaigns like this…

Most of the organisations opposing or denying climate change science were right-wing think tanks, which tended to be passionately anti-regulation. These groups made convenient allies for the oil industry, as they would argue against action on climate change on ideological grounds. Jerry Taylor spent 23 years with the Cato Institute — one of those right-wing think tanks — latterly as vice president. Before he left in 2014, he would regularly appear on TV and radio, insisting that the science of climate change was uncertain and there was no need to act.

Now, he realises his arguments were based on a misinterpretation of the science, and he regrets the impact he’s had on the debate.

Harvard historian Naomi Oreskes discovered leading climate-change skeptics had also been prominent skeptics on the dangers of cigarette smoking. “That was a Eureka moment,” Oreskes tells BBC News. “We realised this was not a scientific debate.”

Decades before the energy industry tried to undermine the case for climate change, tobacco companies had used the same techniques to challenge the emerging links between smoking and lung cancer in the 1950s… As a later document by the tobacco company Brown & Williamson summarised the approach: “Doubt is our product, since it is the best means of competing with the ‘body of fact’ that exists in the minds of the general public.” Naomi Oreskes says this understanding of the power of doubt is vital. “They realise they can’t win this battle by making a false claim that sooner or later would be exposed. But if they can create doubt, that would be sufficient — because if people are confused about the issue, there’s a good chance they’ll just keep smoking….”

Academics like David Michaels, author of The Triumph of Doubt, fear the use of uncertainty in the past to confuse the public and undermine science has contributed to a dangerous erosion of trust in facts and experts across the globe today, far beyond climate science or the dangers of tobacco. He cites public attitudes to modern issues like the safety of 5G, vaccinations — and coronavirus.

“By cynically manipulating and distorting scientific evidence, the manufacturers of doubt have seeded in much of the public a cynicism about science, making it far more difficult to convince people that science provides useful — in some cases, vitally important — information.”

The Rise of the Deepfake and the Threat to Democracy

Deepfakes posted on the internet in the past two years have alarmed many observers, who believe the technology could be used to disgrace politicians and even swing elections. Democracies appear to be gravely threatened by the speed at which disinformation can be created and spread via social media, where the incentive to share the most sensationalist content outweighs the incentive to perform the tiresome work of verification.

Last month, a digitally altered video showing Nancy Pelosi, the speaker of the US House of Representatives, appearing to slur drunkenly through a speech was widely shared on Facebook and YouTube. Trump then posted the clip on Twitter with the caption: “PELOSI STAMMERS THROUGH NEWS CONFERENCE”. The video was quickly debunked, but not before it had been viewed millions of times; the president did not delete his tweet, which at the time of writing has received nearly 98,000 likes. Facebook declined to take down the clip, qualifying its decision with the statement: “Once the video was fact-checked as false, we dramatically reduced its distribution.”

In response, a team including the artists Bill Posters and Daniel Howe two weeks ago posted a video on Instagram, in which Facebook founder Mark Zuckerberg boasts that he has “total control of billions of people’s stolen data, all their secrets, their lives, their futures”.

In May 2018, a Flemish socialist party called sp.a posted a deepfake video to its Twitter and Facebook pages showing Trump appearing to taunt Belgium for remaining in the Paris climate agreement. The video, which remains on the party’s social media, is a poor forgery: Trump’s hair is curiously soft-focus, while his mouth moves with a Muppet-like elasticity. Indeed, the video concludes with Trump saying: “We all know that climate change is fake, just like this video,” although this sentence alone is not subtitled in Flemish Dutch. (The party declined to comment, but a spokesperson previously told the site Politico that it commissioned the video to “draw attention to the necessity to act on climate change”.)

But James [founder of the YouTube channel ‘derpfakes’, which publishes deepfake videos] believes forgeries may have gone undetected. “The idea that deepfakes have already been used politically isn’t so farfetched,” he says. “It could be the case that deepfakes have already been widely used for propaganda.”

New Deepfake Algorithm Allows You To Text-Edit the Words of a Speaker In a Video

It is now possible to take a talking-head style video and add, delete or edit the speaker’s words as simply as you’d edit text in a word processor. A new deepfake algorithm can process the audio and video into a new file in which the speaker says more or less whatever you want them to. New Atlas reports:

It’s the work of a collaborative team from Stanford University, Max Planck Institute for Informatics, Princeton University and Adobe Research, who say that in a perfect world the technology would be used to cut down on expensive re-shoots when an actor gets something wrong, or a script needs to be changed. In order to learn the face movements of a speaker, the algorithm requires about 40 minutes of training video, and a transcript of what’s being said, so it’s not something that can be thrown onto a short video snippet and run if you want good results. That 40 minutes of video gives the algorithm the chance to work out exactly what face shapes the subject is making for each phonetic syllable in the original script.

From there, once you edit the script, the algorithm can then create a 3D model of the face making the new shapes required. And from there, a machine learning technique called Neural Rendering can paint the 3D model over with photo-realistic textures to make it look basically indistinguishable from the real thing. Other software such as VoCo can be used if you wish to generate the speaker’s audio as well as video, and it takes the same approach, by breaking down a heap of training audio into phonemes and then using that dataset to generate new words in a familiar voice.
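
The research code itself isn’t public in this excerpt, so the pipeline can only be outlined. The sketch below restates the three stages described above as stub functions; every name is invented for illustration, not taken from the paper:

    # Illustrative-only outline of the text-based video editing pipeline.
    # All functions are hypothetical stubs standing in for published stages.

    def learn_visemes(video, transcript):
        """Stage 1: align ~40 minutes of footage with its transcript and
        learn the face shape the speaker makes for each phoneme."""
        return {}  # phoneme -> 3D face-shape parameters (stub)

    def to_phonemes(text):
        """Break the edited script into phonetic units (crude stand-in)."""
        return text.split()

    def build_3d_face_model(face_params):
        """Stage 2: assemble a frame-by-frame 3D head model that makes
        the mouth shapes the new words require."""
        return face_params  # stub

    def neural_render(mesh_sequence, source_frames):
        """Stage 3: neural rendering paints photorealistic texture over
        the 3D model so it blends into the original footage."""
        return mesh_sequence  # stub

    def edit_talking_head(video, transcript, new_transcript):
        visemes = learn_visemes(video, transcript)
        face_params = [visemes.get(p) for p in to_phonemes(new_transcript)]
        mesh = build_3d_face_model(face_params)
        # Audio for the new words would come from a separate VoCo-style tool.
        return neural_render(mesh, source_frames=video)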

Google Is Teaching Children How to Act Online. Is It the Best Role Model?

Google is on a mission to teach children how to be safe online. That is the message behind “Be Internet Awesome,” a so-called digital-citizenship education program that the technology giant developed for schools.

The lessons include a cartoon game branded with Google’s logo and blue, red, yellow and green color palette. The game is meant to help students from third grade through sixth guard against schemers, hackers and other bad actors.

But critics say the company’s recent woes — including revelations that it was developing a censored version of its search engine for the Chinese market and had tracked the whereabouts of users who had explicitly turned off their location history — should disqualify Google from promoting itself in schools as a model of proper digital conduct.

Among other things, these critics argue, the company’s lessons give children the mistaken impression that the main threat they face online is from malicious hackers and bullies, glossing over the privacy concerns that arise when tech giants like Google itself collect users’ personal information and track their actions online.

American corporate giants are no strangers to the country’s schools.

In the 1970s, General Motors circulated a free booklet in public schools that featured cartoon characters like Harry Hydrocarbon, who played down concerns about the health risks of industrial pollution and suggested that air pollution would soon not be a problem, according to a 1979 report, “Hucksters in the Classroom: A Review of Industry Propaganda in Schools.”

In the 1990s, Procter & Gamble promoted its own curriculum, “Decision: Earth,” in schools. Among other things, it instructed children that synthetic diapers were no more harmful for the environment than cloth diapers.

Around the same time, Campbell Soup sponsored a classroom kit called the “Prego Thickness Experiment.” According to a 1997 article in The New York Times, “Corporate Classrooms and Commercialism,” the kit was supposed to teach children the scientific method — by having them “prove” that Prego pasta sauce was thicker than rival Ragu.

Critics see a similar self-serving agenda with “Be Internet Awesome,” which presents malicious third parties as the primary online threat to children, while failing to teach them how to navigate corporate data-mining practices.

“Social Media” Has Destroyed Discourse

Hossein Derakhshan, an Iranian-Canadian author, media analyst, and performance artist, writes in MIT Technology Review:

“Like TV, social media now increasingly entertains us, and even more so than television it amplifies our existing beliefs and habits. It makes us feel more than think, and it comforts more than challenges. The result is a deeply fragmented society, driven by emotions, and radicalized by lack of contact and challenge from outside. This is why Oxford Dictionaries designated “post-truth” as the word of 2016: an adjective “relating to circumstances in which objective facts are less influential in shaping public opinion than emotional appeals.”

[…]

Traditional television still entails some degree of surprise. What you see on television news is still picked by human curators, and even though it must be entertaining to qualify as worthy of expensive production, it is still likely to challenge some of our opinions (emotions, that is).

Social media, in contrast, uses algorithms to encourage comfort and complaisance, since its entire business model is built upon maximizing the time users spend inside of it. Who would like to hang around in a place where everyone seems to be negative, mean, and disapproving? The outcome is a proliferation of emotions, a radicalization of those emotions, and a fragmented society. This is way more dangerous for the idea of democracy founded on the notion of informed participation.

This means we should write and read more, link more often, and watch less television and fewer videos — and spend less time on Facebook, Instagram, and YouTube.

Our habits and our emotions are killing us and our planet. Let’s resist their lethal appeal.”

Data surveillance is all around us, and it’s going to change our behaviour

“Increasing aspects of our lives are now recorded as digital data that are systematically stored, aggregated, analysed, and sold. Despite the promise of big data to improve our lives, all-encompassing data surveillance constitutes a new form of power that poses a risk not only to our privacy, but to our free will.

A more worrying trend is the use of big data to manipulate human behaviour at scale by incentivising “appropriate” activities and penalising “inappropriate” activities. In recent years, governments in the UK, US, and Australia have been experimenting with attempts to “correct” the behaviour of their citizens through “nudge units”.”

Nudge units: “In ways you don’t detect, [corporations and governments are] subtly influencing your decisions, pushing you towards what [they believe] are your (or [their]) best interests, exploiting the biases and tics of the human brain uncovered by research into behavioural psychology. And [they are] trying this in many different ways on many different people, running constant trials of different unconscious pokes and prods, to work out which is the most effective, which improves the most lives, or saves the most money. Preferably, both.”

“In his new book Inside the Nudge Unit, published this week in Britain, David Halpern, head of Britain’s Behavioural Insights team (BI), explains his fascination with behavioural psychology.

“Our brains weren’t made for the day-to-day financial judgments that are the foundation of modern economies: from mortgages, to pensions, to the best buy in a supermarket. Our thinking and decisions are fused with emotion.”

There’s a window of opportunity for governments, Halpern believes: to exploit the gaps between perception, reason, emotion and reality, and push us the “right” way.

He gives me a recent example of BI’s work – they were looking at police recruitment, and how to get a wider ethnic mix.

Just before applicants did an online recruitment test, in an email sending the link, BI added a line saying “before you do this, take a moment to think about why joining the police is important to you and your community”.

There was no effect on white applicants. But the pass rate for black and minority ethnic applicants moved from 40 to 60 per cent.

“It entirely closes the gap,” Halpern says. “Absolutely amazing. We thought we had good grounds in the [scientific research] literature that such a prompt might make a difference, but the scale of the difference was extraordinary.”
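
The article gives the pass rates but not the cohort sizes, so how much of that jump could be noise depends on sample size. A minimal two-proportion z-test, with invented cohort sizes standing in for the real ones, shows why a 20-point shift is hard to dismiss even at modest n:

    # Two-proportion z-test for a 40% -> 60% pass-rate change.
    # The cohort size (200 per group) is an assumption; the article
    # reports only the rates.
    from math import sqrt, erfc

    def two_proportion_z(success_a, n_a, success_b, n_b):
        p_pool = (success_a + success_b) / (n_a + n_b)
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (success_b / n_b - success_a / n_a) / se
        return z, erfc(abs(z) / sqrt(2))  # two-sided p-value

    n = 200  # hypothetical applicants per group
    z, p = two_proportion_z(int(0.40 * n), n, int(0.60 * n), n)
    print(f"z = {z:.2f}, p = {p:.5f}")  # z = 4.00, p ~ 0.00006 at n = 200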

Halpern taught social psychology at Cambridge but spent six years in the Blair government’s strategy unit. An early think piece on behavioural policy-making was leaked to the media and caused a small storm – Blair publicly disowned it and that was that. Halpern returned to academia, but was lured back after similar ideas started propagating through the Obama administration, and Cameron was persuaded to give it a go.

Ministers tend not to like it – once, one snapped, “I didn’t spend a decade in opposition to come into government to run a pilot”, but the technique is rife in the digital commercial world, where companies like Amazon or Google try 20 different versions of a web page.

Governments and public services should do it too, Halpern says. His favourite example is Britain’s organ donor register. They tested eight alternative online messages prompting people to join, including a simple request, different pictures, statistics or conscience-tweaking statements like “if you needed an organ transplant would you have one? If so please help others”.

It’s not obvious which messages work best, even to an expert. The only way to find out is to test them. They were surprised to find that the picture (of a group of people) actually put people off, Halpern says.
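
A trial like that organ-donor test is easy to sketch: visitors are assigned a prompt uniformly at random and signups are tallied per variant. The variant names and conversion rates below are invented; only the method, randomize then compare rates, mirrors what the unit describes:

    # Simulated randomized message trial. The variants and their "true"
    # conversion rates are invented for illustration.
    import random
    from collections import defaultdict

    true_rate = {"simple request": 0.020,
                 "conscience prompt": 0.030,
                 "group photo": 0.015}   # the surprise underperformer

    shown, joined = defaultdict(int), defaultdict(int)
    for _ in range(100_000):                  # simulated page views
        v = random.choice(list(true_rate))    # uniform random assignment
        shown[v] += 1
        joined[v] += random.random() < true_rate[v]

    for v in true_rate:
        print(f"{v}: {joined[v] / shown[v]:.3%} of {shown[v]:,} views")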

In future they want to use demographic data to personalise nudges, Halpern says. On tax reminder notices, they had great success putting the phrase “most people pay their tax on time” at the top. But a stubborn top 5 per cent, with the biggest tax debts, saw this reminder and thought, “Well, I’m not most people”.

This whole approach raises ethical issues. Often you can’t tell people they’re being experimented on – it’s impractical, or ruins the experiment, or both.

“If we’re trying to find the best way of saying ‘don’t drop your litter’ with a sign saying ‘most people don’t drop litter’, are you supposed to have a sign before it saying ‘caution you are about to participate in a trial’?

“Where should we draw the line between effective communication and unacceptable ‘PsyOps’ or propaganda?”
