Resources

Zuckerberg Says He Regrets Not Being More Outspoken About ‘Government Pressure’

In a letter to the House Judiciary Committee, Meta CEO Mark Zuckerberg expressed regret for not being more vocal about “government pressure” to censor COVID-19-related content. He also acknowledged that Meta shouldn’t have demoted a New York Post story about President Biden’s family before the 2020 election. The Hill reports:
Zuckerberg said senior Biden administration officials “repeatedly pressured” Meta, the parent company of Facebook and Instagram, to “censor” content in 2021. “I believe the government pressure was wrong, and I regret that we were not more outspoken,” he wrote to House Judiciary Chair Jim Jordan (R-Ohio). “Like I said to our teams at the time, I feel strongly that we should not compromise our content standards due to pressure from any Administration in either direction — and we’re ready to push back if something like this happens again,” Zuckerberg added.

The Meta CEO also said the company “shouldn’t have demoted” a New York Post story about corruption allegations involving President Biden’s family ahead of the 2020 election while waiting for fact-checkers to review it. The social media company has since updated its policies and processes, including no longer demoting content in the U.S. while waiting for fact-checkers, he noted. Zuckerberg also said in Monday’s letter that he does not plan to make contributions to local jurisdictions to support election infrastructure this cycle, like he did during the 2020 election.

The contributions, which were “designed to be non-partisan,” drew accusations of being unfairly distributed between left-leaning and right-leaning areas and were labeled “Zuckerbucks” by Republicans. “Still, despite the analyses I’ve seen showing otherwise, I know that some people believe this work benefited one party over the other,” Zuckerberg said. “My goal is to be neutral and not play a role one way or another — or to even appear to be playing a role.”
House Judiciary Republicans touted the letter as a “big win for free speech,” writing on X: “Mark Zuckerberg just admitted three things: 1. Biden-Harris Admin ‘pressured’ Facebook to censor Americans. 2. Facebook censored Americans. 3. Facebook throttled the Hunter Biden laptop story.”

“Mark Zuckerberg also tells the Judiciary Committee that he won’t spend money this election cycle. That’s right, no more Zuck-bucks. Huge win for election integrity,” it added.

CIA Covert Influence Operation Against China

Donald Trump authorized the Central Intelligence Agency to launch a clandestine campaign on Chinese social media aimed at turning public opinion in China against its government, according to former U.S. officials with direct knowledge of the highly classified operation.

Three former officials told Reuters that the CIA created a small team of operatives who used bogus internet identities to spread negative narratives about Xi Jinping’s government while leaking disparaging intelligence to overseas news outlets. The effort, which began in 2019, has not been previously reported.

Although the U.S. officials declined to provide specific details of these operations, they said the disparaging narratives were based in fact despite being secretly released by intelligence operatives under false cover. The efforts within China were intended to foment paranoia among top leaders there, forcing Beijing to expend resources chasing intrusions into its tightly controlled internet, two former officials said. “We wanted them chasing ghosts,” one of these former officials said.

Reuters was unable to determine the impact of the secret operations or whether the administration of President Joe Biden has maintained the CIA program.

Covert messaging allows the United States to implant ideas in countries where censorship might prevent that information from coming to light, or in areas where audiences wouldn’t give much credence to U.S. government statements, said Loch Johnson, a University of Georgia political scientist who studies the use of such tactics.

Covert propaganda campaigns were common during the Cold War, when the CIA planted 80 to 90 articles a day in an effort to undermine the Soviet Union, Johnson said. In the 1950s, for example, the CIA created an astrological magazine in East Germany to publish foreboding predictions about communist leaders, according to declassified records.

The covert propaganda campaign against Beijing could backfire, said Heer, a former CIA analyst. China could use evidence of a CIA influence program to bolster its decades-old accusations of shadowy Western subversion, helping Beijing “proselytize” in a developing world already deeply suspicious of Washington.

Facebook Misinformation Is Bad Enough, The Metaverse Will Be Worse

The RAND Corporation is an American nonprofit think tank. And veliath (Slashdot reader #5,435) spotted its recent warning about “a plausible scenario that could soon take place in the metaverse.”
A political candidate is giving a speech to millions of people. While each viewer thinks they are seeing the same version of the candidate, in virtual reality they are actually each seeing a slightly different version. For each and every viewer, the candidate’s face has been subtly modified to resemble the viewer…. The viewers are unaware of any manipulation of the image. Yet they are strongly influenced by it: Each member of the audience is more favorably disposed to the candidate than they would have been without any digital manipulation.

This is not speculation. It has long been known that mimicry can be exploited as a powerful tool for influence. A series of experiments by Stanford researchers showed that slightly changing the features of an unfamiliar political figure to resemble each voter made people rate that politician more favorably. The experiments used pictures of study participants and of real candidates in a mock-up of an election campaign, modifying each candidate’s picture to resemble each participant. The studies found that even when 40 percent of the participant’s features were blended into the candidate’s face, the participants were entirely unaware the image had been manipulated.
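
To make the mechanism concrete, here is a minimal sketch of that blending step in Python. It is a naive pixel-weighted blend of two pre-aligned, same-size photos with hypothetical filenames; the actual studies used landmark-based morphing software, which this does not attempt.

```python
# A minimal sketch of the face-blending idea, not the researchers' method:
# real morphing aligns facial landmarks first, while this is a naive
# pixel-weighted blend of two pre-aligned, same-size photos.
# Requires: pip install pillow numpy. Filenames are hypothetical.
import numpy as np
from PIL import Image

def blend_faces(candidate_path: str, viewer_path: str, weight: float = 0.4) -> Image.Image:
    """Blend `weight` of the viewer's face into the candidate's face."""
    candidate = np.asarray(Image.open(candidate_path).convert("RGB"), dtype=np.float32)
    viewer = np.asarray(Image.open(viewer_path).convert("RGB"), dtype=np.float32)
    blended = (1.0 - weight) * candidate + weight * viewer
    return Image.fromarray(blended.astype(np.uint8))

# The studies' 40 percent figure corresponds to weight=0.4:
# blend_faces("candidate.jpg", "viewer.jpg").save("morphed.jpg")
```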

In the metaverse, it’s easy to imagine this type of mimicry at a massive scale.

At the heart of all deception is emotional manipulation. Virtual reality environments, such as Facebook’s (now Meta’s) metaverse, will enable psychological and emotional manipulation of its users at a level unimaginable in today’s media…. We are not even close to being able to defend users against the threats posed by this coming new medium…. In VR, body language and nonverbal signals such as eye gaze, gestures, or facial expressions can be used to communicate intentions and emotions. Unlike verbal language, we often produce and perceive body language subconsciously….

We must not wait until these technologies are fully realized to consider appropriate guardrails for them. We can reap the benefits of the metaverse while minimizing its potential for great harm.

They recommend developing technology that detects the application of this kind of VR manipulation.

“Society did not start paying serious attention to classical social media — meaning Facebook, Twitter, and the like — until things got completely out of hand. Let us not make the same mistake as social media blossoms into the metaverse.”

The Facebook Loophole that Lets World Leaders Deceive and Harass Their Citizens

Facebook has repeatedly allowed world leaders and politicians to use its platform to deceive the public or harass opponents despite being alerted to evidence of the wrongdoing. The Guardian has seen extensive internal documentation showing how Facebook handled more than 30 cases across 25 countries of politically manipulative behavior that was proactively detected by company staff. The investigation shows how Facebook has allowed major abuses of its platform in poor, small and non-western countries in order to prioritize addressing abuses that attract media attention or affect the US and other wealthy countries. The company acted quickly to address political manipulation affecting countries such as the US, Taiwan, South Korea and Poland, while moving slowly or not at all on cases in Afghanistan, Iraq, Mongolia, Mexico and much of Latin America.

Zuckerberg Acknowledges ‘Risk of Civil Unrest’ After US Elections, Promises Newsfeed Updates (Too Little, Too Late)

Facebook founder and CEO Mark Zuckerberg told analysts on a conference call Thursday evening that the company plans to post notices at the top of users’ news feeds on November 3rd disputing any claim by either candidate in the U.S. presidential election to have won, if the site deems the claim premature… The move, said Zuckerberg, is being made because “There is a risk of civil unrest across the country, and given this, companies like ours need to go well beyond what we’ve done before.”

The conference call with analysts followed a third-quarter earnings report Thursday afternoon in which Facebook’s results topped expectations, helped by gains in active users that also were higher than Wall Street expected.

Zuckerberg said Facebook “helped 4.4 million people register [to vote] exceeding the goal that we set for ourselves this summer.”

Facebook Will Pay Users To Log Off Before 2020 Election

Facebook is offering users money to refrain from using the site and Instagram in the weeks leading up to the bitterly contested November elections. The New York Post reports:
To assess the impact of social media on voting, the company will pay selected members up to $120 to deactivate their accounts beginning at the end of September. “Anyone who chooses to opt-in — whether it’s completing surveys or deactivating FB or IG for a period of time — will be compensated,” Facebook spokesperson Liz Bourgeois tweeted last week. “This is fairly standard for this type of academic research.” The Silicon Valley giant said it expects 200,000 to 400,000 people to take part.

“Representative, scientific samples of people in the US will be selected and invited to participate in the study. Some potential participants will see a notice in Facebook or Instagram inviting them to take part in the study,” Facebook said. “Study samples will be designed to ensure that participants mirror the diversity of the US adult population, as well as users of Facebook and Instagram.” The results of the study are expected to be released sometime next year.

A Whistleblower Says Facebook Ignored Global Political Manipulation

Facebook ignored or was slow to act on evidence that fake accounts on its platform have been undermining elections and political affairs around the world, according to an explosive memo sent by a recently fired Facebook employee and obtained by BuzzFeed News. From the report:
The 6,600-word memo, written by former Facebook data scientist Sophie Zhang, is filled with concrete examples of heads of government and political parties in Azerbaijan and Honduras using fake accounts or misrepresenting themselves to sway public opinion. In countries including India, Ukraine, Spain, Bolivia, and Ecuador she found evidence of coordinated campaigns of varying sizes to boost or hinder political candidates or outcomes, though she did not always conclude who was behind them. “In the three years I’ve spent at Facebook, I’ve found multiple blatant attempts by foreign national governments to abuse our platform on vast scales to mislead their own citizenry, and caused international news on multiple occasions,” wrote Zhang, who declined to talk to BuzzFeed News. Her LinkedIn profile said she “worked as the data scientist for the Facebook Site Integrity fake engagement team” and dealt with “bots influencing elections and the like.”

“I have personally made decisions that affected national presidents without oversight, and taken action to enforce against so many prominent politicians globally that I’ve lost count,” she wrote. The memo is a damning account of Facebook’s failures. It’s the story of Facebook abdicating responsibility for malign activities on its platform that could affect the political fate of nations outside the United States or Western Europe. It’s also the story of a junior employee wielding extraordinary moderation powers that affected millions of people without any real institutional support, and the personal torment that followed.

Long Before Cambridge Analytica, Simulmatics Linked Data and Politics

NPR reporter Shannon Bond reports on a little-known — and now nearly entirely forgotten — company called Simulmatics, whose technology used vast amounts of data to profile voters and ultimately helped John F. Kennedy win the 1960 election. From the report:
The […] company was called Simulmatics, the subject of Harvard historian and New Yorker writer Jill Lepore’s timely new book, If Then: How the Simulmatics Corporation Invented the Future. Before Cambridge Analytica, before Facebook, before the Internet, there was Simulmatics’ “People Machine,” in Lepore’s telling: “A computer program designed to predict and manipulate human behavior, all sorts of human behavior, from buying a dishwasher to countering an insurgency to casting a vote.”

Lepore unearths Simulmatics’ story and makes the argument that, amid a broader proliferation of behavioral science research across academia and government in the 1960s, the company paved the way for our 21st-century obsession with data and prediction. Simulmatics, she argues, is “a missing link in the history of technology,” the antecedent to Facebook, Google and Amazon and to algorithms that attempt to forecast who will commit crimes or get good grades. “It lurks behind the screen of every device,” she writes.

If Then presents Simulmatics as both ahead of its time and, more often than not, overpromising and under-delivering. The company was the brainchild of Ed Greenfield, an advertising executive straight out of Mad Men, who believed computers could help Democrats recapture the White House. He wanted to create a model of the voting population that could tell you how voters would respond to whatever a candidate did or said. The name Simulmatics was a contraction of “simulation” and “automation.” As Greenfield explained it to investors, Lepore writes: “The Company proposes to engage principally in estimating probable human behavior by the use of computer technology.” The People Machine was originally built to analyze huge amounts of data ahead of the 1960 election, in what Lepore describes as, at the time, “the largest political science research project in American history.”

From Climate Change to the Dangers of Smoking: How Powerful Interests Made Us Doubt Everything

BBC News reports:
In 1991, the trade body that represents electrical companies in the U.S., the Edison Electric Institute, created a campaign called the Information Council for the Environment, which aimed to “Reposition global warming as theory (not fact)”. Some details of the campaign were leaked to the New York Times. “They ran advertising campaigns designed to undermine public support, cherry picking the data to say, ‘Well if the world is warming up, why is Kentucky getting colder?’ They asked rhetorical questions designed to create confusion, to create doubt,” argued Naomi Oreskes, professor of the history of science at Harvard University and co-author of Merchants of Doubt. But back in the 1990s there were many campaigns like this…

Most of the organisations opposing or denying climate change science were right-wing think tanks, which tended to be passionately anti-regulation. These groups made convenient allies for the oil industry, as they would argue against action on climate change on ideological grounds. Jerry Taylor spent 23 years with the Cato Institute — one of those right-wing think tanks — latterly as vice president. Before he left in 2014, he would regularly appear on TV and radio, insisting that the science of climate change was uncertain and there was no need to act.

Now, he realises his arguments were based on a misinterpretation of the science, and he regrets the impact he’s had on the debate.

Harvard historian Naomi Oreskes discovered leading climate-change skeptics had also been prominent skeptics on the dangers of cigarette smoking. “That was a Eureka moment,” Oreskes tells BBC News. “We realised this was not a scientific debate.”

Decades before the energy industry tried to undermine the case for climate change, tobacco companies had used the same techniques to challenge the emerging links between smoking and lung cancer in the 1950s… As a later document by tobacco company Brown and Williamson summarised the approach: “Doubt is our product, since it is the best means of competing with the ‘body of fact’ that exists in the minds of the general public.” Naomi Oreskes says this understanding of the power of doubt is vital. “They realise they can’t win this battle by making a false claim that sooner or later would be exposed. But if they can create doubt, that would be sufficient — because if people are confused about the issue, there’s a good chance they’ll just keep smoking….”

Academics like David Michaels, author of The Triumph of Doubt, fear the use of uncertainty in the past to confuse the public and undermine science has contributed to a dangerous erosion of trust in facts and experts across the globe today, far beyond climate science or the dangers of tobacco. He cites public attitudes to modern issues like the safety of 5G, vaccinations — and coronavirus.

“By cynically manipulating and distorting scientific evidence, the manufacturers of doubt have seeded in much of the public a cynicism about science, making it far more difficult to convince people that science provides useful — in some cases, vitally important — information.”

Google Search and Dark Patterns

Previously, the search engine had marked paid results with the word “Ad” in a green box, tucked beneath the headline next to a matching green display URL. Now, all of a sudden, the “Ad” and the URL shifted above the headline, and both were rendered in discreet black; the box disappeared. The organic search results underwent a similar makeover, only with a new favicon next to the URL instead of the word “Ad.” The result was a general smoothing: Ads looked like not-ads. Not-ads looked like ads. This was not Google’s first time fiddling with the search results interface. In fact, it had done so quite regularly over the last 13 years, as handily laid out in a timeline from the news site Search Engine Land. Each iteration whittled away the distinction between paid and unpaid content that much more. Most changes went relatively unnoticed, internet residents accepting the creep like the apocryphal frog in a slowly boiling pot.

But in January, amid rising antitrust drumbeats and general exhaustion with Big Tech, people noticed. Interface designers, marketers, and Google users alike decried the change, saying it made paid results practically indistinguishable from those that Google’s search algorithm served up organically. The phrase that came up most often: “dark pattern,” a blanket term coined by UX specialist Harry Brignull to describe manipulative design elements that benefit companies over their users. That a small design tweak could inspire so much backlash speaks to the profound influence Google and other ubiquitous platforms have — and the responsibility that status confers on them. “Google and Facebook shape realities,” says Kat Zhou, a product designer who has created a framework and toolkit to help promote ethical design. “Students and professors turn to Google for their research. Folks turn to Facebook for political news. Communities turn to Google for Covid-19 updates. In some sense, Google and Facebook have become arbiters of the truth. That’s particularly scary when you factor in their business models, which often incentivize blurring the line between news and advertisements.”

Google’s not the only search engine to blur this line. If anything, Bing is even more opaque, sneaking the “Ad” disclosure under the header, with only a faint outline to draw attention. […] But Google has around 92 percent of global search market share. It effectively is online search. Dark patterns are all too common online in general, and January wasn’t the first time people accused Google of deploying them. In June of 2018, a blistering report from the Norwegian Consumer Council found that Google and Facebook both used specific interface choices to strip away user privacy at almost every turn. The study details how both platforms implemented the least privacy-friendly options by default, consistently “nudged” users toward giving away more of their data, and more. It paints a portrait of a system designed to befuddle users into complacency. […] That confusion reached its apex a few months later, when an Associated Press investigation found that disabling Location History on your smartphone did not, in fact, stop Google from collecting your location in all instances.

Chinese Newspaper Touts Videogame Where Players ‘Hunt Down Traitors’ in Hong Kong

An online game calling on players to hunt down traitors who seek to separate Hong Kong from China and fuel street violence has reportedly begun to attract players across Chinese mainland social media platforms. The game, “fight the traitors together,” is set against the backdrop of the social unrest that has persisted in Hong Kong. The script asks the player to find eight secessionists hidden in the crowd participating in Hong Kong protests.

Players can knock them down with slaps or rotten eggs until they are captured. Online gamers claim the game allows them to vent their anger at the separatist behavior of secessionists during the recent Hong Kong riots. The eight traitors in the game, caricatured based on real people, include Jimmy Lai Chee-ying, Martin Lee Chu-ming and Joshua Wong Chi-fung, prominent opposition figures who have played a major role in inciting unrest in Hong Kong. There are also traitor figures in ancient China…

In the game, amid a crowd of black-clad rioters wearing yellow hats and face masks, Anson Chan Fang On-sang, another leading opposition figure, carries a bag with a U.S. flag, clutches a stack of U.S. dollars and holds a loudspeaker to incite violence in the streets.

Study of Over 11,000 Online Stores Finds ‘Dark Patterns’ on 1,254 Sites

A large-scale academic study that analyzed more than 53,000 product pages on more than 11,000 online stores found widespread use of user interface “dark patterns” — practices meant to mislead customers into making purchases based on false or misleading information.

The study — presented last week at the ACM CSCW 2019 conference — found 1,818 instances of dark patterns present on 1,254 of the ~11K shopping websites (~11.1%) researchers scanned. “Shopping websites that were more popular, according to Alexa rankings, were more likely to feature dark patterns,” researchers said. But while the vast majority of UI dark patterns were meant to trick users into subscribing to newsletters or allowing broad data collection, some dark patterns were downright foul, trying to mislead users into making additional purchases, either by sneaking products into shopping carts or tricking users into believing products were about to sell out. Of these, the research team found 234 instances, deployed across 183 websites.
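
As a rough illustration of what hunting for these patterns can look like, the toy Python sketch below flags a few common scarcity and urgency cues in a product page's HTML. The cue list and URL are invented; the study itself used a much more rigorous crawling and clustering pipeline.

```python
# A toy sketch of scanning a product page for common dark-pattern cues.
# This is not the study's methodology; the researchers used automated
# crawling plus clustering of page segments. Requires: pip install requests.
import re
import requests

DARK_PATTERN_CUES = {
    "low-stock message": re.compile(r"only\s+\d+\s+left", re.I),
    "countdown timer": re.compile(r"(offer|sale|deal)\s+ends\s+in", re.I),
    "high-demand message": re.compile(r"\d+\s+(?:people|others)\s+are\s+(?:viewing|looking)", re.I),
}

def flag_dark_patterns(url: str) -> list[str]:
    """Return names of dark-pattern cues whose text appears in the page."""
    html = requests.get(url, timeout=10).text
    return [name for name, pattern in DARK_PATTERN_CUES.items() if pattern.search(html)]

# Example (hypothetical URL):
# print(flag_dark_patterns("https://shop.example/product/123"))
```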

Leaked Documents Show Facebook Leveraged User Data To Help Friends and Fight Rivals

A cache of leaked Facebook documents shows how the company’s CEO, Mark Zuckerberg, oversaw plans to consolidate the social network’s power and control competitors by treating its users’ data as a bargaining chip. This trove comprises approximately 7,000 pages in total, of which about 4,000 are internal Facebook communications such as emails, web chats, notes, presentations and spreadsheets, primarily from 2011 to 2015. About 1,200 pages are marked as “highly confidential.” Taken together, they show how Zuckerberg, along with his board and management team, found ways to tap Facebook users’ data — including information about friends, relationships and photos — as leverage over the companies it partnered with. In some cases, Facebook would reward partners by giving them preferential access to certain types of user data while denying the same access to rival companies.

For example, Facebook gave Amazon special access to user data because it was spending money on Facebook advertising. In another case the messaging app MessageMe was cut off from access to data because it had grown too popular and could compete with Facebook. All the while, Facebook planned to publicly frame these moves as a way to protect user privacy, the documents show. State and federal authorities are now closely scrutinizing Facebook’s business practices. In October, New York Attorney General Letitia James announced that 47 attorneys general from states and U.S. territories plan to take part in a New York-led antitrust probe into Facebook. Over the summer, the House Judiciary Committee held hearings over antitrust concerns in Silicon Valley while the Federal Trade Commission also continues to examine the firm’s practices.

Mozilla Hits Google, Facebook For ‘Microtargeting’ Political Ads

Microtargeting, a method which uses consumer data and demographics to narrowly segment audiences, is used by political campaigns to specialize ads for different voting groups. The practice’s critics include Federal Election Commission Chairwoman Ellen Weintraub, who wrote in a Washington Post op-ed that microtargeting makes it “easy to single out susceptible groups and direct political misinformation to them with little accountability, because the public at large never sees the ad.” Mozilla’s call follows reports that Facebook has considered restricting politicians’ access to microtargeting.
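
To make the mechanics concrete, here is an illustrative Python sketch of what narrow segmentation looks like in practice. The data file, column names, and model score are all invented for illustration; this does not depict any platform's actual tooling.

```python
# An illustrative sketch of microtargeting mechanics, not any platform's
# real system. The CSV, column names, and model score are invented.
import pandas as pd

audience = pd.read_csv("audience_file.csv")  # hypothetical consumer-data file

# Narrow segment: suburban women aged 35-44 whom a (hypothetical)
# issue model scores as persuadable on healthcare.
segment = audience[
    (audience["gender"] == "F")
    & audience["age"].between(35, 44)
    & (audience["geo_type"] == "suburban")
    & (audience["healthcare_persuadable_score"] > 0.7)
].assign(ad_variant="healthcare_v2")  # this slice gets its own message

# The public at large never sees "healthcare_v2"; only this segment does.
print(len(segment), "people targeted")
```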

I worked on political ads at Facebook. They profit by manipulating us.

Yael Eisenstat, a visiting fellow at Cornell Tech in the Digital Life Initiative and a former elections integrity head at Facebook, CIA officer, and White House adviser, writes for the Washington Post:

As the company continues to struggle with how to handle political content and as another presidential election approaches, it’s clear that tinkering around the margins of advertising policies won’t fix the most serious issues. The real problem is that Facebook profits partly by amplifying lies and selling dangerous targeting tools that allow political operatives to engage in a new level of information warfare. Its business model exploits our data to let advertisers custom-target people, show us each a different version of the truth and manipulate us with hyper-customized ads — ads that, as of two weeks ago, can contain blatantly false and debunked information if they’re run by a political campaign. As long as Facebook prioritizes profit over healthy discourse, they can’t avoid damaging democracies.

Early in my time there, I dug into the question of misinformation in political advertising. Posting in a “tribe” (Facebook’s internal collaboration platform), I asked our teams working on political advertising whether we should incorporate the same tools for political ads that other integrity teams at Facebook were developing to address misinformation in pages and organic posts. It was unclear to me why the company was applying different, siloed policies and tools across the platform. Most users do not differentiate organic content from ads — as I clearly saw on a trip to India, where we were testing our ads-integrity products — so why were we expecting users to understand that we applied different standards to different forms of content that all just appear in their news feeds?

The fact that we were taking money for political ads and allowing campaigns and other political organizations to target users based on the vast amounts of data we had gathered meant political ads should have an even higher bar for integrity than what people were posting in organic content. We verified advertisers to run political ads, giving them a check mark and a “paid for by” label, and I questioned if that gave the false impression that we were vouching for the validity of the content, boosting its perceived credibility even though we weren’t checking any facts or ensuring that ads weren’t spreading false information. Most of my colleagues agreed. People wanted to get this right. But above me, there was no appetite for my pushing, and I was accused of “creating confusion.”

Would You Trust Amazon To Run Free and Fair Elections?

While it does not handle voting on election day, AWS — along with a broad network of partners — now runs state and county election websites, stores voter registration rolls and ballot data, facilitates overseas voting by military personnel and helps provide live election-night results, according to company documents and interviews… Amazon pitches itself as a low-cost provider of secure election technology at a time when local officials and political campaigns are under intense pressure to prevent a repeat of the 2016 presidential election, which saw cyber-attacks on voting systems and election infrastructure.

Most security experts Reuters spoke to said that while Amazon’s cloud is likely much harder to hack than systems it is replacing, putting data from many jurisdictions on a single system raises the prospect that a single major breach could prove damaging. “It makes Amazon a bigger target” for hackers, “and also increases the challenge of dealing with an insider attack,” said Chris Vickery, director of cyber risk research at cybersecurity startup Upguard. A recent hack into Capital One Financial Corp’s data stored on Amazon’s cloud service was perpetrated by a former Amazon employee. The breach affected more than 100 million customers, underscoring how rogue employees or untrained workers can create security risks even if the underlying systems are secure…

Vickery uncovered at least three instances, all previously reported, in which voter data on Amazon’s cloud servers was exposed to the internet. For example, in 2017, he found a Republican contractor’s database covering nearly every registered American voter hosted on AWS and exposed to the internet for 12 days. In 2016, he found that Mexico’s entire voter database had been leaked on AWS servers. Amazon said the breaches were caused by customer errors, adding that while AWS secures the cloud infrastructure, customers are responsible for the security of what goes in the cloud.
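
The "customer errors" Amazon refers to are typically storage buckets left readable by the world. As a small, hedged illustration, the boto3 sketch below checks whether an S3 bucket's ACL grants access to a public group; the bucket name is hypothetical, and a real audit would also inspect bucket policies and public-access-block settings.

```python
# A minimal sketch of checking one common misconfiguration: an S3 bucket
# ACL that grants access to everyone. A real audit would also inspect
# bucket policies and the account's public-access-block settings.
# Requires: pip install boto3, plus credentials allowing s3:GetBucketAcl.
import boto3

PUBLIC_GRANTEE_URIS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def bucket_acl_is_public(bucket_name: str) -> bool:
    """Return True if the bucket's ACL grants access to a public group."""
    acl = boto3.client("s3").get_bucket_acl(Bucket=bucket_name)
    return any(
        grant.get("Grantee", {}).get("URI") in PUBLIC_GRANTEE_URIS
        for grant in acl["Grants"]
    )

# print(bucket_acl_is_public("example-voter-rolls"))  # hypothetical bucket
```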

Cambridge Analytica Whistleblower: US Heading In ‘Same Direction As China’ With Online Privacy

“The United States is walking in the same direction as China, we’re just allowing private companies to monetize left, right and center,” Cambridge Analytica whistleblower Christopher Wylie told CNBC on Wednesday. “Just because it’s not the state doesn’t mean that there isn’t harmful impacts that could come if you have one or two large companies monitoring or tracking everything you do,” he said. CNBC reports:

Wylie, whose memoir came out this week, has become outspoken about the influence of social media companies due to the large amounts of data they collect. In March 2018, he exposed the Cambridge Analytica scandal that brought down his former employer and, 15 months later, resulted in the Federal Trade Commission fining Facebook $5 billion for mishandling user data. While Cambridge Analytica has since shut down, Wylie said the tactics it used could be deployed elsewhere, and that is why data privacy regulation needs to be dramatically enhanced.

“Even if the company has dissolved, the capabilities of the company haven’t,” he said. “My real concern is what happens if China becomes the next Cambridge Analytica, what happens if North Korea becomes the next Cambridge Analytica?” Wylie also said he believes that social media companies should, at a minimum, face regulation similar to water utilities or electrical companies — “certain industries that have become so important because of their vital importance to business and people’s lives and the nature of their scale.” In those cases, “we put in place rules that put consumers first,” he added. “You can still make a profit. You can still make money. But you have to consider the rights and safety of people.”

Twitter Executive Is Also A British Army ‘Psyops’ Soldier

“The senior Twitter executive with editorial responsibility for the Middle East is also a part-time officer in the British Army’s psychological warfare unit,” reports Middle East Eye:
The 77th Brigade uses social media platforms such as Twitter, Instagram and Facebook, as well as podcasts, data analysis and audience research to wage what the head of the UK military, General Nick Carter, describes as “information warfare”. Carter says the 77th Brigade is giving the British military “the capability to compete in the war of narratives at the tactical level”; to shape perceptions of conflict. Some soldiers who have served with the unit say they have been engaged in operations intended to change the behaviour of target audiences.

What exactly the executive, Gordon MacMillan, is doing with the unit is difficult to determine, however: he has declined to answer any questions about his role, as have Twitter and the UK’s Ministry of Defence (MoD).

Responding to the article, the British army told Newsweek their brigade had no relationship with Twitter, “other than using Twitter as one of many social media platforms for engagement and communication.” And Twitter reminded Newsweek that “We proactively publish all tweets and accounts relating to state-backed foreign information operations on the service — regardless of the source. We built this industry-leading archive to promote better public understanding of these threats.”
Despite the assertions of Twitter and the British military, academics and researchers have said the division between the two is not so clear. David Miller — a professor of political sociology in the School for Policy Studies at England’s University of Bristol, who studies propaganda and public relations efforts concerning the British government — is one such academic. He told Newsweek he believes a link is there, adding that it was a “threat to our democracy.”

“I would say I know a good amount about army propaganda and ‘psyops’ operations, as they’re called, but what is interesting is how little information we have about the 77th Brigade,” he said. “I suppose it means that all their work is covert, but what I would like to know is: what exactly are they doing? Are they just tracking accounts or are they trying to influence people’s views? What we do know is that their account itself is hidden by Twitter, so we assume they are trying to influence people under the operatives’ own names. And because we know so little about exactly what they’re doing, we have to look elsewhere for clues.”

The professor also argues that Twitter is deceiving us by “not acting as transparently as it could. If they are working with army personnel in this way, it is extremely damaging to our democracy. Given Twitter’s closure of accounts alleged to be used by foreign governments, it’s a very hypocritical stance of Twitter to take.”

Researchers Easily Breached Voting Machines For the 2020 Election

The voting machines that the U.S. will use in the 2020 election are still vulnerable to hacks. A group of ethical hackers tested a bunch of those voting machines and election systems (most of which they bought on eBay). They were able to crack into every machine, The Washington Post reports. Their tests took place this summer at the Def Con cybersecurity conference, but the group visited Washington to share their findings yesterday. A number of flaws allowed the hackers to access the machines, including weak default passwords and shoddy encryption. The group says the machines could be hacked by anyone with access to them, and if poll workers make mistakes or take shortcuts, the machines could be infiltrated by remote hackers.

New Deepfake Algorithm Allows You To Text-Edit the Words of a Speaker In a Video

It is now possible to take a talking-head style video, and add, delete or edit the speaker’s words as simply as you’d edit text in a word processor. A new deepfake algorithm can process the audio and video into a new file in which the speaker says more or less whatever you want them to. New Atlas reports:

It’s the work of a collaborative team from Stanford University, Max Planck Institute for Informatics, Princeton University and Adobe Research, who say that in a perfect world the technology would be used to cut down on expensive re-shoots when an actor gets something wrong, or a script needs to be changed. In order to learn the face movements of a speaker, the algorithm requires about 40 minutes of training video, and a transcript of what’s being said, so it’s not something that can be thrown onto a short video snippet and run if you want good results. That 40 minutes of video gives the algorithm the chance to work out exactly what face shapes the subject is making for each phonetic syllable in the original script.

From there, once you edit the script, the algorithm can then create a 3D model of the face making the new shapes required. And from there, a machine learning technique called Neural Rendering can paint the 3D model over with photo-realistic textures to make it look basically indistinguishable from the real thing. Other software such as VoCo can be used if you wish to generate the speaker’s audio as well as video, and it takes the same approach, by breaking down a heap of training audio into phonemes and then using that dataset to generate new words in a familiar voice.
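
The phoneme step at the heart of this pipeline is easy to demonstrate on its own. The sketch below covers only the text side, using the pronouncing library (a wrapper around the CMU Pronouncing Dictionary); it is not the researchers' code, and the downstream stages (3D face modeling, neural rendering) are beyond a short example.

```python
# Decomposing an edited script line into phonemes, the unit the deepfake
# pipeline above keys its face shapes on. Text side only; not the
# researchers' code. Requires: pip install pronouncing.
import pronouncing

def phoneme_sequence(script_line: str) -> list[str]:
    """Flatten a line of text into CMUdict phonemes, word by word."""
    phones: list[str] = []
    for word in script_line.lower().split():
        options = pronouncing.phones_for_word(word.strip(".,?!"))
        if options:  # take the first listed pronunciation
            phones.extend(options[0].split())
    return phones

print(phoneme_sequence("take the money"))
# -> ['T', 'EY1', 'K', 'DH', 'AH0', 'M', 'AH1', 'N', 'IY0']
```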
