Zuckerberg Says He Regrets Not Being More Outspoken About ‘Government Pressure’

In a letter to the House Judiciary Committee, Meta CEO Mark Zuckerberg expressed regret for not being more vocal about “government pressure” to censor COVID-19-related content. He also acknowledged that Meta shouldn’t have demoted a New York Post story about President Biden’s family before the 2020 election. The Hill reports:
Zuckerberg said senior Biden administration officials “repeatedly pressured” Meta, the parent company of Facebook and Instagram, to “censor” content in 2021. “I believe the government pressure was wrong, and I regret that we were not more outspoken,” he wrote to House Judiciary Chair Jim Jordan (R-Ohio). “Like I said to our teams at the time, I feel strongly that we should not compromise our content standards due to pressure from any Administration in either direction — and we’re ready to push back if something like this happens again,” Zuckerberg added.

The Meta CEO also said the company “shouldn’t have demoted” a New York Post story about corruption allegations involving President Biden’s family ahead of the 2020 election while waiting for fact-checkers to review it. The social media company has since updated its policies and processes, including no longer demoting content in the U.S. while waiting for fact-checkers, he noted. Zuckerberg also said in Monday’s letter that he does not plan to make contributions to local jurisdictions to support election infrastructure this cycle, like he did during the 2020 election.

The contributions, which were “designed to be non-partisan,” were labeled “Zuckerbucks” by Republicans, who accused them of being unfairly distributed between left-leaning and right-leaning areas. “Still, despite the analyses I’ve seen showing otherwise, I know that some people believe this work benefited one party over the other,” Zuckerberg said. “My goal is to be neutral and not play a role one way or another — or to even appear to be playing a role.”
House Judiciary Republicans touted the letter as a “big win for free speech,” writing on X: “Mark Zuckerberg just admitted three things: 1. Biden-Harris Admin ‘pressured’ Facebook to censor Americans. 2. Facebook censored Americans. 3. Facebook throttled the Hunter Biden laptop story.”

“Mark Zuckerberg also tells the Judiciary Committee that he won’t spend money this election cycle. That’s right, no more Zuck-bucks. Huge win for election integrity,” it added.


CIA Covert Influence Operation Against China

Donald Trump authorized the Central Intelligence Agency to launch a clandestine campaign on Chinese social media aimed at turning public opinion in China against its government, according to former U.S. officials with direct knowledge of the highly classified operation.

Three former officials told Reuters that the CIA created a small team of operatives who used bogus internet identities to spread negative narratives about Xi Jinping’s government while leaking disparaging intelligence to overseas news outlets. The effort, which began in 2019, has not been previously reported.

Although the U.S. officials declined to provide specific details of these operations, they said the disparaging narratives were based in fact despite being secretly released by intelligence operatives under false cover. The efforts within China were intended to foment paranoia among top leaders there, forcing its government to expend resources chasing intrusions into Beijing’s tightly controlled internet, two former officials said. “We wanted them chasing ghosts,” one of these former officials said.

Reuters was unable to determine the impact of the secret operations or whether the administration of President Joe Biden has maintained the CIA program.

Covert messaging allows the United States to implant ideas in countries where censorship might prevent that information from coming to light, or in areas where audiences wouldn’t give much credence to U.S. government statements, said Loch Johnson, a University of Georgia political scientist who studies the use of such tactics.

Covert propaganda campaigns were common during the Cold War, when the CIA planted 80 to 90 articles a day in an effort to undermine the Soviet Union, Johnson said. In the 1950s, for example, the CIA created an astrological magazine in East Germany to publish foreboding predictions about communist leaders, according to declassified records.

The covert propaganda campaign against Beijing could backfire, said Heer, a former CIA analyst. China could use evidence of a CIA influence program to bolster its decades-old accusations of shadowy Western subversion, helping Beijing “proselytize” in a developing world already deeply suspicious of Washington.


Why Teens Are Falling for TikTok Conspiracy Theories

TikTok skews young (reportedly one-third of its daily users in the US are 14 or younger), and celebrity gossip has long been the lingua franca of social media for people of all ages. Right-wing conspiracy groups like QAnon have been spreading made-up stories about those in power on networks like Facebook for years. Now those ideas have jumped to TikTok, where they’re being metabolized by much younger consumers. Those things all scan. What doesn’t, however, is why teens believe them.

The short answer? TikTok is full of crazy ideas—conspiracies are no different. They’ve been normalized by the platform where many young people spend most of their time. “Many of these conspiracy sites and stories are entertaining. They are social gathering spots. They are exciting,” says Nancy Rosenblum, Senator Joseph S. Clark Professor of Ethics in Politics and Government at Harvard University and co-author of A Lot of People Are Saying: The New Conspiracism and the Assault on Democracy. “It’s small wonder that teenagers who ‘live on the screen’ would be drawn to the drama.”

Easy access to social media’s redistribution tools worsens this problem. With every like, share, send, and retweet, teenagers are popularizing this content worldwide. “On social media, repetition substitutes for validation,” says Russell Muirhead, a professor of democracy and politics at Dartmouth College and Rosenblum’s co-author. “Repetition is what breathes air into conspiracy theories, and social media is all about repetition. But repeating something that’s false does not make it more true! Teenagers are just as vulnerable to this as grown-ups.”

This wouldn’t be such a problem if teenagers weren’t so attached to social media. So attached, in fact, that some 54 percent of teens get the bulk of their news from it. If this sounds concerning, that’s because it is. With teenagers relying on TikTok as their sole source of information, it makes sense that my generation becomes absorbed in webs of falsity and lives as largely uninformed citizens.


Facebook Targeted In UK Legal Action Over Cambridge Analytica Scandal

Facebook is being sued for failing to protect users’ personal data in the Cambridge Analytica breach, in which the harvested Facebook data of 87 million people was used for advertising during elections. Mass legal action is being launched against Facebook for misuse of the information of almost one million users in England and Wales. Facebook said it has not received any documents regarding the claim. The group taking action, Facebook You Owe Us, follows a similar mass-action lawsuit against Google: Google You Owe Us, led by former Which? director Richard Lloyd, is pursuing a claim over another alleged mass data breach. Both groups are represented by the law firm Milberg London, and the Google case is due to be heard in the Supreme Court in April next year.

The Facebook case will argue that, by taking data without consent, the firm failed to meet its legal obligations under the Data Protection Act 1998. Alvin Carpio, the representative claimant in the case, said: “When we use Facebook, we expect that our personal data is being used responsibly, transparently, and legally. By failing to protect our personal information from abuse, we believe that Facebook broke the law. Paying less than 0.01% of your annual revenue in fines — pocket change to Facebook — is clearly a punishment that does not fit the crime. Apologizing for breaking the law is simply not enough. Facebook, you owe us honesty, responsibility and redress. We will fight to hold Facebook to account.”


Facebook and Google Serve As Vectors For Misinformation While Hobbling Local Journalism and Collecting Taxpayer Subsidies, Group Says

Facebook and Google are hollowing out local communities by serving as vectors for misinformation while hobbling local journalism and collecting taxpayer subsidies, a new paper from progressive think tank the American Economic Liberties Project charges. Both companies cite benefits their platforms offer small businesses as a key defense against critiques of their size and power. The paper, dated Aug. 30, is sure to presage further scrutiny of the impact they’ve had on local communities.

The brief, by Pat Garofalo, the group’s director of state and local policy, argues that:

- Google doesn’t do enough to protect against fraud, allowing scammers to get their own numbers and websites listed on Google to the detriment of legitimate businesses.
- Facebook, by design, boosts shoddy and sensationalist content, crowding out legitimate local news and information, all as it and Google have come to dominate the local advertising market that was long the lifeblood of community journalism.
- Both have sucked up potentially billions in local taxpayer dollars via tax breaks, as well as subsidies and discounts on utilities, granted in exchange for building data centers.

Garofalo recommends remedies including more antitrust enforcement at the federal and state levels and an end to preferential treatment by states and localities, either voluntarily or under force of law.


Google Search and Dark Patterns

Previously, the search engine had marked paid results with the word “Ad” in a green box, tucked beneath the headline next to a matching green display URL. Now, all of a sudden, the “Ad” and the URL shifted above the headline, and both were rendered in discreet black; the box disappeared. The organic search results underwent a similar makeover, only with a new favicon next to the URL instead of the word “Ad.” The result was a general smoothing: Ads looked like not-ads. Not-ads looked like ads. This was not Google’s first time fiddling with the search results interface. In fact, it had done so quite regularly over the last 13 years, as handily laid out in a timeline from the news site Search Engine Land. Each iteration whittled away the distinction between paid and unpaid content that much more. Most changes went relatively unnoticed, internet residents accepting the creep like the apocryphal frog in a slowly boiling pot.

But in January, amid rising antitrust drumbeats and general exhaustion with Big Tech, people noticed. Interface designers, marketers, and Google users alike decried the change, saying it made paid results practically indistinguishable from those that Google’s search algorithm served up organically. The phrase that came up most often: “dark pattern,” a blanket term coined by UX specialist Harry Brignull to describe manipulative design elements that benefit companies over their users. That a small design tweak could inspire so much backlash speaks to the profound influence Google and other ubiquitous platforms have — and the responsibility that status confers on them. “Google and Facebook shape realities,” says Kat Zhou, a product designer who has created a framework and toolkit to help promote ethical design. “Students and professors turn to Google for their research. Folks turn to Facebook for political news. Communities turn to Google for Covid-19 updates. In some sense, Google and Facebook have become arbiters of the truth. That’s particularly scary when you factor in their business models, which often incentivize blurring the line between news and advertisements.”

Google’s not the only search engine to blur this line. If anything, Bing is even more opaque, sneaking the “Ad” disclosure under the header, with only a faint outline to draw attention. […] But Google has around 92 percent of global search market share. It effectively is online search. Dark patterns are all too common online in general, and January wasn’t the first time people accused Google of deploying them. In June 2018, a blistering report from the Norwegian Consumer Council found that Google and Facebook both used specific interface choices to strip away user privacy at almost every turn. The study details how both platforms implemented the least privacy-friendly options by default, consistently “nudged” users toward giving away more of their data, and more. It paints a portrait of a system designed to befuddle users into complacency. […] That confusion reached its apex a few months later, when an Associated Press investigation found that disabling Location History on your smartphone did not, in fact, stop Google from collecting your location in all instances.


Facebook, Twitter Shutter Pro-Trump Network That Used AI To Create Fake People and Push Conspiracies

On Friday, Facebook and Twitter shut down a network of fake accounts that pushed pro-Trump messages all while “masquerading” as Americans with AI-generated faces as profile photos.

In a blog post, Facebook said that it connected the accounts to a US-based media company called The BL that, it claims, has ties to Epoch Media Group. In August, NBC News first reported that Epoch Media Group was pushing messages in support of President Donald Trump across social media platforms like Facebook and Twitter. Epoch has extensive connections to Falun Gong, an eccentric Chinese spiritual community that has faced significant persecution from the country’s central government. In a statement provided to The Verge, the Epoch Times denied any connection to The BL.

Facebook noted that many of the fake accounts used in the latest campaign employed false profile photos that appeared to have been generated by artificial intelligence. Those accounts would post BL content in other Facebook groups while pretending to be Americans. Pro-Trump messages were often posted “at very high frequencies” and linked to off-platform sites belonging to The BL and The Epoch Times. The accounts and pages were managed by individuals in the US and Vietnam. Facebook said that it removed 610 accounts, 89 Facebook pages, 156 groups, and 72 Instagram accounts that were connected to the organization. Around 55 million accounts followed one of these Facebook pages and 92,000 followed at least one of the Instagram accounts. The organization spent nearly $9.5 million on advertisements, according to Facebook.


The Rise of the Deepfake and the Threat to Democracy

Deepfakes posted on the internet in the past two years have alarmed many observers, who believe the technology could be used to disgrace politicians and even swing elections. Democracies appear to be gravely threatened by the speed at which disinformation can be created and spread via social media, where the incentive to share the most sensationalist content outweighs the incentive to perform the tiresome work of verification.

Last month, a digitally altered video showing Nancy Pelosi, the speaker of the US House of Representatives, appearing to slur drunkenly through a speech was widely shared on Facebook and YouTube. Trump then posted the clip on Twitter with the caption: “PELOSI STAMMERS THROUGH NEWS CONFERENCE”. The video was quickly debunked, but not before it had been viewed millions of times; the president did not delete his tweet, which at the time of writing has received nearly 98,000 likes. Facebook declined to take down the clip, qualifying its decision with the statement: “Once the video was fact-checked as false, we dramatically reduced its distribution.”

In response, a team including the artists Bill Posters and Daniel Howe two weeks ago posted a video on Instagram, in which Facebook founder Mark Zuckerberg boasts that he has “total control of billions of people’s stolen data, all their secrets, their lives, their futures”.

In May 2018, a Flemish socialist party called sp.a posted a deepfake video to its Twitter and Facebook pages showing Trump appearing to taunt Belgium for remaining in the Paris climate agreement. The video, which remains on the party’s social media, is a poor forgery: Trump’s hair is curiously soft-focus, while his mouth moves with a Muppet-like elasticity. Indeed, the video concludes with Trump saying: “We all know that climate change is fake, just like this video,” although this sentence alone is not subtitled in Flemish Dutch. (The party declined to comment, but a spokesperson previously told the site Politico that it commissioned the video to “draw attention to the necessity to act on climate change”.)

But James [founder of the YouTube channel derpfakes, which publishes deepfake videos] believes forgeries may have gone undetected. “The idea that deepfakes have already been used politically isn’t so farfetched,” he says. “It could be the case that deepfakes have already been widely used for propaganda.”


YouTube’s Algorithm Made Fake CNN Reports Go Viral

“YouTube channels posing as American news outlets racked up millions of views on false and inflammatory videos over several months this year,” reports CNN.

“All with the help of YouTube’s recommendation engine.”

Many of the accounts, which mostly used footage from CNN, but also employed some video from Fox News, exploited a YouTube feature that automatically creates channels on certain topics. Those topic channels are then automatically populated by videos related to the topic — including, in this case, blatant misinformation.

YouTube has now shut down many of the accounts.

YouTube’s own algorithms also recommended videos from the channels to American users who watched videos about U.S. politics. That the channels could achieve such virality — one channel was viewed more than two million times over one weekend in October — raises questions about YouTube’s preparedness for tackling misinformation on its platform just weeks before the Iowa caucuses and points to the continuing challenge platforms face as people try to game their systems….

Responding to the findings on Thursday, a CNN spokesperson said YouTube needs to take responsibility.

“When accounts were deleted or banned, they were able to spin up new accounts within hours,” added Plasticity, a natural language processing and AI startup that analyzed the data and identified at least 25 different accounts, which YouTube then shut down.

“The tactics they used to game the YouTube algorithm were executed perfectly. They knew what they were doing.”


Digital India: Government Hands Out Free Phones to Win Votes

In the state of Chhattisgarh, the chief minister, Raman Singh, has promised a smartphone in every home — and he is using the government-issued devices to reach voters as he campaigns in legislative elections that conclude on Tuesday.

The phones are the latest twist in digital campaigning by the B.J.P., which controls the national and state government and is deft at using tools like WhatsApp groups and Facebook posts to influence voters. The B.J.P. government in Rajasthan, which holds state elections next month, is also subsidizing phones and data plans for residents, and party leaders are considering extending the model to other states.


Police Bodycams Can Be Hacked To Doctor Footage, Install Malware

Josh Mitchell’s Defcon presentation analyzes the security of five popular brands of police bodycams (Vievu, Patrol Eyes, Fire Cam, Digital Ally, and CeeSc) and reveals that they are universally terrible. All the devices use predictable network addresses that can be used to remotely sense and identify the cameras when they switch on. None of the devices use code-signing. Some of the devices can form ad-hoc Wi-Fi networks to bridge in other devices, but they don’t authenticate these sign-ons, so you can just connect with a laptop and start raiding the network for accessible filesystems and gank or alter videos, or just drop malware on them.
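
As a concrete illustration of the first finding, here is a minimal sketch, in Python with the scapy packet library, of the kind of passive detection Mitchell describes: if a camera's wireless MAC addresses sit in a predictable vendor range, anyone nearby with a monitor-mode Wi-Fi card can spot the device the moment it switches on. The OUI prefixes and interface name below are hypothetical placeholders, not values from the talk.

    # Passive detection sketch: flag nearby devices whose Wi-Fi MAC address
    # falls in a predictable, vendor-specific range. The OUI prefixes here
    # are hypothetical placeholders, not real camera vendor assignments.
    from scapy.all import sniff
    from scapy.layers.dot11 import Dot11

    SUSPECT_OUIS = {
        "00:25:df": "ExampleCam A",  # hypothetical
        "ac:de:48": "ExampleCam B",  # hypothetical
    }

    def check_frame(pkt):
        # addr2 is the transmitter address of an 802.11 frame.
        if pkt.haslayer(Dot11) and pkt[Dot11].addr2:
            vendor = SUSPECT_OUIS.get(pkt[Dot11].addr2.lower()[:8])
            if vendor:
                print(f"Possible {vendor} body camera nearby: {pkt[Dot11].addr2}")

    # Requires root and a wireless interface already in monitor mode.
    sniff(iface="wlan0mon", prn=check_frame, store=False)

The code is beside the point; the design flaw is that nothing randomizes or conceals these identifiers, so the cameras announce themselves to anyone who cares to listen.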


Children ‘at risk of robot influence’

Forget peer pressure: future generations are more likely to be influenced by robots, a study suggests.

The research, conducted at the University of Plymouth, found that while adults were not swayed by robots, children were.

The fact that children tended to trust robots without question raised ethical issues as the machines became more pervasive, said researchers.

They called for the robotics community to build in safeguards for children.

Those taking part in the study completed a simple test, known as the Asch paradigm, which involved finding two lines that matched in length.

Known as the conformity experiment, the test has historically found that people tend to agree with their peers even if individually they have given a different answer.

In this case, the peers were robots. When children aged seven to nine were alone in the room, they scored an average of 87% on the test. But when the robots joined them, their scores dropped to 75% on average. Of the wrong answers, 74% matched those of the robots.
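
To make the arithmetic concrete, here is a toy sketch in Python of how scores like these are tallied; the trial records are invented for illustration and are not the University of Plymouth data.

    # Toy Asch-style scoring: overall accuracy, plus how often the wrong
    # answers matched the robots. The trial data are invented placeholders.
    trials = [
        # (child's answer, correct answer, robots' answer)
        ("A", "A", "A"),
        ("B", "B", "C"),
        ("C", "B", "C"),  # wrong, and matches the robots
        ("A", "A", "B"),
        ("B", "C", "B"),  # wrong, and matches the robots
    ]

    accuracy = sum(ans == correct for ans, correct, _ in trials) / len(trials)
    wrong = [(ans, robots) for ans, correct, robots in trials if ans != correct]
    conformity = sum(ans == robots for ans, robots in wrong) / len(wrong)

    print(f"accuracy: {accuracy:.0%}")                         # study: 75% with robots
    print(f"wrong answers matching robots: {conformity:.0%}")  # study: 74%

Run on real trial logs, the second figure is the one the researchers highlight: the share of errors that simply copied the robots' answer.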

“If robots can convince children (but not adults) that false information is true, the implication for the planned commercial exploitation of robots for childminding and teaching is problematic,” the researchers warned.
