Resources

Zuckerberg Acknowledges ‘Risk of Civil Unrest’ After US Elections, Promises Newsfeed Updates: Too Little, Too Late

Facebook founder and CEO Mark Zuckerberg told analysts on a conference call Thursday evening that the company plans to post notices at the top of users’ news feeds on November 3rd disputing any claim of victory by either candidate in the U.S. presidential election that the site deems premature… The move, said Zuckerberg, is being made because “There is a risk of civil unrest across the country, and given this, companies like ours need to go well beyond what we’ve done before.”

The conference call with analysts followed a third-quarter earnings report Thursday afternoon in which Facebook’s results topped expectations, helped by gains in active users that also were higher than Wall Street expected.

Zuckerberg said Facebook “helped 4.4 million people register [to vote] exceeding the goal that we set for ourselves this summer.”

deletefacebook.com

A comprehensive resource to help you delete Facebook.

The Left and the Right Speak Different Languages—Literally

A study analyzing patterns in online comments found that liberals and conservatives use different words to express similar ideas.

Researchers at Carnegie Mellon University collected more than 86.6 million comments from more than 6.5 million users on 200,000 YouTube videos, then analyzed them using an AI technique normally employed to translate between two languages.

The researchers found that people on opposing sides of the political divide often use different words to express similar ideas. For instance, the term “mask” among liberal commenters is roughly equivalent to the term “muzzle” for conservatives. Similar pairings were seen for “liberals” and “libtards” as well as “solar” and “fossil.”

“We are practically speaking different languages—that’s a worrisome thing,” says Ashiqur KhudaBukhsh, a Carnegie Mellon researcher who co-authored the study. “If ‘mask’ translates to ‘muzzle,’ you immediately know that there is a huge debate surrounding masks and freedom of speech.”

In the case of politically tinged comments, the researchers found that different words occupy a similar place in the lexicon of each community. The paper, which has been posted online but is not yet peer reviewed, looked at comments posted beneath the videos on four channels spanning left- and right-leaning US news—MSNBC, CNN, Fox News, and OANN.
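The core idea—that a word in one community's lexicon occupies the same position as a different word in the other's—can be sketched with embedding alignment, the standard machine-translation-style trick. Everything below is an illustrative assumption, not the paper's actual pipeline: the vectors are random toys, the seed dictionary is invented, and the alignment is a plain orthogonal Procrustes fit; only the word pairs come from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy embeddings standing in for vectors trained on each community's
# comments. Rows correspond word-for-word across the two vocabularies.
vocab_lib = ["mask", "solar", "liberals", "vote", "climate"]
vocab_con = ["muzzle", "fossil", "libtards", "vote", "climate"]

A = rng.normal(size=(5, 4))                # "liberal" embedding space
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))
B = A @ Q                                   # "conservative" space: same
                                            # geometry, randomly rotated

def procrustes(X, Y):
    # Orthogonal map W minimizing ||X W - Y||_F, via SVD of X^T Y.
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# Learn the map from a small seed dictionary of shared words
# (every row except the pair we want to "discover").
W = procrustes(A[1:], B[1:])

# Map "mask" into the conservative space; its nearest neighbor there
# is its "translation."
mapped = A[0] @ W
nearest = vocab_con[int(np.argmin(np.linalg.norm(B - mapped, axis=1)))]
print(nearest)  # -> muzzle
```

With real data the two spaces are only approximately rotations of each other, so the nearest neighbor is a ranked guess rather than an exact match—which is exactly why human judgment about pairs like "BLM"/"ALM" still matters.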

KhudaBukhsh says social networks might use techniques like the one his team developed to build bridges between warring communities. A network could surface comments that avoid contentious or “foreign” terms, instead showing ones that represent common ground, he suggests. “Go to any social media platform; it has become so toxic, and it’s almost like there is no known interaction” between users with different political viewpoints, he says.

But Morteza Dehghani, an associate professor at the University of Southern California who studies social media using computational methods, finds the approach problematic. He notes that the Carnegie Mellon paper considers “BLM” (Black Lives Matter) and “ALM” (all lives matter) a “translatable” pair, akin to “mask” and “muzzle.”

“BLM and ALM are not translations of each other,” he says. “One makes salient centuries of slavery, abuse, racism, discrimination, and fights for justice, while the other one tries to erase this history.”

Dehghani says it would be a mistake to use computational methods that oversimplify issues and lack nuance. “What we need is not machine translation,” he says. “What we need is perspective-taking and explanation—two things that AI algorithms are notoriously bad at.”

Long Before Cambridge Analytica, Simulmatics Linked Data and Politics

NPR reporter Shannon Bond reports on a little-known — and now nearly entirely forgotten — company called Simulmatics, which had technology that used vast amounts of data to profile voters and ultimately help John F. Kennedy win the 1960 election. From the report:
The […] company was called Simulmatics, the subject of Harvard historian and New Yorker writer Jill Lepore’s timely new book, If Then: How the Simulmatics Corporation Invented the Future. Before Cambridge Analytica, before Facebook, before the Internet, there was Simulmatics’ “People Machine,” in Lepore’s telling: “A computer program designed to predict and manipulate human behavior, all sorts of human behavior, from buying a dishwasher to countering an insurgency to casting a vote.”

Lepore unearths Simulmatics’ story and makes the argument that, amid a broader proliferation of behavioral science research across academia and government in the 1960s, the company paved the way for our 21st-century obsession with data and prediction. Simulmatics, she argues, is “a missing link in the history of technology,” the antecedent to Facebook, Google and Amazon and to algorithms that attempt to forecast who will commit crimes or get good grades. “It lurks behind the screen of every device,” she writes.

If Then presents Simulmatics as both ahead of its time and, more often than not, overpromising and under-delivering. The company was the brainchild of Ed Greenfield, an advertising executive straight out of Mad Men, who believed computers could help Democrats recapture the White House. He wanted to create a model of the voting population that could tell you how voters would respond to whatever a candidate did or said. The name Simulmatics was a contraction of “simulation” and “automation.” As Greenfield explained it to investors, Lepore writes: “The Company proposes to engage principally in estimating probable human behavior by the use of computer technology.” The People Machine was originally built to analyze huge amounts of data ahead of the 1960 election, in what Lepore describes as, at the time, “the largest political science research project in American history.”

Trolls Are Swarming Young Climate Activists Online

On the morning of August 25, 11-year-old Lilly Platt tweeted a video clip of a Brazilian Amazon tribe speaking out against deforestation. Awareness of the Amazon wildfires was already at a fever pitch, and the tweet exploded. Then, within an hour, a swarm of troll accounts started flooding her mentions with porn. Shortly after the attack, her mom, Eleanor Platt, made an online plea for help: “Dear Friends of Lilly, this is Lillys mum she is being targeted by revolting trolls who are spamming her feed with pornography. There is only so much i can do to block this. Please if you see these posts report them.” Over the course of the day, some of Lilly’s nearly 10,000 followers did just that.

Young girls like Lilly, who has been striking in her hometown of Utrecht, Netherlands, every Friday for the last year, are overwhelmingly leading a growing global movement to draw attention to the climate crisis. They spurred an estimated 4 million people across seven continents to walk out of work and school on September 20 — and they are getting attacked for it. They have faced a barrage of daily insults, seemingly coordinated attacks (like the one that targeted Lilly), creepy DMs, doxing, hacked accounts, and death threats. This is the new normal for young climate leaders online, according to BuzzFeed News interviews with nearly a dozen of the kids and their parents.

Personal attacks have always been a part of the climate denial playbook, even as fossil fuel companies secretly funded campaigns and researchers to question the scientific consensus on climate change. The most famous incident, 2009’s Climategate, involved scientists getting their emails hacked and then facing death threats. And as the politics of climate change begins to mirror the broader dark trends of global politics, weaponized social media — in the form of intimidation, memes, and disinformation — has emerged as the dominant vehicle for climate denial. But the rise of a new climate movement means there’s now a much more visible — and especially vulnerable — target: kids.

To Keep Trump From Violating Its Rules…Facebook Rewrote the Rules

After Trump’s infamous “the shooting starts” post, Facebook deputies contacted the White House “with an urgent plea to tweak the language of the post or simply delete it,” the article reveals, after which Trump himself called Mark Zuckerberg. (The article later notes that historically Facebook makes a “newsworthiness exception” for some posts which it refuses to remove, “determined on a case-by-case basis, with the most controversial calls made by Zuckerberg.”) And in the end, Facebook also decided not to delete that post — and says now that even Friday’s newly-announced policy changes still would not have disqualified the post:
The frenzied push-pull was just the latest incident in a five-year struggle by Facebook to accommodate the boundary-busting ways of Trump. The president has not changed his rhetoric since he was a candidate, but the company has continually altered its policies and its products in ways certain to outlast his presidency. Facebook has constrained its efforts against false and misleading news, adopted a policy explicitly allowing politicians to lie, and even altered its news feed algorithm to neutralize claims that it was biased against conservative publishers, according to more than a dozen former and current employees and previously unreported documents obtained by The Washington Post. One of the documents shows it began as far back as 2015…

The concessions to Trump have led to a transformation of the world’s information battlefield. They paved the way for a growing list of digitally savvy politicians to repeatedly push out misinformation and incendiary political language to billions of people. It has complicated the public understanding of major events such as the pandemic and the protest movement, as well as contributed to polarization. And as Trump grew in power, the fear of his wrath pushed Facebook into more deferential behavior toward its growing number of right-leaning users, tilting the balance of news people see on the network, according to the current and former employees…

Facebook is also facing a slow-burning crisis of morale, with more than 5,000 employees denouncing the company’s decision to leave up Trump’s post that said, “when the looting starts, the shooting starts”… The political speech carveout ended up setting the stage for how the company would handle not only Trump, but populist leaders around the world who have posted content that tests these boundaries, such as Rodrigo Duterte in the Philippines, Jair Bolsonaro in Brazil and Narendra Modi in India…

“The value of being in favor with people in power outweighs almost every other concern for Facebook,” said David Thiel, a Facebook security engineer who resigned in March after his colleagues refused to remove a post he believed constituted “dehumanizing speech” by Brazil’s president.

In Fast-Moving Pandemic, Sources of Falsehoods Spread by Text, Email, WhatsApp and TikTok

Misleading text messages claiming that President Trump was going to announce a national quarantine buzzed into cellphones across the country over the weekend, underscoring how rapidly false claims are spreading — and how often it is happening beyond the familiar misinformation vehicles of Facebook and Twitter. The false texts spread so widely that on Sunday night the White House’s National Security Council, fearing the texts were an attempt to spook the stock market as it opened Monday, decided to directly debunk the misleading claims in a Twitter post: “Text message rumors of a national #quarantine are FAKE. There is no national lockdown.” But by then the messages already had spread widely, as had similar ones both in the United States and Europe in recent days. Text messages, encrypted communication apps such as WhatsApp and some social media platforms have carried similarly alarming misinformation, much of it with the apparent goal of spurring people to overrun stores to buy basic items ahead of a new wave of government restrictions.

The one claiming that Trump was going to impose a national quarantine included the advice: “Stock up on whatever you guys need to make sure you have a two week supply of everything. Please forward to your network.” In fact, authorities have warned against aggressive buying that could disrupt supply chains and fuel panic. Trump addressed the misleading text messages at an afternoon news conference Monday, saying, “It could be that you have some foreign groups that are playing games.” On the possibility of a national quarantine, Trump said: “We haven’t determined to do that at all. … Hopefully we won’t have to.”

Surveillance Company Banjo Used a Secret Company and Fake Apps To Scrape Social Media

Banjo, an artificial intelligence firm that works with police, used a shadow company to create an array of Android and iOS apps that looked innocuous but were specifically designed to secretly scrape social media. The news signifies an abuse of data by a government contractor, with Banjo going far beyond what companies that scrape social networks usually do. Banjo created a secret company named Pink Unicorn Labs, according to three former Banjo employees, with two of them adding that the company developed the apps. This was done to avoid detection by social networks, two of the former employees said.

Three of the apps created by Pink Unicorn Labs were called “One Direction Fan App,” “EDM Fan App,” and “Formula Racing App.” Motherboard found these three apps on archive sites and downloaded and analyzed them, as did an independent expert. The apps — which appear to have been originally compiled in 2015 and were on the Play Store until 2016 according to Google — outwardly had no connection to Banjo, but an analysis of their code indicates connections to the company. This aspect of Banjo’s operation has some similarities with the Cambridge Analytica scandal, with multiple sources comparing the two incidents. […] The company has not publicly explained how it specifically scrapes social media apps. Motherboard found the apps developed by Pink Unicorn Labs included code mentioning signing into Facebook, Twitter, Instagram, Russian social media app VK, FourSquare, Google Plus, and Chinese social network Sina Weibo.
The apps could have scraped social media “by sending the saved login token to a server for Banjo to use later, or by using the app itself to scrape information,” reports Motherboard, noting that it’s not entirely clear which method Banjo used. “Motherboard found that the apps when opened made web requests to the domain ‘pulapi.com,’ likely referring to Pink Unicorn Labs, but the site that would provide a response to the app is currently down.”

Last weekend, Motherboard reported that Banjo signed a $20.7 million contract with Utah in 2019 that granted the company access to the state’s traffic, CCTV, and public safety cameras. “Banjo promises to combine that input with a range of other data such as satellites and social media posts to create a system that it claims alerts law enforcement of crimes or events in real-time.”

Mozilla Hits Google, Facebook For ‘Microtargeting’ Political Ads

Microtargeting, a method which uses consumer data and demographics to narrowly segment audiences, is used by political campaigns to specialize ads for different voting groups. The practice’s critics include Federal Election Commission Chairwoman Ellen Weintraub, who wrote in a Washington Post op-ed that microtargeting makes it “easy to single out susceptible groups and direct political misinformation to them with little accountability, because the public at large never sees the ad.” Mozilla’s call follows reports that Facebook has considered restricting politicians’ access to microtargeting.

Spain and GitHub Are Blocking an App That Helped Protesters Organize

For the last month, hundreds of thousands of people have joined demonstrations in Spain to voice their objection to the jailing of Catalan separatist leaders and support Catalonian independence. As with almost all modern activist and public protest movements, activists are using social media and apps to communicate with and organize public actions. But this week, in a move that puts the Spanish government on par with censorship-heavy places like China and Russia, the country requested that GitHub block access to one of those apps by revoking local access to its GitHub repository. GitHub, which is owned by Microsoft, complied with the order.

According to Spanish news outlet El Confidencial, last week the government ordered takedowns of websites and an app made by Tsunami Democratic, an activist group organizing protests in the region. To try to keep access to the app download alive, Tsunami Democratic moved the .apk file to GitHub. But the government shut that down, too, blocking the site in Spain. Motherboard tested the download using a VPN, and the GitHub repo was blocked from Madrid.

When Myspace Was King, Employees Abused a Tool Called ‘Overlord’ to Spy on Users

During the social network’s heyday, multiple Myspace employees abused an internal company tool to spy on users, in some cases including ex-partners, Motherboard reported on Monday.

Named ‘Overlord,’ the tool allowed employees to see users’ passwords and their messages, two former employees said. While the tool was originally designed to help moderate the platform and allow Myspace to comply with law enforcement requests, multiple sources said the tool was used for illegitimate purposes by employees who accessed Myspace user data without authorization to do so. “It was basically an entire backdoor to the Myspace platform,” one of the former employees said of Overlord. (Motherboard granted five former Myspace employees anonymity to discuss internal Myspace incidents.) The abuse happened about a decade ago, closer to the height of the platform’s popularity, according to multiple sources. In fall 2006, the platform signed up its 100 millionth user. Around this time, Myspace was the second most popular website in the U.S., and ranked higher than Google search.

Pentagon Wants to Predict Anti-Trump Protests Using Social Media Surveillance

A series of research projects, patent filings, and policy changes indicate that the Pentagon wants to use social media surveillance to quell domestic insurrection and rebellion.

The social media posts of American citizens who don’t like President Donald Trump are the focus of the latest US military-funded research. The research, funded by the US Army and co-authored by a researcher based at the West Point Military Academy, is part of a wider effort by the Trump administration to consolidate the US military’s role and influence on domestic intelligence.

The vast scale of this effort is reflected in a number of government social media surveillance patents granted this year, which relate to a spy program that the Trump administration outsourced to a private company last year. Experts interviewed by Motherboard say that the Pentagon’s new technology research may have played a role in amendments this April to the Joint Chiefs of Staff homeland defense doctrine, which widen the Pentagon’s role in providing intelligence for domestic “emergencies,” including an “insurrection.”

It’s no secret that the Pentagon has funded Big Data research into how social media surveillance can help predict large-scale population behaviours, specifically the outbreak of conflict, terrorism, and civil unrest.

Much of this research focuses on foreign theatres like the Middle East and North Africa — where the 2011 Arab Spring kicked off an arc of protest that swept across the region and toppled governments.

Since then, the Pentagon has spent millions of dollars finding patterns in posts across platforms like Facebook, Twitter, Instagram, Tumblr, and beyond to enable the prediction of major events.

But the Pentagon isn’t just interested in anticipating surprises abroad. The research also appears to be intended for use in the US homeland.

Datasets for the research were collected using the Apollo Social Sensing Tool, a real-time event tracking software that collects and analyses millions of social media posts.

The tool was originally developed under the Obama administration back in 2011 by the US Army Research Laboratory and US Defense Threat Reduction Agency, in partnership with Rensselaer Polytechnic Institute, the University of Illinois, IBM, and Caterva (a social marketing company that in 2013 was folded into a subsidiary of giant US government IT contractor, CSC). Past papers associated with the project show that the tool has been largely tested in foreign theatres like Haiti, Egypt, and Syria.

But the use of the Apollo tool to focus on protests in the US homeland has occurred under the Trump administration. The ‘election’ dataset compiled using Apollo for the 2018 US Army-funded study comprises 2.5 million tweets sent between October 26, 2016, and December 20, 2016, containing the words “Trump,” “Clinton,” and “election.”

Tweets were geolocated to focus on “locations where protests occurred following the election” based on user profiles. Locations were then triangulated against protest data from “online news outlets across the country.”

The millions of tweets were used to make sense of the “frequencies of the protests in 39 cities” using 18 different ways of measuring the “size, structure and geography” of a network, along with two ways of measuring how that network leads a social group to become “mobilized,” or take action.

In short, this means that “the social network can be a predictor of mobilization, which in turn is a predictor of the protest.” This pivotal finding means that extensive real-time monitoring of American citizens’ social media activity can be used to predict future protests.

Most Americans say they can’t tell the difference between a social media bot and a human

A new study from Pew Research Center found that most Americans can’t tell social media bots from real humans, and most are convinced bots are bad. “Only 47 percent of Americans are somewhat confident they can identify social media bots from real humans,” reports The Verge. “In contrast, most Americans surveyed in a study about fake news were confident they could identify false stories.”

The Pew study is an uncommon look at what the average person thinks about these automated accounts that plague social media platforms. After surveying over 4,500 adults in the U.S., Pew found that most people actually don’t know much about bots. Two-thirds of Americans have at least heard of social media bots, but only 16 percent say they’ve heard a lot about them, while 34 percent say they’ve never heard of them at all. The knowledgeable tend to be younger, and men are more likely than women (by 22 percentage points) to say they’ve heard of bots. Since the survey results are self-reported, there’s a chance people are overstating or understating their knowledge of bots. Of those who have heard of bots, 80 percent say the accounts are used for bad purposes.

Regardless of whether a person is a Republican or Democrat or young or old, most think that bots are bad. And the more that a person knows about social media bots, the less supportive they are of bots being used for various purposes, like activists drawing attention to topics or a political party using bots to promote candidates.

Social Media Manipulation Rising Globally, New Oxford Report Warns

A new report from Oxford University found that manipulation of public opinion over social media platforms is growing at a large scale, despite efforts to combat it. “Around the world, government agencies and political parties are exploiting social media platforms to spread junk news and disinformation, exercise censorship and control, and undermine trust in media, public institutions and science.”

“The number of countries where formally organized social media manipulation occurs has greatly increased, from 28 to 48 countries globally,” says Samantha Bradshaw, co-author of the report. “The majority of growth comes from political parties who spread disinformation and junk news around election periods. There are more political parties learning from the strategies deployed during Brexit and the U.S. 2016 Presidential election: more campaigns are using bots, junk news, and disinformation to polarize and manipulate voters.”

This is despite governments in many democracies having introduced new legislation designed to combat fake news on the internet. “The problem with this is that these ‘task forces’ to combat fake news are being used as a new tool to legitimize censorship in authoritarian regimes,” says Professor Phil Howard, co-author and lead researcher on the OII’s Computational Propaganda project. “At best, these types of task forces are creating counter-narratives and building tools for citizen awareness and fact-checking.” Another challenge is the evolution of the mediums individuals use to share news and information. “There is evidence that disinformation campaigns are moving on to chat applications and alternative platforms,” says Bradshaw. “This is becoming increasingly common in the Global South, where large public groups on chat applications are more popular.”

YouTube, YouTubers and You

How Silicon Valley divided society and made everyone raging mad

“Silicon Valley’s utopians genuinely but mistakenly believe that more information and connection makes us more analytical and informed. But when faced with quinzigabytes of data, the human tendency is to simplify things. Information overload forces us to rely on simple algorithms to make sense of the overwhelming noise. This is why, just like the advertising industry that increasingly drives it, the internet is fundamentally an emotional medium that plays to our base instinct to reduce problems and take sides, whether like or don’t like, my guy/not my guy, or simply good versus evil. It is no longer enough to disagree with someone, they must also be evil or stupid…

Nothing holds a tribe together like a dangerous enemy. That is the essence of identity politics gone bad: a universe of unbridgeable opinion between opposing tribes, whose differences are always highlighted, exaggerated, retweeted and shared. In the end, this leads us to ever more distinct and fragmented identities, all of us armed with solid data, righteous anger, a gutful of anger and a digital network of likeminded people. This is not total connectivity; it is total division.”

What Makes You Click (2016)

“The biggest psychological experiment ever is being conducted, and we’re all taking part in it: every day, a billion people are tested online. Which ingenious tricks and other digital laws ensure that we fill our online shopping carts to the brim, or stay on websites as long as possible? Or vote for a particular candidate?

The bankruptcies of department stores and shoe shops clearly show that our buying behaviour is rapidly shifting to the Internet. An entirely new field has arisen, of ‘user experience’ architects and ‘online persuasion officers’. How do these digital data dealers use, manipulate and abuse our user experience? Not just when it comes to buying things, but also with regards to our free time and political preferences.

Aren’t companies, which are running millions of tests at a time, miles ahead of science and government, in this respect? Now the creators of these digital seduction techniques, former Google employees among them, are themselves arguing for the introduction of an ethical code. What does it mean, when the conductors of experiments themselves are asking for their power and possibilities to be restricted?”

Facebook: Cracking the Code (2017)

“What’s on your mind?” It’s the friendly Facebook question which lets you share how you’re feeling. It’s also the question that unlocks the details of your life and helps turn your thoughts into profits.

Facebook has the ability to track much of your browsing history, even when you’re not logged on, and even if you aren’t a member of the social network at all. This is one of the methods used to deliver targeted advertising and ‘news’ to your Facebook feed. This is why you are unlikely to see anything that challenges your world view.

This feedback loop is fuelling the rise and power of ‘fake news’. “We’re seeing news that’s tailored ever more tightly towards those kinds of things that people will click on, and will share, rather than things that perhaps are necessarily good for them”, says one Media Analyst.

This information grants huge power to those with access to it. Republican Party strategist Patrick Ruffini says, “What it does give us is much greater level of certainty and granularity and precision down to the individual voter, down to the individual precinct about how things are going to go”. As a result, former Facebook journalist Adam Schrader thinks that there’s “a legitimate argument to this that Facebook influenced the election, the United States election results.”

Social Media are “driving Americans insane”

“If you pull out your phone to check Twitter while waiting for the light to change, or read e-mails while brushing your teeth, you might be what the American Psychological Association calls a “constant checker.” And chances are, it’s hurting your mental health.

Last week, the APA released a study finding that Americans were experiencing the first statistically significant stress increase in the survey’s 10-year history. In January, 57 percent of respondents of all political stripes said the U.S. political climate was a very or somewhat significant source of stress, up from 52 percent who said the same thing in August. On Thursday, the APA released the second part of its findings, “Stress In America: Coping With Change,” examining the role technology and social media play in American stress levels.

Social media use has skyrocketed from 7 percent of American adults in 2005 to 65 percent in 2015. For those in the 18-29 age range, the increase is larger, from 12 percent to a remarkable 90 percent. But while an increase in social media usage is hardly surprising, the number of people who just can’t tear themselves away is stark: Nowadays, 43 percent of Americans say they are checking their e-mails, texts, or social media accounts constantly. And their stress levels are paying for it: On a 10-point scale, constant checkers reported an average stress level of 5.3. For the rest of Americans, the average level is a 4.4.

If the first step toward recovery, however, is admitting there is a problem, Americans are on their way. Some 65 percent of respondents said “unplugging” or taking a “digital detox” is important. But alas, knowing you have a problem is not the same as fixing it: Only 28 percent of those Americans say they take their own advice.

“Your browsing history alone can give away your identity”

“Researchers at Stanford and Princeton universities have found a way to connect the dots between people’s private online activity and their Twitter accounts—even for people who have never tweeted.

When the team tested the technique on 400 real people who submitted their browsing history, they were able to correctly pick out the volunteers’ Twitter profiles nearly three-quarters of the time.

Here’s how the de-anonymization system works: The researchers figured that a person is more likely to click a link that was shared on social media by a friend—or a friend of a friend—than any other random link on the internet. (Their model controls for the baseline popularity of each website.) With that in mind, and the details of an anonymous person’s browser history in hand, the researchers can compute the probability that any one Twitter user created that browsing history. People’s basic tendency to follow links they come across on Twitter unmasks them—and it usually takes less than a minute.
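The scoring logic can be sketched in a few lines: rate each candidate account by how much likelier its feed makes the observed browsing history, discounting links that are popular everywhere. All the concrete values below — the feeds, the baseline popularity numbers, and the boost factor — are invented for illustration; the researchers' actual model is more careful than this.

```python
import math

# Hypothetical data: which links each candidate Twitter feed exposed,
# and each link's baseline popularity on the wider web.
feeds = {
    "@alice": {"a.com", "b.com", "c.com"},
    "@bob":   {"c.com", "d.com", "e.com"},
}
baseline = {"a.com": 0.01, "b.com": 0.02, "c.com": 0.30,
            "d.com": 0.01, "e.com": 0.02}

# The anonymous browsing history we want to attribute.
history = ["a.com", "b.com", "c.com"]

def log_score(feed, history, boost=50.0):
    # Assume a link is `boost` times likelier to be visited when it
    # appeared in the candidate's feed; broadly popular links
    # therefore contribute little evidence either way.
    s = 0.0
    for link in history:
        p = baseline[link]
        s += math.log(boost * p if link in feed else p)
    return s

scores = {user: log_score(feed, history) for user, feed in feeds.items()}
best = max(scores, key=scores.get)
print(best)  # -> @alice
```

Note how the widely shared c.com barely separates the candidates, while the obscure a.com and b.com — seen almost nowhere except @alice's feed — do nearly all the de-anonymizing work. That is the intuition behind the finding that merely following people is enough to be unmasked.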

“You can even be de-anonymized if you just browse and follow people, without actually sharing anything.”
