Resources

Mozilla Hits Google, Facebook For ‘Microtargeting’ Political Ads

Microtargeting, a method that uses consumer data and demographics to narrowly segment audiences, is used by political campaigns to tailor ads to different voting groups. The practice’s critics include Federal Election Commission Chairwoman Ellen Weintraub, who wrote in a Washington Post op-ed that microtargeting makes it “easy to single out susceptible groups and direct political misinformation to them with little accountability, because the public at large never sees the ad.” Mozilla’s call follows reports that Facebook has considered restricting politicians’ access to microtargeting.

Spain and GitHub Are Blocking an App That Helped Protesters Organize

For the last month, hundreds of thousands of people have joined demonstrations in Spain to voice their objection to the jailing of Catalan separatist leaders and to support Catalonian independence. As with almost all modern activist and protest movements, activists are using social media and apps to communicate and to organize public actions. But this week, in a move that puts the Spanish government on par with censorship-heavy states like China and Russia, the country ordered GitHub to block access to one of those apps by revoking local access to its repository. GitHub, which is owned by Microsoft, complied with the order.

According to Spanish news outlet El Confidencial, last week the government ordered takedowns of websites and an app made by Tsunami Democratic, an activist group organizing protests in the region. To try to keep access to the app download alive, Tsunami Democratic moved the .apk file to GitHub. But the government shut that down too, blocking the site in Spain. Motherboard tested the download using a VPN and confirmed that the GitHub repo is blocked from Madrid.

When Myspace Was King, Employees Abused a Tool Called ‘Overlord’ to Spy on Users

During the social network’s heyday, multiple Myspace employees abused an internal company tool to spy on users, in some cases including ex-partners, Motherboard reported on Monday.

Named ‘Overlord,’ the tool allowed employees to see users’ passwords and their messages, two former employees said. While the tool was originally designed to help moderate the platform and allow Myspace to comply with law enforcement requests, multiple sources said the tool was used for illegitimate purposes by employees who accessed Myspace user data without authorization to do so. “It was basically an entire backdoor to the Myspace platform,” one of the former employees said of Overlord. (Motherboard granted five former Myspace employees anonymity to discuss internal Myspace incidents.) The abuse happened about a decade ago, closer to the height of the platform’s popularity, according to multiple sources. In fall 2006, the platform signed up its 100 millionth user. Around this time, Myspace was the second most popular website in the U.S., and ranked higher than Google search.

Pentagon Wants to Predict Anti-Trump Protests Using Social Media Surveillance

A series of research projects, patent filings, and policy changes indicate that the Pentagon wants to use social media surveillance to quell domestic insurrection and rebellion.

The social media posts of American citizens who don’t like President Donald Trump are the focus of the latest US military-funded research. The research, funded by the US Army and co-authored by a researcher based at the West Point Military Academy, is part of a wider effort by the Trump administration to consolidate the US military’s role and influence in domestic intelligence.

The vast scale of this effort is reflected in a number of government social media surveillance patents granted this year, which relate to a spy program that the Trump administration outsourced to a private company last year. Experts interviewed by Motherboard say that the Pentagon’s new technology research may have played a role in amendments this April to the Joint Chiefs of Staff homeland defense doctrine, which widen the Pentagon’s role in providing intelligence for domestic “emergencies,” including an “insurrection.”

It’s no secret that the Pentagon has funded Big Data research into how social media surveillance can help predict large-scale population behaviours, specifically the outbreak of conflict, terrorism, and civil unrest.

Much of this research focuses on foreign theatres like the Middle East and North Africa — where the 2011 Arab Spring kicked off an arc of protest that swept across the region and toppled governments.

Since then, the Pentagon has spent millions of dollars finding patterns in posts across platforms like Facebook, Twitter, Instagram, Tumblr, and beyond to enable the prediction of major events.

But the Pentagon isn’t just interested in anticipating surprises abroad. The research also appears to be intended for use in the US homeland.

Datasets for the research were collected using the Apollo Social Sensing Tool, a real-time event tracking software that collects and analyses millions of social media posts.

The tool was originally developed under the Obama administration back in 2011 by the US Army Research Laboratory and US Defense Threat Reduction Agency, in partnership with Rensselaer Polytechnic Institute, the University of Illinois, IBM, and Caterva (a social marketing company that in 2013 was folded into a subsidiary of giant US government IT contractor, CSC). Past papers associated with the project show that the tool has been largely tested in foreign theatres like Haiti, Egypt, and Syria.

But the use of the Apollo tool to focus on protests in the US homeland has occurred under the Trump administration. The ‘election’ dataset compiled using Apollo for the 2018 US Army-funded study comprises 2.5 million tweets sent between October 26, 2016, and December 20, 2016, using the words “Trump,” “Clinton,” and “election.”

Tweets were geolocated to focus on “locations where protests occurred following the election” based on user profiles. Locations were then triangulated against protest data from “online news outlets across the country.”

The millions of tweets were used to make sense of the “frequencies of the protests in 39 cities” using 18 different ways of measuring the “size, structure and geography” of a network, along with two ways of measuring how that network leads a social group to become “mobilized,” or take action.

In short, this means that “the social network can be a predictor of mobilization, which in turn is a predictor of the protest.” This pivotal finding means that extensive real-time monitoring of American citizens’ social media activity can be used to predict future protests.
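The pipeline the study describes can be sketched in miniature. The toy code below is illustrative only and is not the researchers’ actual model: it computes a few simple “size and structure” features of a hypothetical city-level retweet network and combines them into a crude mobilization score, the kind of quantity the study treats as a predictor of protest. Every name, weight, and data point here is invented for the example.

```python
# Toy illustration (NOT the study's actual model): derive simple
# "size and structure" features from a city's retweet network and
# combine them into a crude mobilization score.

def network_features(edges):
    """Basic size/structure measures of a directed edge list."""
    nodes = {n for edge in edges for n in edge}
    n, m = len(nodes), len(edges)
    density = m / (n * (n - 1)) if n > 1 else 0.0
    out_degree = {}
    for src, _dst in edges:
        out_degree[src] = out_degree.get(src, 0) + 1
    max_out = max(out_degree.values(), default=0)
    return {"nodes": n, "edges": m, "density": density, "max_out": max_out}

def mobilization_score(f):
    """Crude stand-in for the study's mobilization measures:
    denser, more centralized networks score higher."""
    centralization = f["max_out"] / f["nodes"] if f["nodes"] else 0.0
    return 0.5 * f["density"] + 0.5 * centralization

# Two hypothetical city-level retweet networks.
sparse_city = [("a", "b"), ("c", "d")]
dense_city = [("a", x) for x in "bcdef"] + [("b", "c"), ("c", "d"), ("d", "a")]

for name, edges in [("sparse", sparse_city), ("dense", dense_city)]:
    print(name, round(mobilization_score(network_features(edges)), 3))
```

The study itself computed 18 network measures and two mobilization measures from 2.5 million geolocated tweets across 39 cities; the sketch captures only the shape of the pipeline: network features in, protest likelihood out.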

Most Americans say they can’t tell the difference between a social media bot and a human

A new study from Pew Research Center found that most Americans can’t tell social media bots from real humans, and most are convinced bots are bad. “Only 47 percent of Americans are somewhat confident they can identify social media bots from real humans,” reports The Verge. “In contrast, most Americans surveyed in a study about fake news were confident they could identify false stories.”

The Pew study is an uncommon look at what the average person thinks about these automated accounts that plague social media platforms. After surveying over 4,500 adults in the U.S., Pew found that most people actually don’t know much about bots. Two-thirds of Americans have at least heard of social media bots, but only 16 percent say they’ve heard a lot about them, while 34 percent say they’ve never heard of them at all. Those who know about bots tend to be younger, and men are more likely than women (by 22 percentage points) to say they’ve heard of bots. Since the survey results are self-reported, there’s a chance people are overstating or understating their knowledge of bots. Of those who have heard of bots, 80 percent say the accounts are used for bad purposes.

Regardless of whether a person is a Republican or Democrat or young or old, most think that bots are bad. And the more that a person knows about social media bots, the less supportive they are of bots being used for various purposes, like activists drawing attention to topics or a political party using bots to promote candidates.

Social Media Manipulation Rising Globally, New Oxford Report Warns

A new report from Oxford University found that manipulation of public opinion over social media platforms is growing at a large scale, despite efforts to combat it. “Around the world, government agencies and political parties are exploiting social media platforms to spread junk news and disinformation, exercise censorship and control, and undermine trust in media, public institutions and science.”

“The number of countries where formally organized social media manipulation occurs has greatly increased, from 28 to 48 countries globally,” says Samantha Bradshaw, co-author of the report. “The majority of growth comes from political parties who spread disinformation and junk news around election periods. There are more political parties learning from the strategies deployed during Brexit and the U.S. 2016 Presidential election: more campaigns are using bots, junk news, and disinformation to polarize and manipulate voters.”

This is despite governments in many democracies introducing new legislation designed to combat fake news on the internet. “The problem with this is that these ‘task forces’ to combat fake news are being used as a new tool to legitimize censorship in authoritarian regimes,” says Professor Phil Howard, co-author and lead researcher on the OII’s Computational Propaganda project. “At best, these types of task forces are creating counter-narratives and building tools for citizen awareness and fact-checking.” Another challenge is the evolution of the mediums individuals use to share news and information. “There is evidence that disinformation campaigns are moving on to chat applications and alternative platforms,” says Bradshaw. “This is becoming increasingly common in the Global South, where large public groups on chat applications are more popular.”

https://www.youtube.com/watch?v=--VqhKD3WxI

YouTube, YouTubers and You

How Silicon Valley divided society and made everyone raging mad

“Silicon Valley’s utopians genuinely but mistakenly believe that more information and connection makes us more analytical and informed. But when faced with quinzigabytes of data, the human tendency is to simplify things. Information overload forces us to rely on simple algorithms to make sense of the overwhelming noise. This is why, just like the advertising industry that increasingly drives it, the internet is fundamentally an emotional medium that plays to our base instinct to reduce problems and take sides, whether like or don’t like, my guy/not my guy, or simply good versus evil. It is no longer enough to disagree with someone, they must also be evil or stupid…

Nothing holds a tribe together like a dangerous enemy. That is the essence of identity politics gone bad: a universe of unbridgeable opinion between opposing tribes, whose differences are always highlighted, exaggerated, retweeted and shared. In the end, this leads us to ever more distinct and fragmented identities, all of us armed with solid data, righteous anger, a gutful of anger and a digital network of likeminded people. This is not total connectivity; it is total division.”

https://www.youtube.com/watch?v=69JXP4tnBMo

What Makes You Click (2016)

“The biggest psychological experiment ever is being conducted, and we’re all taking part in it: every day, a billion people are tested online. Which ingenious tricks and other digital laws ensure that we fill our online shopping carts to the brim, or stay on websites as long as possible? Or vote for a particular candidate?

The bankruptcies of department stores and shoe shops clearly show that our buying behaviour is rapidly shifting to the Internet. An entirely new field has arisen, of ‘user experience’ architects and ‘online persuasion officers’. How do these digital data dealers use, manipulate and abuse our user experience? Not just when it comes to buying things, but also with regards to our free time and political preferences.

Aren’t companies, which are running millions of tests at a time, miles ahead of science and government, in this respect? Now the creators of these digital seduction techniques, former Google employees among them, are themselves arguing for the introduction of an ethical code. What does it mean, when the conductors of experiments themselves are asking for their power and possibilities to be restricted?”

https://www.youtube.com/watch?v=pu_bI7DFp2M

Facebook: Cracking the Code (2017)

“What’s on your mind?” It’s the friendly Facebook question which lets you share how you’re feeling. It’s also the question that unlocks the details of your life and helps turn your thoughts into profits.

Facebook has the ability to track much of your browsing history, even when you’re not logged on, and even if you aren’t a member of the social network at all. This is one of the methods used to deliver targeted advertising and ‘news’ to your Facebook feed. This is why you are unlikely to see anything that challenges your world view.

This feedback loop is fuelling the rise and power of ‘fake news’. “We’re seeing news that’s tailored ever more tightly towards those kinds of things that people will click on, and will share, rather than things that perhaps are necessarily good for them”, says one media analyst.

This information grants huge power to those with access to it. Republican Party strategist Patrick Ruffini says, “What it does give us is a much greater level of certainty and granularity and precision down to the individual voter, down to the individual precinct about how things are going to go”. As a result, former Facebook journalist Adam Schrader thinks that there’s “a legitimate argument to this that Facebook influenced the election, the United States election results.”

Social Media are “driving Americans insane”

“If you pull out your phone to check Twitter while waiting for the light to change, or read e-mails while brushing your teeth, you might be what the American Psychological Association calls a “constant checker.” And chances are, it’s hurting your mental health.

Last week, the APA released a study finding that Americans were experiencing the first statistically significant stress increase in the survey’s 10-year history. In January, 57 percent of respondents of all political stripes said the U.S. political climate was a very or somewhat significant source of stress, up from 52 percent who said the same thing in August. On Thursday, the APA released the second part of its findings, “Stress In America: Coping With Change,” examining the role technology and social media play in American stress levels.

Social media use has skyrocketed from 7 percent of American adults in 2005 to 65 percent in 2015. For those in the 18-29 age range, the increase is larger, from 12 percent to a remarkable 90 percent. But while an increase in social media usage is hardly surprising, the number of people who just can’t tear themselves away is stark: Nowadays, 43 percent of Americans say they are checking their e-mails, texts, or social media accounts constantly. And their stress levels are paying for it: On a 10-point scale, constant checkers reported an average stress level of 5.3. For the rest of Americans, the average level is a 4.4.

If the first step toward recovery, however, is admitting there is a problem, Americans are on their way. Some 65 percent of respondents said “unplugging” or taking a “digital detox” is important. But alas, knowing you have a problem is not the same as fixing it: Only 28 percent of those Americans say they take their own advice.

Stare Into The Lights My Pretties

“Your browsing history alone can give away your identity”

“Researchers at Stanford and Princeton universities have found a way to connect the dots between people’s private online activity and their Twitter accounts—even for people who have never tweeted.

When the team tested the technique on 400 real people who submitted their browsing history, they were able to correctly pick out the volunteers’ Twitter profiles nearly three-quarters of the time.

Here’s how the de-anonymization system works: The researchers figured that a person is more likely to click a link that was shared on social media by a friend—or a friend of a friend—than any other random link on the internet. (Their model controls for the baseline popularity of each website.) With that in mind, and the details of an anonymous person’s browser history in hand, the researchers can compute the probability that any one Twitter user created that browsing history. People’s basic tendency to follow links they come across on Twitter unmasks them—and it usually takes less than a minute.
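The scoring idea as described can be sketched in a few lines. This is an illustrative reconstruction, not the Stanford/Princeton researchers’ code; the probabilities, function names, and example data are all invented. Each candidate account is scored by how much better its feed explains the anonymous history than baseline link popularity alone:

```python
import math

# Illustrative sketch of the de-anonymization scoring described above
# (not the researchers' actual code). For each candidate Twitter user,
# score how much better their feed explains an anonymous browsing
# history than baseline link popularity alone.

def deanonymize(history, candidate_feeds, baseline_popularity):
    """Return candidate users ranked by log-likelihood ratio.

    history: list of links in the anonymous browsing history
    candidate_feeds: {user: set of links shared in that user's feed}
    baseline_popularity: {link: probability a random person visits it}
    """
    scores = {}
    for user, feed in candidate_feeds.items():
        score = 0.0
        for link in history:
            base = baseline_popularity.get(link, 0.001)
            # Assumption baked into the model: links in the user's feed
            # are far more likely to be clicked by that user than by a
            # random browser.
            p = 0.5 if link in feed else base
            score += math.log(p / base)
        scores[user] = score
    return sorted(scores, key=scores.get, reverse=True)

history = ["nytimes.com/a", "github.com/x", "arxiv.org/y"]
feeds = {
    "alice": {"nytimes.com/a", "arxiv.org/y"},  # feed shares 2 of 3 links
    "bob": {"espn.com/z"},                      # feed shares none
}
popularity = {"nytimes.com/a": 0.05, "github.com/x": 0.01, "arxiv.org/y": 0.001}

print(deanonymize(history, feeds, popularity))  # alice ranks first
```

Dividing by the baseline is the “controls for the popularity of each website” step: visiting a hugely popular link tells the model almost nothing, while visiting an obscure link that appears in one candidate’s feed is highly identifying.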

“You can even be de-anonymized if you just browse and follow people, without actually sharing anything.”

U.S. government begins asking foreign travelers about their social media at border

“Foreign travelers arriving in the United States on the visa waiver program have been presented with an “optional” request to “enter information associated with your online presence,” a government official confirmed Thursday. The prompt includes a drop-down menu that lists platforms including Facebook, Google+, Instagram, LinkedIn and YouTube, as well as a space for users to input their account names on those sites. The new policy comes as Washington tries to improve its ability to spot and deny entry to individuals who have ties to terrorist groups like the Islamic State. But the government has faced a barrage of criticism since it first floated the idea last summer. The Internet Association, which represents companies including Facebook, Google and Twitter, at the time joined with consumer advocates to argue the draft policy threatened free expression and posed new privacy and security risks to foreigners. Now that it is final, those opponents are furious the Obama administration ignored their concerns. The question itself is included in what’s known as the Electronic System for Travel Authorization, a process that certain foreign travelers must complete to come to the United States. ESTA and a related paper form specifically apply to those arriving here through the visa-waiver program, which allows citizens of 38 countries to travel and stay in the United States for up to 90 days without a visa.”

“Information glut no problem for most Americans: survey”

“Most Americans do not see “information overload” as a problem for them despite the explosion of internet data and images, according to a Pew Research Center survey on Wednesday.

Only 20 percent of U.S. adults feel they get more information than they can handle, down from 27 percent a decade ago. Just over three-quarters like having so much information at hand, the survey of 1,520 people showed.

“Generally, Americans appreciate lots of information and access to it,” said the report into how U.S. adults cope with information demands.

Roughly four in five Americans agree that they are confident about using the internet to keep up with information demands, that a lot of information gives them a feeling of more control over their lives, and that they can easily determine what information is trustworthy.

Americans who are 65 or older, have a high school diploma or less and earn less than $30,000 a year are more likely to say they face a glut of information.

Eighty-four percent of Americans with online access through three sources – home broadband, smartphone and tablet computer – say they like having so much information available.

By contrast, 55 percent of those with no online source felt overwhelmed by the amount of possible information.

The term “information overload” was popularized by author Alvin Toffler in his 1970 bestseller “Future Shock.” It refers to difficulties that people face from getting too much information or data.

The Pew survey involved people over 18 interviewed by landline or cell phones from March 7 to April 4. The margin of error was 2.9 percentage points, meaning results could vary by that much either way.”

“Social Media” has destroyed discourse

Hossein Derakhshan, an Iranian-Canadian author, media analyst, and performance artist, writes in MIT Technology Review:

“Like TV, social media now increasingly entertains us, and even more so than television it amplifies our existing beliefs and habits. It makes us feel more than think, and it comforts more than challenges. The result is a deeply fragmented society, driven by emotions, and radicalized by lack of contact and challenge from outside. This is why Oxford Dictionaries designated “post-truth” as the word of 2016: an adjective “relating to circumstances in which objective facts are less influential in shaping public opinion than emotional appeals.”

[…]

Traditional television still entails some degree of surprise. What you see on television news is still picked by human curators, and even though it must be entertaining to qualify as worthy of expensive production, it is still likely to challenge some of our opinions (emotions, that is).

Social media, in contrast, uses algorithms to encourage comfort and complaisance, since its entire business model is built upon maximizing the time users spend inside of it. Who would like to hang around in a place where everyone seems to be negative, mean, and disapproving? The outcome is a proliferation of emotions, a radicalization of those emotions, and a fragmented society. This is way more dangerous for the idea of democracy founded on the notion of informed participation.

This means we should write and read more, link more often, and watch less television and fewer videos — and spend less time on Facebook, Instagram, and YouTube.

Our habits and our emotions are killing us and our planet. Let’s resist their lethal appeal.”

Social media and the anti-fact age

Adam Turner at The Age writes:

“When you look at how social media works, it was inevitable that it would turn into one of the world’s most powerful propaganda tools. It’s often painted as a force for good, letting people bypass the traditional gatekeepers in order to quickly disseminate information, but there’s no guarantee that this information is actually true.

Facebook has usurped the role of the mainstream media in disseminating news, but hasn’t taken on the fourth estate’s corresponding responsibility for keeping the bastards honest. The mainstream media has no-one to blame but itself, having engaged in a tabloid race to the bottom which devalued truth to the point that blatant liars are considered more honest.

The fragmentation of news is already creating a filter bubble in that most people don’t tend to read the newspaper from front to back, or sit through entire news bulletins, they just pick and choose what interests them. The trouble with Facebook is that it also reinforces bias, the more extreme your political views the less likely you are to see anything with an opposing viewpoint which might help you develop a more well-rounded view of the world.”

Brooke Binkowski, the managing editor of the fact-checking site Snopes.com, says, “Honestly, most of the fake news is incredibly easy to debunk because it’s such obvious bullshit…”

The problem, Binkowski believes, is that the public has lost faith in the media broadly — therefore no media outlet is considered credible any longer. The reasons are familiar: as the business of news has grown tougher, many outlets have been stripped of the resources they need for journalists to do their jobs correctly. “When you’re on your fifth story of the day and there’s no editor because the editor’s been fired and there’s no fact checker so you have to Google it yourself and you don’t have access to any academic journals or anything like that, you will screw stories up,” she says.

UPDATE 1/12/2016 — Most students can’t spot fake news

“If you thought fake online news was a problem for impressionable adults, it’s even worse for the younger crowd. A Stanford study of 7,804 middle school, high school and college students has found that most of them couldn’t identify fake news on their own. Their susceptibility varied with age, but even a large number of the older students fell prey to bogus reports. Over two thirds of middle school kids didn’t see why they shouldn’t trust a bank executive’s post claiming that young adults need financial help, while nearly 40 percent of high schoolers didn’t question the link between an unsourced photo and the claims attached to it.

Why did many of the students misjudge the authenticity of a story? They were fixated on the appearance of legitimacy, rather than the quality of information. A large photo or a lot of detail was enough to make a Twitter post seem credible, even if the actual content was incomplete or wrong. There are plenty of adults who respond this way, we’d add, but students are more vulnerable than most.

As the Wall Street Journal explains, part of the solution is simply better education: teach students to verify sources, question motivations and otherwise think critically.”

(Emphasis added)


CIA-backed surveillance software marketed to public schools

“Conrey said the district simply wanted to keep its students safe. “It was really just about student safety; if we could try to head off any potential dangerous situations, we thought it might be worth it,” he said.

“An online surveillance tool that enabled hundreds of U.S. law enforcement agencies to track and collect information on social media users was also marketed for use in American public schools, the Daily Dot has learned.

Geofeedia sold surveillance software typically bought by police to a high school in a northern Chicago suburb, less than 50 miles from where the company was founded in 2011. An Illinois school official confirmed the purchase of the software by phone on Monday.

Ultimately, the school found little use for the platform, which was operated by police liaison stationed on school grounds, and chose not to renew its subscription after the first year, citing cost and a lack of actionable information. “A lot of kids that were posting stuff that we most wanted, they weren’t doing the geo-tagging or making it public,” Conrey said. “We weren’t really seeing a lot there.”

An alarming number of people rely *solely* on a Social Media network for news

Note the stats from the Pew Research Center for Journalism and Media: 64% of users surveyed rely on just one social media source for news content (i.e. Facebook, Twitter, YouTube, etc.), while 26% check only two sources, and 10% three or more. A staggeringly concerning trend, given the rampant personalisation of these screen environments and what we know about the functioning and reinforcement of The Filter Bubble. This is a centralisation of power, and a lack of diversity, that the “old media” perhaps could only dream of…

From The Huffington Post:

“It’s easy to believe you’re getting diverse perspectives when you see stories on Facebook. You’re connected not just to many of your friends, but also to friends of friends, interesting celebrities and publications you “like.”

But Facebook shows you what it thinks you’ll be interested in. The social network pays attention to what you interact with, what your friends share and comment on, and overall reactions to a piece of content, lumping all of these factors into an algorithm that serves you items you’re likely to engage with. It’s a simple matter of business: Facebook wants you coming back, so it wants to show you things you’ll enjoy.”
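The mechanism that quote describes can be illustrated with a toy ranking function. Facebook’s actual ranking is proprietary and vastly more complex; every field, weight, and story below is invented. The point is only the feedback loop: stories matching a user’s past engagement are boosted, so opposing viewpoints sink.

```python
# Toy illustration (NOT Facebook's actual algorithm) of engagement-based
# feed ranking: stories similar to what you've already engaged with
# float to the top -- the mechanism behind the "filter bubble".

def rank_feed(stories, user_interests):
    """Score each story by overlap with the user's past interests
    plus overall popularity, then sort highest first."""
    def score(story):
        interest_match = len(set(story["topics"]) & user_interests)
        return interest_match * 2.0 + story["reactions"] / 1000.0
    return sorted(stories, key=score, reverse=True)

stories = [
    {"title": "Your side wins debate", "topics": {"politics-left"}, "reactions": 500},
    {"title": "Opposing view op-ed", "topics": {"politics-right"}, "reactions": 900},
    {"title": "Cat video", "topics": {"pets"}, "reactions": 5000},
]

# A user who has only ever engaged with one political leaning:
feed = rank_feed(stories, user_interests={"politics-left", "pets"})
print([s["title"] for s in feed])
```

Even though the opposing op-ed is more popular overall, it ranks last for this user, because the scoring rewards what the user already engages with; repeated over years of clicks, that is the bubble.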

The BBC also reported earlier this year that social media networks have outstripped television as the news source for young people (emphasis added):

“Of the 18-to-24-year-olds surveyed, 28% cited social media as their main news source, compared with 24% for TV.

The Reuters Institute for the Study of Journalism research also suggests 51% of people with online access use social media as a news source. Facebook and other social media outlets have moved beyond being “places of news discovery” to become the place people consume their news, it suggests.

The study found Facebook was the most common source—used by 44% of all those surveyed—to watch, share and comment on news. Next came YouTube on 19%, with Twitter on 10%. Apple News accounted for 4% in the US and 3% in the UK, while messaging app Snapchat was used by just 1% or less in most countries.

According to the survey, consumers are happy to have their news selected by algorithms, with 36% saying they would like news chosen based on what they had read before and 22% happy for their news agenda to be based on what their friends had read. But 30% still wanted the human oversight of editors and other journalists in picking the news agenda and many had fears about algorithms creating news “bubbles” where people only see news from like-minded viewpoints.

Most of those surveyed said they used a smartphone to access news, with the highest levels in Sweden (69%), Korea (66%) and Switzerland (61%), and they were more likely to use social media rather than going directly to a news website or app.

The report also suggests users are noticing the original news brand behind social media content less than half of the time, something that is likely to worry traditional media outlets.”

And to exemplify the issue, these words from Slashdot: “Over the past few months, we have seen how Facebook’s Trending Topics feature is often biased, and moreover, how sometimes fake news slips through its filter.”

“The Washington Post monitored the website for over three weeks and found that Facebook is still struggling to get its algorithm right. In the six weeks since Facebook revamped its Trending system, the site has repeatedly promoted “news” stories that are actually works of fiction. As part of a larger audit of Facebook’s Trending topics, the Intersect logged every news story that trended across four accounts during the workdays from Aug. 31 to Sept. 22. During that time, we uncovered five trending stories that were indisputably fake and three that were profoundly inaccurate. On top of that, we found that news releases, blog posts from sites such as Medium and links to online stores such as iTunes regularly trended.”

UPDATE 9/11/16 — US President Barack Obama criticises Facebook for spreading fake stories: “The way campaigns have unfolded, we just start accepting crazy stuff as normal,” Obama said. “As long as it’s on Facebook, and people can see it, as long as it’s on social media, people start believing it, and it creates this dust cloud of nonsense.”


Snapchat launches video-recording sunglasses

“Social media app Snapchat is introducing video-recording sunglasses called Spectacles and is changing its company name to incorporate the new product.

The glasses can record video 10 seconds at a time by tapping a button on the device. The video is then uploaded automatically to the popular image-messaging app via Bluetooth or Wi-Fi. The glasses are the first hardware from the Los Angeles-based company.”


YouTube as a parody of itself?

It never ceases to amaze me just how stupid screen culture is.

But now it’s even parodying itself—in the way only the online spectacle can: by folding back into itself to keep us watching.

The problems and concerns, long since established, are all now just a big joke. Short attention spans. Superficial engagement with information. Advertising masquerading as content. The convergence of extremely powerful corporate empires that influence what we think, feel, and do, in a way never before possible. Distraction from the real world, while the real world burns.

The story of this first short is about the end of the world, and nobody even cares. Could it be any closer to home?

There’s also a short about an “Uber for people,” invoking themes of exploitation, surveillance, and an enslavement-like addiction to technological solutions: a parody of today’s screen culture, especially the “apps fix all” mindset.

Can we see this as one thing in terms of another?

Likewise with “Enter the Hive Mind.”

What will you do, when it’s time you’re asked to put your whole self into the global computer even more completely than now? What is your personal threshold? Will you continue to “breathe life” into the machine?