Resources

Facebook Tests Prompts That Ask Users If They’re Worried a Friend is ‘Becoming an Extremist’

Some Facebook users in the United States are being served a prompt that asks if they are worried that someone they know might be becoming an extremist. Others are being notified that they may have been exposed to extremist content.

It is all part of a test the social media company is running that stems from its Redirect Initiative, which aims to combat violent extremism, Andy Stone, a Facebook spokesperson, told CNN. Screenshots of the alerts surfaced on social media Thursday. “This test is part of our larger work to assess ways to provide resources and support to people on Facebook who may have engaged with or were exposed to extremist content, or may know someone who is at risk,” Stone said. “We are partnering with NGOs and academic experts in this space and hope to have more to share in the future,” he added. One of the alerts asks users, “Are you concerned that someone you know is becoming an extremist?” It continues: “We care about preventing extremism on Facebook. Others in your situation have received confidential support.”


Fired by Bot at Amazon: ‘It’s You Against the Machine’

Contract drivers say algorithms terminate them by email—even when they have done nothing wrong.

Stephen Normandin spent almost four years racing around Phoenix delivering packages as a contract driver for Amazon.com Inc. Then one day, he received an automated email. The algorithms tracking him had decided he wasn’t doing his job properly.

The 63-year-old Army veteran was stunned. He’d been fired by a machine.

Normandin says Amazon punished him for things beyond his control, such as locked apartment complexes, that prevented him from completing his deliveries. Amazon assigned him some pre-dawn deliveries at apartment complexes whose gates were still locked, a common complaint among Flex drivers. The algorithm instructs drivers in such instances to deliver packages to the main office, but that wasn’t open either. Normandin called the customer as instructed—a long shot because most people don’t answer calls from unfamiliar numbers, especially in the early morning. He called driver support, which couldn’t get through to the customer either. Meanwhile, the clock was ticking, and the algorithm was taking note.
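
Bloomberg’s description amounts to a fully automated pipeline: tracked delivery outcomes feed a rating, and a rating that falls below some cutoff triggers a termination email with no human in the loop. Here is a minimal sketch of that kind of logic; the class names, the scoring rule, and the 0.85 threshold are all hypothetical, not Amazon’s actual system.

```python
from dataclasses import dataclass

@dataclass
class DeliveryAttempt:
    on_time: bool
    failure_reason: str  # e.g. "gate_locked", "customer_unreachable", ""

def driver_score(attempts: list[DeliveryAttempt]) -> float:
    """Fraction of attempts completed on time.

    Note what this metric cannot see: a failure caused by a locked
    gate counts against the driver exactly like genuine negligence.
    """
    if not attempts:
        return 1.0
    completed = sum(1 for a in attempts if a.on_time)
    return completed / len(attempts)

def review_driver(attempts: list[DeliveryAttempt], threshold: float = 0.85) -> str:
    score = driver_score(attempts)
    if score < threshold:
        # Per drivers' accounts, nothing human reviews this before it fires.
        return "send_automated_termination_email"
    return "ok"

# A pre-dawn route with locked gates drags the score down regardless
# of fault, and the algorithm "takes note": 10/13 on time -> fired.
week = [DeliveryAttempt(False, "gate_locked")] * 3 + [DeliveryAttempt(True, "")] * 10
print(review_driver(week))  # -> send_automated_termination_email
```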

When Ryan Cope was deactivated in 2019, he didn’t bother arguing or consider paying for arbitration. By then, Cope had already decided there was no way he could meet the algorithms’ demands. Driving miles along winding dirt roads outside Denver in the snow, he often shook his head in disbelief that Amazon expected the customer to get the package within two hours.

When drivers do challenge poor ratings, they can’t tell if they’re communicating with real people. Responses often include just a first name or no name at all, and the replies typically apply to a variety of situations rather than a specific problem. Even if a name is attached, a machine most likely generated the first few email responses, according to people familiar with the matter.

When human managers get involved, they typically conduct a hasty review—if they do one at all—because they must meet their own performance standards. A former employee at a driver support call center said dozens of part-time seasonal workers with little training were assigned to oversee issues for millions of drivers.


Why Teens Are Falling for TikTok Conspiracy Theories

TikTok skews young: reportedly one-third of its daily users in the US are 14 or younger. Celebrity gossip has long been the lingua franca of social media for people of all ages, and right-wing conspiracy groups like QAnon have been spreading made-up stories about those in power on networks like Facebook for years. Now those ideas have jumped to TikTok, where they’re being metabolized by much younger consumers. Those things all scan. What doesn’t, however, is why teens believe them.

The short answer? TikTok is full of crazy ideas—conspiracies are no different. They’ve been normalized by the platform where many young people spend most of their time. “Many of these conspiracy sites and stories are entertaining. They are social gathering spots. They are exciting,” says Nancy Rosenblum, Senator Joseph S. Clark Professor of Ethics in Politics and Government at Harvard University and co-author of A Lot of People Are Saying: The New Conspiracism and the Assault on Democracy. “It’s small wonder that teenagers who ‘live on the screen’ would be drawn to the drama.”

Easy access to social media’s redistribution tools worsens this problem. With every like, share, send, and retweet, teenagers are popularizing this content worldwide. “On social media, repetition substitutes for validation,” says Russell Muirhead, a professor of democracy and politics at Dartmouth College and Rosenblum’s co-author. “Repetition is what breathes air into conspiracy theories, and social media is all about repetition. But repeating something that’s false does not make it more true! Teenagers are just as vulnerable to this as grown-ups.”

This wouldn’t be such a problem if teenagers weren’t so attached to social media. So attached, in fact, that some 54 percent of teens get the bulk of their news from it. If this sounds concerning, that’s because it is. With teenagers relying on TikTok as a primary source of information, it’s no surprise that my generation becomes absorbed in webs of falsity and lives as largely uninformed citizens.


Zuckerberg Acknowledges ‘Risk of Civil Unrest’ After US Elections, Promises News Feed Updates (Too Little, Too Late)

Facebook founder and CEO Mark Zuckerberg told analysts on a conference call Thursday evening that the company plans to post notices at the top of users’ news feeds on November 3rd discrediting any claim by either candidate in the U.S. presidential election to have won, if the site deems the claim premature… The move, said Zuckerberg, is being made because “There is a risk of civil unrest across the country, and given this, companies like ours need to go well beyond what we’ve done before.”

The conference call with analysts followed a third-quarter earnings report Thursday afternoon in which Facebook’s results topped expectations, helped by gains in active users that also were higher than Wall Street expected.

Zuckerberg said Facebook “helped 4.4 million people register [to vote] exceeding the goal that we set for ourselves this summer.”


From Climate Change to the Dangers of Smoking: How Powerful Interests Made Us Doubt Everything

BBC News reports:
In 1991, the trade body that represents electrical companies in the U.S., the Edison Electric Institute, created a campaign called the Information Council for the Environment which aimed to “Reposition global warming as theory (not fact)”. Some details of the campaign were leaked to the New York Times. “They ran advertising campaigns designed to undermine public support, cherry picking the data to say, ‘Well if the world is warming up, why is Kentucky getting colder?’ They asked rhetorical questions designed to create confusion, to create doubt,” argued Naomi Oreskes, professor of the history of science at Harvard University and co-author of Merchants of Doubt. But back in the 1990s there were many campaigns like this…

Most of the organisations opposing or denying climate change science were right-wing think tanks, which tended to be passionately anti-regulation. These groups made convenient allies for the oil industry, as they would argue against action on climate change on ideological grounds. Jerry Taylor spent 23 years with the Cato Institute — one of those right-wing think tanks — latterly as vice president. Before he left in 2014, he would regularly appear on TV and radio, insisting that the science of climate change was uncertain and there was no need to act.

Now, he realises his arguments were based on a misinterpretation of the science, and he regrets the impact he’s had on the debate.

Harvard historian Naomi Oreskes discovered leading climate-change skeptics had also been prominent skeptics on the dangers of cigarette smoking. “That was a Eureka moment,” Oreskes tells BBC News. “We realised this was not a scientific debate.”

Decades before the energy industry tried to undermine the case for climate change, tobacco companies had used the same techniques to challenge the emerging links between smoking and lung cancer in the 1950s… As a later document by tobacco company Brown and Williamson summarised the approach: “Doubt is our product, since it is the best means of competing with the ‘body of fact’ that exists in the minds of the general public.” Naomi Oreskes says this understanding of the power of doubt is vital. “They realise they can’t win this battle by making a false claim that sooner or later would be exposed. But if they can create doubt, that would be sufficient — because if people are confused about the issue, there’s a good chance they’ll just keep smoking….”

Academics like David Michaels, author of The Triumph of Doubt, fear the use of uncertainty in the past to confuse the public and undermine science has contributed to a dangerous erosion of trust in facts and experts across the globe today, far beyond climate science or the dangers of tobacco. He cites public attitudes to modern issues like the safety of 5G, vaccinations — and coronavirus.

“By cynically manipulating and distorting scientific evidence, the manufacturers of doubt have seeded in much of the public a cynicism about science, making it far more difficult to convince people that science provides useful — in some cases, vitally important — information.”


As Coronavirus Surveillance Escalates, Personal Privacy Plummets

Tracking entire populations to combat the pandemic now could open the doors to more invasive forms of government snooping later.

In South Korea, government agencies are harnessing surveillance-camera footage, smartphone location data and credit card purchase records to help trace the recent movements of coronavirus patients and establish virus transmission chains. In Lombardy, Italy, the authorities are analyzing location data transmitted by citizens’ mobile phones to determine how many people are obeying a government lockdown order and the typical distances they move every day. About 40 percent are moving around “too much,” an official recently said. In Israel, the country’s internal security agency is poised to start using a cache of mobile phone location data — originally intended for counterterrorism operations — to try to pinpoint citizens who may have been exposed to the virus.

As countries around the world race to contain the pandemic, many are deploying digital surveillance tools as a means to exert social control, even turning security agency technologies on their own civilians. Health and law enforcement authorities are understandably eager to employ every tool at their disposal to try to hinder the virus — even as the surveillance efforts threaten to alter the precarious balance between public safety and personal privacy on a global scale. Yet ratcheting up surveillance to combat the pandemic now could permanently open the doors to more invasive forms of snooping later. It is a lesson Americans learned after the terrorist attacks of Sept. 11, 2001, civil liberties experts say. Nearly two decades later, law enforcement agencies have access to higher-powered surveillance systems, like fine-grained location tracking and facial recognition — technologies that may be repurposed to further political agendas like anti-immigration policies. Civil liberties experts warn that the public has little recourse to challenge these digital exercises of state power.


In Coronavirus Fight, China Gives Citizens a Color Code, With Red Flags

A new system uses software to dictate quarantines — and appears to send personal data to police, in a troubling precedent for automated social control. As China encourages people to return to work despite the coronavirus outbreak, it has begun a bold mass experiment in using data to regulate citizens’ lives — by requiring them to use software on their smartphones that dictates whether they should be quarantined or allowed into subways, malls and other public spaces. But a New York Times analysis of the software’s code found that the system does more than decide in real time whether someone poses a contagion risk. It also appears to share information with the police, setting a template for new forms of automated social control that could persist long after the epidemic subsides.

The Alipay Health Code, as China’s official news media has called the system, was first introduced in the eastern city of Hangzhou — a project by the local government with the help of Ant Financial, a sister company of the e-commerce giant Alibaba. People in China sign up through Ant’s popular wallet app, Alipay, and are assigned a color code — green, yellow or red — that indicates their health status. The system is already in use in 200 cities and is being rolled out nationwide, Ant says. Neither the company nor Chinese officials have explained in detail how the system classifies people. That has caused fear and bewilderment among those who are ordered to isolate themselves and have no idea why.


Chinese Newspaper Touts Videogame Where Players ‘Hunt Down Traitors’ in Hong Kong

An online game calling on players to hunt down traitors who seek to separate Hong Kong from China and fuel street violence has reportedly begun to attract players across Chinese mainland social media platforms. The game, “fight the traitors together,” is set against the backdrop of the social unrest that has persisted in Hong Kong. The script asks the player to find eight secessionists hidden in the crowd participating in Hong Kong protests.

Players can knock them down with slaps or rotten eggs until they are captured. Online gamers claim the game allows them to vent their anger at the separatist behavior of secessionists during the recent Hong Kong riots. The eight traitors in the game, caricatured based on real people, include Jimmy Lai Chee-ying, Martin Lee Chu-ming and Joshua Wong Chi-fung, prominent opposition figures who have played a major role in inciting unrest in Hong Kong. There are also traitor figures in ancient China…

In the game, amid a crowd of black-clad rioters wearing yellow hats and face masks, Anson Chan Fang On-sang, another leading opposition figure, carries a bag with a U.S. flag, clutches a stack of U.S. dollars and holds a loudspeaker to incite violence in the streets.


Digital Authoritarianism Is On the Rise Around the World, Report Warns

Internet freedom declined for a ninth consecutive year as governments around the world used social media to monitor citizens and manipulate elections, according to a new study that warned of creeping “digital authoritarianism.” Thirty-three of the 65 countries surveyed were found to have experienced worsening internet freedom since June 2018, compared with 16 where conditions improved. The study, conducted by Freedom House, a nonprofit human rights advocacy organization, said domestic disinformation had grown as a threat to democracy, with populist leaders and their online supporters using the internet to distort political discussions. The organization found domestic interference in 26 of the 30 countries that held elections over the past year.

The report said internet freedom in the U.S. had declined, in large part because law enforcement and immigration agencies used social media to monitor people, though the country was still deemed “free.” China was dubbed the “worst abuser of internet freedom” for a fourth consecutive year as the government tightened information controls because of the 30th anniversary of the Tiananmen Square massacre and protests in Hong Kong. Noting that the biggest platforms were American, Freedom House called on the U.S. to lead in the effort to fix social media transparency and accountability. “This is the only way to stop the internet from becoming a Trojan horse for tyranny and oppression,” wrote Adrian Shahbaz, one of the authors of the report.


Spain and GitHub Are Blocking an App That Helped Protesters Organize

For the last month, hundreds of thousands of people have joined demonstrations in Spain to voice their objection to the jailing of Catalan separatist leaders and to support Catalonian independence. As with almost all modern activist and public protest movements, activists are using social media and apps to communicate and organize public actions. But this week, in a move that puts the Spanish government on par with censorship-heavy places like China and Russia, the country requested that GitHub block access to one of those apps by revoking local access to its GitHub repository. GitHub, which is owned by Microsoft, complied with the order.

According to Spanish news outlet El Confidencial, last week the government ordered takedowns of websites and an app made by Tsunami Democratic, an activist group organizing protests in the region. To try to keep access to the app download alive, Tsunami Democratic moved the .apk file to GitHub. But the government shut that down, too, blocking the site in Spain. Motherboard tested the download using a VPN, and the GitHub repo was blocked from Madrid.


Can Google Ads Change Your Beliefs?

There are advertisers in the digital marketing industry who want to find out how effective this new form of social engineering is. One of those advertisers is Google. The Redirect Method was a Google-incubated project that used redirect ads to deradicalize would-be extremists. In the first eight weeks of 2016, some 320,000 people — all of whom were believed to harbor sympathy toward the Islamic State — clicked on ads designed to reflect an interest in extremist content. Instead of arriving at a page that supported their views, Islamic State sympathizers who clicked the ads found themselves directed to a playlist of videos debunking the terror group’s recruitment narratives. Most of the visitors stuck around. Together, they watched more than half a million minutes of video.

After the ISIS campaign ended, Google left behind a blueprint.

The blueprint shows, step by step, how you can create your own redirect ads to sway any belief or opinion — held by any Google user, anywhere in the world — of your choice. You don’t have to be a marketer with years of experience to do this. You just need to follow the instructions and put up a credit card (a few hundred bucks will suffice). Recently, I followed the blueprint and created a redirect campaign of my own… The goal of my first redirect campaign was to sway the ideology of suicidal people.

Nearly one in three apparently suicidal searchers who viewed the ad then dialed the hotline number — which forwarded the calls to the National Suicide Prevention Lifeline. But the author expressed surprise that Google “let me run the ads with no issue… I didn’t need qualifications to be a conduit of peoples’ fates.” He later tried creating another campaign for prospective school shooters, “but the conversion rates were low. These prospective shooters were reluctant to speak with someone.”
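
Mechanically, such a campaign is ordinary search advertising: bid on queries that signal the intent you want to reach, then send everyone who clicks to counter-content instead of what they were looking for. The sketch below shows that core keyword-to-redirect logic; the keywords, URLs, and matching rule are invented for illustration and are not Google’s Ads API or the blueprint’s actual configuration.

```python
# Hypothetical redirect-campaign logic: intent-signaling keywords mapped
# to counter-narrative landing pages. All keywords and URLs are invented.
CAMPAIGN = {
    ("recruitment video", "how to join"): "https://example.org/counter-playlist",
    ("i want to disappear", "no way out"): "https://example.org/crisis-hotline",
}

def match_ad(query: str) -> str | None:
    """Return a redirect landing page if the query signals targeted intent."""
    q = query.lower()
    for keywords, landing_page in CAMPAIGN.items():
        if any(k in q for k in keywords):
            return landing_page
    return None  # no ad served for this query

print(match_ad("no way out of this"))  # -> https://example.org/crisis-hotline
```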

Yet one study found that more than 50% of people using Google couldn’t distinguish between ads and “organic” search results.


Do Google and Facebook Threaten Our ‘Ambient Privacy’?

Until recently, ambient privacy was a simple fact of life. Recording something for posterity required making special arrangements, and most of our shared experience of the past was filtered through the attenuating haze of human memory. Even police states like East Germany, where one in seven citizens was an informer, were not able to keep tabs on their entire population. Today computers have given us that power. Authoritarian states like China and Saudi Arabia are using this newfound capacity as a tool of social control. Here in the United States, we’re using it to show ads. But the infrastructure of total surveillance is everywhere the same, and everywhere being deployed at scale….

Because our laws frame privacy as an individual right, we don’t have a mechanism for deciding whether we want to live in a surveillance society. Congress has remained silent on the matter, with both parties content to watch Silicon Valley make up its own rules. The large tech companies point to our willing use of their services as proof that people don’t really care about their privacy. But this is like arguing that inmates are happy to be in jail because they use the prison library. Confronted with the reality of a monitored world, people make the rational decision to make the best of it.

That is not consent…

Our discourse around privacy needs to expand to address foundational questions about the role of automation: To what extent is living in a surveillance-saturated world compatible with pluralism and democracy? What are the consequences of raising a generation of children whose every action feeds into a corporate database? What does it mean to be manipulated from an early age by machine learning algorithms that adaptively learn to shape our behavior? That is not the conversation Facebook or Google want us to have. Their totalizing vision is of a world with no ambient privacy and strong data protections, dominated by the few companies that can manage to hoard information at a planetary scale. They correctly see the new round of privacy laws as a weapon to deploy against smaller rivals, further consolidating their control over the algorithmic panopticon.


As Facebook Raised a Privacy Wall, It Carved an Opening for Tech Giants

Internal documents show that the social network gave Microsoft, Amazon, Spotify and others far greater access to people’s data than it has disclosed.

For years, Facebook gave some of the world’s largest technology companies more intrusive access to users’ personal data than it has disclosed, effectively exempting those business partners from its usual privacy rules, according to internal records and interviews.

The special arrangements are detailed in hundreds of pages of Facebook documents obtained by The New York Times. The records, generated in 2017 by the company’s internal system for tracking partnerships, provide the most complete picture yet of the social network’s data-sharing practices. They also underscore how personal data has become the most prized commodity of the digital age, traded on a vast scale by some of the most powerful companies in Silicon Valley and beyond.

Facebook allowed Microsoft’s Bing search engine to see the names of virtually all Facebook users’ friends without consent, the records show, and gave Netflix and Spotify the ability to read Facebook users’ private messages.

The social network permitted Amazon to obtain users’ names and contact information through their friends, and it let Yahoo view streams of friends’ posts as recently as this summer, despite public statements that it had stopped that type of sharing years earlier.

Facebook has been reeling from a series of privacy scandals, set off by revelations in March that a political consulting firm, Cambridge Analytica, improperly used Facebook data to build tools that aided President Trump’s 2016 campaign. Acknowledging that it had breached users’ trust, Facebook insisted that it had instituted stricter privacy protections long ago. Mark Zuckerberg, the chief executive, assured lawmakers in April that people “have complete control” over everything they share on Facebook.


Facebook began forming data partnerships when it was still a relatively young company. Mr. Zuckerberg was determined to weave Facebook’s services into other sites and platforms, believing it would stave off obsolescence and insulate Facebook from competition. Every corporate partner that integrated Facebook data into its online products helped drive the platform’s expansion, bringing in new users, spurring them to spend more time on Facebook and driving up advertising revenue. At the same time, Facebook got critical data back from its partners.

The partnerships were so important that decisions about forming them were vetted at high levels, sometimes by Mr. Zuckerberg and Sheryl Sandberg, the chief operating officer, Facebook officials said. While many of the partnerships were announced publicly, the details of the sharing arrangements typically were confidential.

Facebook also allowed Spotify, Netflix and the Royal Bank of Canada to read, write and delete users’ private messages, and to see all participants on a thread — privileges that appeared to go beyond what the companies needed to integrate Facebook into their systems, the records show. Facebook acknowledged that it did not consider any of those three companies to be service providers. Spokespeople for Spotify and Netflix said those companies were unaware of the broad powers Facebook had granted them. A spokesman for Netflix said Wednesday that it had used the access only to enable customers to recommend TV shows and movies to their friends.

A Royal Bank of Canada spokesman disputed that the bank had had any such access. (Aspects of some sharing partnerships, including those with the Royal Bank of Canada and Bing, were first reported by The Wall Street Journal.)

Spotify, which could view messages of more than 70 million users a month, still offers the option to share music through Facebook Messenger. But Netflix and the Canadian bank no longer needed access to messages because they had deactivated features that incorporated it.

These were not the only companies that had special access longer than they needed it. Yahoo, The Times and others could still get Facebook users’ personal information in 2017.

Yahoo could view real-time feeds of friends’ posts for a feature that the company had ended in 2012. A Yahoo spokesman declined to discuss the partnership in detail but said the company did not use the information for advertising. The Times — one of nine media companies named in the documents — had access to users’ friend lists for an article-sharing application it had discontinued in 2011. A spokeswoman for the news organization said it was not obtaining any data.

Facebook’s internal records also revealed more about the extent of sharing deals with over 60 makers of smartphones, tablets and other devices, agreements first reported by The Times in June.

Facebook empowered Apple to hide from Facebook users all indicators that its devices were asking for data. Apple devices also had access to the contact numbers and calendar entries of people who had changed their account settings to disable all sharing, the records show.

Apple officials said they were not aware that Facebook had granted its devices any special access. They added that any shared data remained on the devices and was not available to anyone other than the users.


AI Mistakes Ad On a Bus For an Actual CEO, Then Publicly Shames Them For ‘Jaywalking’

Since last year, many Chinese cities have cracked down on jaywalking by investing in facial recognition systems and AI-powered surveillance cameras. Jaywalkers are identified and shamed by displaying their photographs on large public screens… Developments are also underway to engage the country’s mobile network operators and social media platforms, such as Tencent Holdings’ WeChat and Sina Weibo, to establish a system in which offenders will receive personal text messages as soon as they are caught violating traffic rules….

Making a compelling case for change is the recent experience of Dong Mingzhu, chairwoman of Gree Electric Appliances, China’s biggest maker of air conditioners, who found her face splashed on a huge screen erected along a street in the port city of Ningbo… That artificial intelligence-backed surveillance system, however, erred in capturing Dong’s image on Wednesday from an advertisement on the side of a moving bus. The traffic police in Ningbo, a city in the eastern coastal province of Zhejiang, were quick to recognise the mistake, writing in a post on the microblog Sina Weibo on Wednesday that they had deleted the snapshot. They also said the surveillance system would be completely upgraded to cut incidents of false recognition in the future.
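
The failure mode is generic to face recognition systems: a face detected by the camera is converted to an embedding and compared against a watchlist, and a match is declared when similarity crosses a threshold. A sharp photo of a face on a bus ad yields nearly the same embedding as the face itself, so it clears the same bar. A minimal sketch, with made-up embeddings and threshold rather than anything from Ningbo’s actual system:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(face_embedding, database, threshold=0.8):
    """Return the best database match above the threshold, else None.

    A photograph of a face (e.g. on a bus advertisement) produces
    nearly the same embedding as the face itself, so it clears the
    same threshold -- a false positive unless liveness checks exist.
    """
    best_name, best_score = None, threshold
    for name, ref in database.items():
        score = cosine_similarity(face_embedding, ref)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

rng = np.random.default_rng(0)
dong = rng.normal(size=128)                       # enrolled reference face
database = {"Dong Mingzhu": dong}
ad_photo = dong + rng.normal(scale=0.05, size=128)  # ad closely resembles her
print(identify(ad_photo, database))  # -> "Dong Mingzhu" (false positive)
```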


When Tech Knows You Better Than You Know Yourself

Algorithms are kind of running where 2 billion people spend their time. Seventy percent of what people watch on YouTube is driven by recommendations from the algorithm. People think that what you’re watching on YouTube is a choice. People are sitting there, they sit there, they think, and then they choose. But that’s not true. Seventy percent of what people are watching is the recommended videos on the right-hand side, which means 70 percent of 1.9 billion users (that’s more than the number of followers of Islam, about the number of followers of Christianity) of what they’re looking at on YouTube for 60 minutes a day. That’s the average time people spend on YouTube. So you’ve got 60 minutes, and 70 percent is populated by a computer. The machine is out of control.


Universal Basic Income: Silicon Valley’s Push for Our Further Enslavement

Douglas Rushkoff, long-time open source advocate (and currently a professor of Digital Economics at the City University of New York, Queens College), is calling Universal Basic Income “no gift to the masses, but a tool for our further enslavement.”

Uber’s business plan, like that of so many other digital unicorns, is based on extracting all the value from the markets it enters. This ultimately means squeezing employees, customers, and suppliers alike in the name of continued growth. When people eventually become too poor to continue working as drivers or paying for rides, UBI supplies the required cash infusion for the business to keep operating. When it’s looked at the way a software developer would, it’s clear that UBI is really little more than a patch to a program that’s fundamentally flawed. The real purpose of digital capitalism is to extract value from the economy and deliver it to those at the top. If consumers find a way to retain some of that value for themselves, the thinking goes, you’re doing something wrong or “leaving money on the table.”

Walmart perfected the softer version of this model in the 20th century. Move into a town, undercut the local merchants by selling items below cost, and put everyone else out of business. Then, as sole retailer and sole employer, set the prices and wages you want. So what if your workers have to go on welfare and food stamps. Now, digital companies are accomplishing the same thing, only faster and more completely…. Soon, consumers simply can’t consume enough to keep the revenues flowing in. Even the prospect of stockpiling everyone’s data, like Facebook or Google do, begins to lose its allure if none of the people behind the data have any money to spend. To the rescue comes UBI.

The policy was once thought of as a way of taking extreme poverty off the table. In this new incarnation, however, it merely serves as a way to keep the wealthiest people (and their loyal vassals, the software developers) entrenched at the very top of the economic operating system. Because of course, the cash doled out to citizens by the government will inevitably flow to them… Under the guise of compassion, UBI really just turns us from stakeholders or even citizens to mere consumers. Once the ability to create or exchange value is stripped from us, all we can do with every consumptive act is deliver more power to people who can finally, without any exaggeration, be called our corporate overlords… if Silicon Valley’s UBI fans really wanted to repair the economic operating system, they should be looking not to universal basic income but universal basic assets, first proposed by Institute for the Future’s Marina Gorbis… As appealing as it may sound, UBI is nothing more than a way for corporations to increase their power over us, all under the pretense of putting us on the payroll. It’s the candy that a creep offers a kid to get into the car or the raise a sleazy employer gives a staff member who they’ve sexually harassed. It’s hush money.

Rushkoff’s conclusion? “Whether its proponents are cynical or simply naive, UBI is not the patch we need.”


As Google Maps Renames Neighbourhoods, Residents Fume

For decades, the district south of downtown and alongside San Francisco Bay here was known as either Rincon Hill, South Beach or South of Market. This spring, it was suddenly rebranded on Google Maps to a name few had heard: the East Cut. The peculiar moniker immediately spread digitally, from hotel sites to dating apps to Uber, which all use Google’s map data. The name soon spilled over into the physical world, too. Real-estate listings beckoned prospective tenants to the East Cut. And news organizations referred to the vicinity by that term.

“It’s degrading to the reputation of our area,” said Tad Bogdan, who has lived in the neighborhood for 14 years. In a survey of 271 neighbors that he organized recently, he said, 90 percent disliked the name. The swift rebranding of the roughly 170-year-old district is just one example of how Google Maps has now become the primary arbiter of place names. With decisions made by a few Google cartographers, the identity of a city, town or neighborhood can be reshaped, illustrating the outsize influence that Silicon Valley increasingly has in the real world.


Social Media Manipulation Rising Globally, New Oxford Report Warns

A new report from Oxford University found that manipulation of public opinion over social media platforms is growing at a large scale, despite efforts to combat it. “Around the world, government agencies and political parties are exploiting social media platforms to spread junk news and disinformation, exercise censorship and control, and undermine trust in media, public institutions and science.”

“The number of countries where formally organized social media manipulation occurs has greatly increased, from 28 to 48 countries globally,” says Samantha Bradshaw, co-author of the report. “The majority of growth comes from political parties who spread disinformation and junk news around election periods. There are more political parties learning from the strategies deployed during Brexit and the U.S. 2016 Presidential election: more campaigns are using bots, junk news, and disinformation to polarize and manipulate voters.”

This is despite governments in many democracies introducing new legislation designed to combat fake news on the internet. “The problem with this is that these ‘task forces’ to combat fake news are being used as a new tool to legitimize censorship in authoritarian regimes,” says Professor Phil Howard, co-author and lead researcher on the OII’s Computational Propaganda project. “At best, these types of task forces are creating counter-narratives and building tools for citizen awareness and fact-checking.” Another challenge is the evolution of the mediums individuals use to share news and information. “There is evidence that disinformation campaigns are moving on to chat applications and alternative platforms,” says Bradshaw. “This is becoming increasingly common in the Global South, where large public groups on chat applications are more popular.”


How the “Math Men” Overthrew the “Mad Men”

Once, Mad Men ruled advertising. They’ve now been eclipsed by Math Men — the engineers and data scientists whose province is machines, algorithms, pureed data, and artificial intelligence. Yet Math Men are beleaguered, as Mark Zuckerberg demonstrated when he humbled himself before Congress, in April. Math Men’s adoration of data — coupled with their truculence and an arrogant conviction that their ‘science’ is nearly flawless — has aroused government anger, much as Microsoft did two decades ago.

The power of Math Men is awesome. Google and Facebook each has a market value exceeding the combined value of the six largest advertising and marketing holding companies. Together, they claim six out of every ten dollars spent on digital advertising, and nine out of ten new digital ad dollars. They have become more dominant in what is estimated to be an up to two-trillion-dollar annual global advertising and marketing business. Facebook alone generates more ad dollars than all of America’s newspapers, and Google has twice the ad revenues of Facebook.


Why the Facebook ‘scandal’ impacts you more than you think

It’s not just the data you choose to share.

By now we all know the story: Facebook allowed apps on its social media platform which enabled a shady outfit called Cambridge Analytica to scrape the profiles of 87 million users, in order to serve up targeted ads to benefit the Trump election campaign in 2016.  More than 300,000 Australian users of Facebook were caught up in the data harvesting.

But serving up ads in a foreign election campaign is not the whole story.  Facebook, and other companies involved in data mining, are invading our privacy and harming us economically and socially, in ways that are only just starting to become clear.

And it’s not just the data you choose to share. The information you post is not the whole story.  It’s only the tip of the iceberg of data that Facebook has collected about you.

Every time you go online you leave a trail of digital breadcrumbs.  Facebook has been busily sweeping up those breadcrumbs, and using them to categorise and profile you.  Facebook obviously knows when you click on a Facebook ‘like’ button; but also, unless a web developer has gone out of their way to find tools to block them (as we have done for our Salinger Privacy blog), Facebook knows every time you simply look at a website that has a Facebook ‘like’ button somewhere on it.

So if you only post or ‘like’ stories about inspirational mountain climbers and funny cat videos, but also do things online that you don’t share with your family, friends or work colleagues (like looking at stories about abortion or dealing with infidelity, Googling how to manage anxiety or erectile dysfunction, whingeing about your employer in a chatroom, or spending hours reviewing dating profiles, gambling or shopping obsessively for shoes)  — Facebook has you pegged anyway.
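
The mechanism is worth spelling out: every page that embeds the button makes your browser fetch a resource from Facebook’s servers, and that request carries both the address of the page you are reading and your Facebook cookie, so visits to unrelated sites can be joined into a single profile keyed by that cookie. A toy sketch of the server-side stitching, with an invented log format:

```python
from collections import defaultdict

# Each embedded 'like' button triggers a request to the tracker's
# servers carrying (cookie_id, referring_page) -- hypothetical log rows.
requests_log = [
    ("cookie_abc", "https://news.example/inspirational-climbers"),
    ("cookie_abc", "https://health.example/managing-anxiety"),
    ("cookie_abc", "https://forum.example/complaints-about-my-employer"),
    ("cookie_xyz", "https://shoes.example/sale"),
]

def build_profiles(log):
    """Join visits across unrelated sites by the shared cookie ID."""
    profiles = defaultdict(list)
    for cookie_id, page in log:
        profiles[cookie_id].append(page)
    return dict(profiles)

# The user never clicked 'like' on any of these pages; merely loading
# them was enough to add each visit to the profile.
for cookie, pages in build_profiles(requests_log).items():
    print(cookie, pages)
```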

Plus, Facebook obtains data from other sources which know about your offline purchases, to build an even richer picture of who you really are.  And of course, Facebook may have access to your address book, your location history, the contents of your private messages, and depending on your brand of phone, possibly even a history of your phone calls and text messages.

All that information is used to draw inferences and assumptions about your preferences, and predict your likely behaviour.  The results are then used to categorise, profile and ultimately target you, in a process usually described as ‘online behavioural advertising’.

It’s not ‘just ads’

The objective of online behavioural advertising is to predict your purchasing interests and drive a purchase decision.  So far, the same as any other advertising.  But online, the implications for us as individuals are much greater.

Facebook’s promise to advertisers is that it can show their ad to exactly who the advertiser wants, and exclude everybody else.

However, by allowing exclusion, the platform also allows discrimination.  Facebook has been caught allowing advertisers to target — and exclude — people on the basis of their ‘ethnic affinity’, amongst other social, demographic, racial and religious characteristics.  So a landlord with an ad for rental housing could prevent people profiled as ‘single mothers’ from ever seeing their ad.  An employer could prevent people identifying as Jewish from seeing a job ad.  A bank could prevent people categorised as African Americans from seeing an ad for a home loan.
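
Mechanically, exclusion targeting is just a filter applied to profiled attributes before an ad is served: anyone whose inferred categories intersect the advertiser’s exclusion list never sees the ad, as in this toy sketch (the attributes and data are invented, not Facebook’s actual targeting options):

```python
# Invented audience profiles with inferred (not self-declared) categories.
audience = [
    {"id": 1, "inferred": {"parent", "single"}},
    {"id": 2, "inferred": {"renter"}},
    {"id": 3, "inferred": {"homeowner"}},
]

def eligible(ad_exclusions: set, audience: list) -> list:
    """Drop anyone whose inferred profile intersects the exclusion list."""
    return [p["id"] for p in audience if not (p["inferred"] & ad_exclusions)]

# A rental-housing ad that excludes people profiled as 'single':
print(eligible({"single"}, audience))  # -> [2, 3]; user 1 never sees the ad
```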

Existing patterns of social exclusion, economic inequality and discrimination are further entrenched by micro-targeted advertising, which is hidden from public view and regulatory scrutiny.


Predictive analytics can narrow or alter your life choices

Once we move beyond straight-up advertising and into predictive analytics, the impact on individual autonomy becomes more acute.  Big Data feeds machine learning, which finds patterns in the data, from which new rules (algorithms) are designed.  Algorithms predict how a person will behave, and suggest how they should be treated.
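
Concretely, the pipeline described here is plain supervised learning: historical behavioral data fits a model, and the model’s probability output for a new person then decides how that person is treated. A minimal sketch with scikit-learn, using invented features and data purely for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented training data: rows are people, columns are behavioral
# signals (e.g. pages viewed, purchases, hours active per day).
X = np.array([[12, 0, 3], [2, 1, 9], [10, 0, 4], [1, 1, 8], [11, 1, 2], [3, 0, 9]])
y = np.array([1, 0, 1, 0, 1, 0])  # past behavior the model learns to predict

model = LogisticRegression().fit(X, y)

# A new person is scored, and the score decides how they are treated:
# which ads, prices, or interventions they are shown.
new_person = np.array([[9, 0, 3]])
p = model.predict_proba(new_person)[0, 1]
print(f"predicted probability: {p:.2f}")
if p > 0.7:
    print("target with offer A")  # treatment chosen by the algorithm
```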

Algorithms can lead to price discrimination, like surge pricing based on Uber knowing how much phone battery life you have left.  Or market exclusion, like Woolworths only offering car insurance to customers it has decided are low risk, based on an assessment of the groceries they buy.

Banks have been predicting the risk of a borrower defaulting on a loan for decades, but now algorithms are also used to determine who to hire, predict when a customer is pregnant, and deliver targeted search results to influence how you vote.

Algorithms are also being used to predict the students at risk of failure, the prisoners at risk of re-offending, and who is at risk of suicide and then launching interventions accordingly.  However, even leaving aside the accuracy of those predictions, interventions are not necessarily well-intentioned.  It was revealed last year that Australian Facebook executives were touting to advertisers their ability to target psychologically vulnerable teenagers. 

Automated decision-making diminishes our autonomy, by narrowing or altering our market and life choices, in ways that are not clear to us.  People already in a position of economic or social disadvantage face the additional challenge of trying to disprove or beat an invisible algorithm.

In a predictive and pre-emptive world, empathy, forgiveness, rehabilitation, redemption, individual dignity, autonomy and free will are programmed out of our society.

Fiddling with users’ privacy settings on Facebook won’t fix anything.  If we want our lives to be ruled by human values and individual dignity, instead of by machines fed on questionable data, we need robust, enforced and globally effective privacy laws.

A new European privacy law commences later this month.  The obligations include that businesses and governments must offer understandable explanations of how their algorithms work, and allow people to seek human review of automated decision-making.  This is a step in the right direction, which Australia, the US and the rest of the world should follow.
