Resources

Facebook Button is Disappearing From Websites as Consumers Demand Better Privacy

Other big brands, including Best Buy, Ford Motor, Pottery Barn, Nike, Patagonia, Match and Amazon’s video-streaming service Twitch, have removed the ability to sign on with Facebook. It’s a marked departure from just a few years ago, when the Facebook login was plastered all over the internet, often alongside buttons that let you sign in with Google, Twitter or LinkedIn. Jen Felch, Dell’s chief digital and chief information officer, said people stopped using social logins for reasons that include concerns over security, privacy and data-sharing.

Facebook Used Facial Recognition Without Consent 200K Times, Says Watchdog

Another [$22,000] penalty was issued for illegally collecting social security numbers, not issuing notifications regarding personal information management changes, and other missteps. Facebook has been ordered to destroy facial information collected without consent or obtain consent, and was prohibited from processing identity numbers without a legal basis. It was also ordered to destroy collected data and disclose contents related to foreign migration of personal information. Zuck’s brainchild was then told to make it easier for users to check legal notices regarding personal information. The fine is the second-largest ever issued by the organization, with the largest ever also having gone to Facebook. In November 2020, the Social Network was fined [$5.7 million] for passing on personal data to other operators without user permission.

Netflix’s fine was a paltry [$188,000], with that sum imposed for collecting data from five million people without their consent, plus another [$2,700] for not disclosing international transfer of the data. Google got off the easiest, with just a “recommendation” to improve its personal data handling processes and make legal notices more precise. The PIPC said it is not done investigating methods of collecting personal information from overseas businesses and will continue with a legal review.

Privacy.net

Privacy.net exists to help guard your privacy and security online. It highlights some of the violations of privacy by governments, corporations and hackers that most of the general public either ignores or is simply unaware of.

Xiaomi Camera Feed is Showing Random Homes on a Google Nest Hub, Including Still Images of Sleeping People

So-called “smart” security cameras have had some pretty dumb security problems recently, but a new report regarding a Xiaomi Mijia camera linked to a Google Home is especially disturbing. One Xiaomi Mijia camera owner is receiving still images from other, random people’s homes when trying to stream content from his camera to a Google Nest Hub. The images include stills of people sleeping (even an infant in a cradle) inside their own homes. This issue was first reported by user /u/Dio-V on Reddit and affects his Xiaomi Mijia 1080p Smart IP Security Camera, which can be linked to a Google account for use with Google/Nest devices through Xiaomi’s Mi Home app/service. It isn’t clear when Dio-V’s feed first began showing these still images from random homes or how long the camera was connected to his account before this started happening. He does state that both the Nest Hub and the camera were purchased new. The camera was noted as running firmware version 3.5.1_00.66.

Facebook, Google Donate Heavily To Privacy Advocacy Groups

Few companies have more riding on proposed privacy legislation than Alphabet’s Google and Facebook. To try to steer the bill their way, the giant advertising technology companies spend millions of dollars each year on lobbying, a fact confirmed by government filings. Not so well documented is their spending to support highly influential think tanks and public interest groups that are helping shape the privacy debate, ostensibly as independent observers. Bloomberg Law examined seven prominent nonprofit think tanks that work on privacy issues and that together received a total of $1.5 million over an 18-month period ending Dec. 31, 2018. The groups included such organizations as the Center for Democracy and Technology, the Future of Privacy Forum and the Brookings Institution. The actual total is undoubtedly much higher — exact totals for contributions were difficult to pin down. The tech giants have “funded scores of nonprofits, including consumer and privacy groups, and academics,” said Jeffrey Chester, executive director at the Center for Digital Democracy, a public interest group that does not accept donations from Google or Facebook. Further, he says, their influence is strong. The companies have “opposed federal privacy laws and worked to weaken existing safeguards,” Chester said. Accepting donations from these “privacy-killing companies enables them to influence decisions by nonprofits, even subtly,” he said.

Google Loans Cameras To Volunteers To Fill Gaps in ‘Street View’

Kanhema, who works as a product manager in Silicon Valley and is a freelance photographer in his spare time, volunteered to carry Google’s Street View gear to map what amounted to 2,000 miles of his home country, Zimbabwe. The Berkeley, Calif., resident has filled in the map of other areas in Africa and Canada as well.

“We start in the large metropolitan areas where we know we have users, where it’s easy for us to drive and we can execute quickly,” says Stafford Marquardt, a product manager for Street View.

He says the team is working to expand the service’s reach. To do that, Google often relies on volunteers who can either borrow the company’s camera equipment or take photos using their own. Most images on Street View are collected by drivers, and most of these drivers are employed by third parties that work with Google. But when it comes to the places Google hasn’t prioritized, people like Kanhema can fill in the gaps.

“It’s so conspicuous to have a 4-foot contraption attached to the roof of your car,” Kanhema says. “People are walking up and asking questions about, ‘Is that a camera? What are you recording? What are you filming? Is it for Google Maps? Will my house be on the map? Will my face be on the map?’”

Google doesn’t pay him or the other volunteers — whom the company calls “contributors” — for the content they upload. Kanhema, for example, spent around $5,000 of his own money to travel across Zimbabwe for the project.

Google says it currently has no plans to compensate its volunteers, adding that it pays contributors “in a lot of other ways” by offering “a platform to host gigabytes and terabytes of imagery and publish it to the entire world, absolutely for free.”

You’re very easy to track down, even when your data has been anonymized

The most common way public agencies protect our identities is anonymization. This involves stripping out obviously identifiable things such as names, phone numbers, email addresses, and so on. Data sets are also altered to be less precise, columns in spreadsheets are removed, and “noise” is introduced to the data. Privacy policies reassure us that this means there’s no risk we could be tracked down in the database. However, a new study in Nature Communications suggests this is far from the case. Researchers from Imperial College London and the University of Louvain have created a machine-learning model that estimates exactly how easy individuals are to reidentify from an anonymized data set. The researchers have published an online tool that lets you check your own score by entering your zip code, gender, and date of birth.

On average, in the U.S., using those three records, you could be correctly located in an “anonymized” database 81% of the time. Given 15 demographic attributes of someone living in Massachusetts, there’s a 99.98% chance you could find that person in any anonymized database. The tool was created by assembling a database of 210 different data sets from five sources, including the U.S. Census. The researchers fed this data into a machine-learning model, which learned which combinations are more nearly unique and which are less so, and then assigned a probability of correct identification.
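
The mechanics behind those numbers are easy to demonstrate. The sketch below is a toy illustration of the core idea only, not the study’s actual model (the paper describes a generative, copula-based approach): as you combine more quasi-identifiers, more records in a table become unique. The data and column names here are invented.

```python
# Toy illustration: how quickly a few attributes single people out.
import pandas as pd

def reidentification_rate(df: pd.DataFrame, attrs: list[str]) -> float:
    """Fraction of records whose attribute combination is unique,
    i.e. individuals who could be singled out exactly in this table."""
    group_sizes = df.groupby(attrs).size()
    unique_individuals = group_sizes[group_sizes == 1].sum()
    return unique_individuals / len(df)

# Stand-in for an "anonymized" release: names already stripped.
people = pd.DataFrame({
    "zip":    ["02139", "02139", "02139", "94105", "94105"],
    "gender": ["F", "F", "M", "M", "F"],
    "dob":    ["1985-04-12", "1985-04-12", "1990-07-01",
               "1990-07-01", "1962-11-30"],
})

print(reidentification_rate(people, ["zip"]))                   # 0.0
print(reidentification_rate(people, ["zip", "gender", "dob"]))  # 0.6
```

Even in this five-row toy, adding gender and date of birth to zip code takes the uniquely identifiable share from zero to 60%; real data sets behave the same way as attributes accumulate, which is why 15 of them are enough to pin down almost anyone.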

How America’s Tech Giants Are Helping Build China’s Surveillance State

The OpenPower Foundation — a nonprofit led by Google and IBM executives with the aim of trying to “drive innovation” — has set up a collaboration between IBM, Chinese company Semptian, and U.S. chip manufacturer Xilinx. Together, they have worked to advance a breed of microprocessors that enable computers to analyze vast amounts of data more efficiently. Shenzhen-based Semptian is using the devices to enhance the capabilities of internet surveillance and censorship technology it provides to human rights-abusing security agencies in China, according to sources and documents. A company employee said that its technology is being used to covertly monitor the internet activity of 200 million people…

Semptian presents itself publicly as a “big data” analysis company that works with internet providers and educational institutes. However, a substantial portion of the Chinese firm’s business is in fact generated through a front company named iNext, which sells the internet surveillance and censorship tools to governments. iNext operates out of the same offices in China as Semptian, with both companies on the eighth floor of a tower in Shenzhen’s busy Nanshan District. Semptian and iNext also share the same 200 employees and the same founder, Chen Longsen. [The company’s] Aegis equipment has been placed within China’s phone and internet networks, enabling the country’s government to secretly collect people’s email records, phone calls, text messages, cellphone locations, and web browsing histories, according to two sources familiar with Semptian’s work.

Promotional documents obtained from the company promise “location information for everyone in the country.” One company representative even told the Intercept they were processing “thousands of terabits per second,” and — not knowing they were talking to a reporter — forwarded a 16-minute video detailing their technology. “If a government operative enters a person’s cellphone number, Aegis can show where the device has been over a given period of time: the last three days, the last week, the last month, or longer,” the Intercept reports.

Do Google and Facebook Threaten Our ‘Ambient Privacy’?

Until recently, ambient privacy was a simple fact of life. Recording something for posterity required making special arrangements, and most of our shared experience of the past was filtered through the attenuating haze of human memory. Even police states like East Germany, where one in seven citizens was an informer, were not able to keep tabs on their entire population. Today computers have given us that power. Authoritarian states like China and Saudi Arabia are using this newfound capacity as a tool of social control. Here in the United States, we’re using it to show ads. But the infrastructure of total surveillance is everywhere the same, and everywhere being deployed at scale….

Because our laws frame privacy as an individual right, we don’t have a mechanism for deciding whether we want to live in a surveillance society. Congress has remained silent on the matter, with both parties content to watch Silicon Valley make up its own rules. The large tech companies point to our willing use of their services as proof that people don’t really care about their privacy. But this is like arguing that inmates are happy to be in jail because they use the prison library. Confronted with the reality of a monitored world, people make the rational decision to make the best of it.

That is not consent…

Our discourse around privacy needs to expand to address foundational questions about the role of automation: To what extent is living in a surveillance-saturated world compatible with pluralism and democracy? What are the consequences of raising a generation of children whose every action feeds into a corporate database? What does it mean to be manipulated from an early age by machine learning algorithms that adaptively learn to shape our behavior? That is not the conversation Facebook or Google want us to have. Their totalizing vision is of a world with no ambient privacy and strong data protections, dominated by the few companies that can manage to hoard information at a planetary scale. They correctly see the new round of privacy laws as a weapon to deploy against smaller rivals, further consolidating their control over the algorithmic panopticon.

Facebook says there’s “No Expectation of Privacy On Social Media”

Facebook wants to dismiss a lawsuit stemming from the Cambridge Analytica scandal by arguing that it didn’t violate users’ privacy rights because there’s no expectation of privacy when using social media. “There is no invasion of privacy at all, because there is no privacy,” Facebook counsel Orin Snyder said during a pretrial hearing on the motion, according to Law360. The company reportedly didn’t deny that third parties accessed users’ data, but it instead told U.S. District Judge Vince Chhabria that there’s no “reasonable expectation of privacy” on Facebook or any other social media site.

Facebook Contractors Categorize Your Private Posts To Train AI

Facebook uses thousands of third-party staffers around the world to look at Facebook and Instagram posts to help train its AI and to inform new products. “But because the contractors see users’ public and private posts, some view it as a violation of privacy.”

According to Reuters, as many as 260 contract workers in Hyderabad, India have spent more than a year labeling millions of Facebook posts dating back to 2014. They look for the subject of the post, the occasion and the author’s intent; Facebook told Reuters it uses that information to develop new features and to potentially increase usage and ad revenue. Around the globe, Facebook has as many as 200 similar content-labeling projects, many of which are used to train the company’s AI.

The contractors working in Hyderabad told Reuters they see everything from text-based status updates to videos, photos and Stories across Facebook and Instagram — including those that are shared privately. And even as Facebook embarks on its “the future is private” platform, one Facebook employee told Reuters he can’t imagine the practice going away. It’s a core part of training AI and developing the company’s products.

‘The goal is to automate us’: welcome to the age of surveillance capitalism

The behaviour of the digital giants looks rather different from the roseate hallucinations of Wired magazine. What one sees instead is a colonising ruthlessness of which John D Rockefeller would have been proud. First of all there was the arrogant appropriation of users’ behavioural data – viewed as a free resource, there for the taking. Then the use of patented methods to extract or infer data even when users had explicitly denied permission, followed by the use of technologies that were opaque by design and fostered user ignorance.

And, of course, there is also the fact that the entire project was conducted in what was effectively lawless – or at any rate law-free – territory. Thus Google decided that it would digitise and store every book ever printed, regardless of copyright issues. Or that it would photograph every street and house on the planet without asking anyone’s permission. Facebook launched its infamous “beacons”, which reported a user’s online activities and published them to others’ news feeds without the knowledge of the user. And so on, in accordance with the disrupter’s mantra that “it is easier to ask for forgiveness than for permission”.

The combination of state surveillance and its capitalist counterpart means that digital technology is separating the citizens in all societies into two groups: the watchers (invisible, unknown and unaccountable) and the watched. This has profound consequences for democracy because asymmetry of knowledge translates into asymmetries of power.

Dutch Government Report Says Microsoft Office Telemetry Collection Breaks EU GDPR Laws

Microsoft broke Euro privacy rules by carrying out the “large scale and covert” gathering of private data through its Office apps, according to a report commissioned by the Dutch government.

It was found that Microsoft was collecting telemetry and other content from its Office applications, including email subject lines and sentences where translation or the spellchecker was used, and secretly storing the data on systems in the United States.

Those actions break Europe’s new GDPR privacy safeguards, it is claimed, and may put Microsoft on the hook for potentially tens of millions of dollars in fines. The Dutch authorities are working with the corporation to fix the situation, and are using the threat of a fine as a stick to make it happen.

The investigation was jumpstarted by the fact that Microsoft doesn’t publicly reveal what information it gathers on users, and doesn’t provide an option for turning off the diagnostic and telemetry data its Office software sends to the company to monitor how well the software is functioning and to identify any issues.

Only 22% of Americans Now Trust Facebook’s Handling of Personal Info

Facebook is the least trustworthy of all major tech companies when it comes to safeguarding user data, according to a new national poll conducted for Fortune, highlighting the major challenges the company faces following a series of recent privacy blunders. Only 22% of Americans said that they trust Facebook with their personal information, far less than Amazon (49%), Google (41%), Microsoft (40%), and Apple (39%).

In question after question, respondents ranked the company last in terms of leadership, ethics, trust, and image… Public mistrust extended to Zuckerberg, Facebook’s public face during its privacy crisis, who once said that Facebook has “a responsibility to protect your information. If we can’t, we don’t deserve it.” The company subsequently fell victim to a hack but continued operating as usual, including debuting a video-conferencing device intended to be used in people’s living rooms or kitchens, further extending Facebook’s reach into areas beyond personal computers and smartphones. Only 59% of respondents said they were “at least somewhat confident” in Zuckerberg’s leadership in the ethical use of data and privacy information, ranking him last among the tech CEOs surveyed…

As for Facebook, the social networking giant may have a difficult time regaining public trust because of its repeated problems. Consumers are more likely to forgive a company if they believe a problem was an aberration rather than a systemic failure by its leadership, Harris Poll CEO John Gerzema said.

The article concludes that “For now, the public isn’t in a forgiving mood when it comes to Facebook and Zuckerberg.”

Google Exposed Private Data of Hundreds of Thousands of Google+ Users and Then Opted Not To Disclose The Breach

Google exposed the private data of hundreds of thousands of users of the Google+ social network and then opted not to disclose the issue this past spring, in part because of fears that doing so would draw regulatory scrutiny and cause reputational damage. As part of its response to the incident, the Alphabet unit plans to announce a sweeping set of data privacy measures that include permanently shutting down all consumer functionality of Google+, the people said. The move effectively puts the final nail in the coffin of a product that was launched in 2011 to challenge Facebook and is widely seen as one of Google’s biggest failures.

A software glitch in the social site gave outside developers potential access to private Google+ profile data between 2015 and March 2018, when internal investigators discovered and fixed the issue, according to the documents and people briefed on the incident. A memo prepared by Google’s legal and policy staff, reviewed by the Journal and shared with senior executives, warned that disclosing the incident would likely trigger “immediate regulatory interest” and invite comparisons to Facebook’s leak of user information to data firm Cambridge Analytica.

Instagram is testing the ability to share your precise location history with Facebook

Just weeks after Instagram’s co-founders left the company, it has emerged that Instagram is testing a feature that would allow it to share your location data with Facebook, even when you’re not using the app.

Instagram is not the only service whose data Facebook has sought to share across its apps. Back in 2016 the company announced that it would be sharing user data between WhatsApp and Facebook in order to offer better friend suggestions. The practice was later halted in the European Union thanks to its GDPR legislation, although WhatsApp’s CEO and co-founder later left over data privacy concerns.

Facebook is also reportedly testing a map view to see friends’ locations, similar to what’s already offered by Snapchat. Instagram’s data sharing could provide additional data points to power this functionality, while providing Facebook with more data to better target its ads.

Proposed Toronto development from Google’s Sidewalk Labs sparks concerns over data

Heated streets will melt ice and snow on contact. Sensors will monitor traffic and protect pedestrians. Driverless shuttles will carry people to their doors.

A unit of Google’s parent company Alphabet is proposing to turn a rundown part of Toronto’s waterfront into what may be the most wired community in history — to “fundamentally redefine what urban life can be.”

Dan Doctoroff, the CEO of Sidewalk Labs, envisions features like pavement that lights up to warn pedestrians of approaching streetcars. Flexible heated enclosures — described as “raincoats” for buildings — will be deployed based on weather data during Toronto’s bitter winters. Robotic waste-sorting systems will detect when a garbage bin is full and remove it before raccoons descend.

“Those are great uses of data that can improve the quality of life of people,” he said. “That’s what we want to do.”

But some Canadians are rethinking the privacy implications of giving one of the most data-hungry companies on the planet the means to wire up everything from street lights to pavement.

The concerns have intensified following a series of privacy scandals at Facebook and Google. A recent Associated Press investigation found that many Google services on iPhones and Android devices store location-tracking data even if you use privacy settings that are supposed to turn them off.

Adam Vaughan, the federal lawmaker whose district includes the development, said debate about big data and urban infrastructure is coming to cities across the world and he would rather have Toronto at the forefront of discussion.

“Google is ahead of governments globally and locally. That’s a cause for concern but it’s also an opportunity,” Vaughan said.

GCHQ mass surveillance violated human rights, court rules

GCHQ’s methods in carrying out bulk interception of online communications violated privacy and failed to provide sufficient surveillance safeguards, the European court of human rights (ECHR) has ruled in a test case judgment.

But the court found that GCHQ’s regime for sharing sensitive digital intelligence with foreign governments was not illegal.

It is the first major challenge to the legality of UK intelligence agencies intercepting private communications in bulk, following Edward Snowden’s whistleblowing revelations. The long-awaited ruling is one of the most comprehensive assessments by the ECHR of the legality of the interception operations run by UK intelligence agencies.

The case was brought by a coalition of 14 human rights groups, privacy organisations and journalists, including Amnesty International, Liberty, Privacy International and Big Brother Watch. In a statement published on Amnesty’s website, Lucy Claridge, Amnesty International’s Strategic Litigation Director, said today’s ruling “represents a significant step forward in the protection of privacy and freedom of expression worldwide. It sends a strong message to the UK Government that its use of extensive surveillance powers is abusive and runs against the very principles that it claims to be defending.” She added: “This is particularly important because of the threat that Government surveillance poses to those who work in human rights and investigative journalism, people who often risk their own lives to speak out. Three years ago, this same case forced the UK Government to admit GCHQ had been spying on Amnesty — a clear sign that our work and the people we work alongside had been put at risk.”

The judges considered three aspects of digital surveillance: bulk interception of communications, intelligence sharing and obtaining of communications data from communications service providers. By a majority of five to two votes, the Strasbourg judges found that GCHQ’s bulk interception regime violated article 8 of the European convention on human rights, which guarantees privacy, because there were said to be insufficient safeguards, and rules governing the selection of “related communications data” were deemed to be inadequate.

Banks and Retailers Are Tracking How You Type, Swipe and Tap

When you’re browsing a website and the mouse cursor disappears, it might be a computer glitch — or it might be a deliberate test to find out who you are.

The way you press, scroll and type on a phone screen or keyboard can be as unique as your fingerprints or facial features. To fight fraud, a growing number of banks and merchants are tracking visitors’ physical movements as they use websites and apps.

The data collection is invisible to those being watched. Using sensors in your phone or code on websites, companies can gather thousands of data points, known as “behavioral biometrics.”

[Image: A phone’s touchscreen sensors can track where and how you swipe your device to help determine who you are.]

[Image: The angle at which you hold your device is one of the many biometric markers that can be measured.]

Behavioral monitoring software churns through thousands of elements to calculate a probability-based guess about whether a person is who they claim. Two major advances have fed its growing use: the availability of cheap computing power and the sophisticated array of sensors now built into most smartphones.
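
To make one of those “elements” concrete, here is a minimal, hypothetical sketch of a single behavioral signal, keystroke timing. Real products combine thousands of features with proper statistical models; the event format, enrolled profile and tolerance below are all invented for illustration.

```python
# Toy keystroke-dynamics check: dwell time (how long each key is held)
# and flight time (gap between consecutive keys), compared against an
# enrolled profile of a user's typical timings.
from statistics import mean

def keystroke_features(events):
    """events: list of (key, down_ms, up_ms) tuples in typing order."""
    dwell = [up - down for _, down, up in events]
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return mean(dwell), mean(flight)

def matches_profile(sample, profile, tolerance_ms=40):
    """Crude test: both timing averages within tolerance of the profile."""
    return all(abs(s - p) <= tolerance_ms for s, p in zip(sample, profile))

enrolled = (95.0, 125.0)  # invented averages learned from past sessions
session = [("p", 0, 90), ("a", 250, 340), ("s", 430, 530), ("s", 650, 745)]
print(matches_profile(keystroke_features(session), enrolled))  # True
```

A real system would not rely on a hard threshold like this; it would fold hundreds of such features into the probability-based guess described above.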

The system’s unobtrusiveness is part of its appeal, Mr. Hanley said. Traditional physical biometrics, like fingerprints or irises, require special scanning hardware for authentication. But behavioral traits can be captured in the background, without customers doing anything to sign up.

BioCatch occasionally tries to elicit a reaction. It can speed up the selection wheel you use to enter data like dates and times on your phone, or make your mouse cursor disappear for a fraction of a second.

“Everyone reacts a little differently to that,” said Frances Zelazny, BioCatch’s chief strategy and marketing officer. “Some people move the mouse side to side; some people move it up and down. Some bang on the keyboard.”

Because your reaction is so individual, it’s hard for a fraudulent user to fake. And because customers never know the monitoring technology is there, it doesn’t impose the kind of visible, and irritating, roadblocks that typically accompany security tests. You don’t need to press your thumb on your phone’s fingerprint reader or type in an authentication code.

[Image: Biometric software can also determine the pressure you tend to apply to your phone when you tap and type.]

“We don’t have to sit people down in a room and get them to type under perfect laboratory conditions,” said Neil Costigan, the chief executive of BehavioSec, a Palo Alto, Calif., company that makes software used by many Nordic banks. “You just watch them, silently, while they go about their normal account activities.”

Why the Facebook ‘scandal’ impacts you more than you think

It’s not just the data you choose to share.

By now we all know the story: Facebook allowed apps on its social media platform which enabled a shady outfit called Cambridge Analytica to scrape the profiles of 87 million users, in order to serve up targeted ads to benefit the Trump election campaign in 2016.  More than 300,000 Australian users of Facebook were caught up in the data harvesting.

But serving up ads in a foreign election campaign is not the whole story.  Facebook, and other companies involved in data mining, are invading our privacy and harming us economically and socially, in ways that are only just starting to become clear.

And it’s not just the data you choose to share. The information you post is not the whole story.  It’s only the tip of the iceberg of data that Facebook has collected about you.

Every time you go online you leave a trail of digital breadcrumbs.  Facebook has been busily sweeping up those breadcrumbs, and using them to categorise and profile you.  Facebook obviously knows when you click on a Facebook ‘like’ button; but also, unless a web developer has gone out of their way to find tools to block them (as we have done for our Salinger Privacy blog), Facebook knows every time you simply look at a website that has a Facebook ‘like’ button somewhere on it.
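
The mechanism is mundane: the ‘like’ button is a third-party resource, so merely rendering the page makes your browser call the tracker’s servers, sending along the page URL (the Referer header) and any previously set cookie. Below is a stripped-down, hypothetical sketch of the server side of any such tracker, written with Flask for illustration; it is not Facebook’s actual implementation.

```python
# Why an embedded button tracks you even if you never click it: every
# page that loads this resource makes the browser send the tracker's
# cookie plus the URL of the page being read.
import uuid
from flask import Flask, request, make_response

app = Flask(__name__)
browsing_log = {}  # user_id -> pages seen; a real tracker would persist this

@app.route("/like-button.js")
def like_button():
    user_id = request.cookies.get("tracker_id") or str(uuid.uuid4())
    page = request.headers.get("Referer", "unknown")  # the embedding site
    browsing_log.setdefault(user_id, []).append(page)

    resp = make_response("/* script that renders the button */")
    # A long-lived third-party cookie ties all these page views together.
    resp.set_cookie("tracker_id", user_id, max_age=60 * 60 * 24 * 365)
    return resp
```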

So if you only post or ‘like’ stories about inspirational mountain climbers and funny cat videos, but also do things online that you don’t share with your family, friends or work colleagues (like looking at stories about abortion or dealing with infidelity, Googling how to manage anxiety or erectile dysfunction, whingeing about your employer in a chatroom, or spending hours reviewing dating profiles, gambling or shopping obsessively for shoes)  — Facebook has you pegged anyway.

Plus, Facebook obtains data from other sources which know about your offline purchases, to build an even richer picture of who you really are.  And of course, Facebook may have access to your address book, your location history, the contents of your private messages, and depending on your brand of phone, possibly even a history of your phone calls and text messages.

All that information is used to draw inferences and assumptions about your preferences, and predict your likely behaviour.  The results are then used to categorise, profile and ultimately target you, in a process usually described as ‘online behavioural advertising’.

It’s not ‘just ads’

The objective of online behavioural advertising is to predict your purchasing interests and drive a purchase decision.  So far, the same as any other advertising.  But online, the implications for us as individuals are much greater.

Facebook’s promise to advertisers is that it can show their ad to exactly who the advertiser wants, and exclude everybody else.

However, by allowing exclusion, the platform also allows discrimination.  Facebook has been caught allowing advertisers to target — and exclude — people on the basis of their ‘ethnic affinity’, amongst other social, demographic, racial and religious characteristics.  So a landlord with an ad for rental housing could prevent people profiled as ‘single mothers’ from ever seeing their ad.  An employer could prevent people identifying as Jewish from seeing a job ad.  A bank could prevent people categorised as African Americans from seeing an ad for a home loan.
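
Mechanically, exclusion targeting is nothing more than a filter over profiled attributes, which is what makes it so easy to abuse. A deliberately simple, hypothetical sketch (the tags and function are invented, not any platform’s real API):

```python
# The same machinery that lets an advertiser "reach the right people"
# also lets them quietly exclude a protected group.
def build_audience(users, include_tags, exclude_tags):
    return [u for u in users
            if u["tags"] & include_tags and not u["tags"] & exclude_tags]

users = [
    {"id": 1, "tags": {"renter", "single_mother"}},
    {"id": 2, "tags": {"renter"}},
    {"id": 3, "tags": {"homeowner"}},
]

# A housing ad that people profiled as single mothers simply never see.
audience = build_audience(users, {"renter"}, {"single_mother"})
print([u["id"] for u in audience])  # [2]
```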

Existing patterns of social exclusion, economic inequality and discrimination are further entrenched by micro-targeted advertising, which is hidden from public view and regulatory scrutiny.

Predictive analytics can narrow or alter your life choices

Once we move beyond straight-up advertising and into predictive analytics, the impact on individual autonomy becomes more acute.  Big Data feeds machine learning, which finds patterns in the data, from which new rules (algorithms) are designed.  Algorithms predict how a person will behave, and suggest how they should be treated.
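
That pipeline (past behaviour in, fitted rule out, rule applied to new people) takes only a few lines to express. Here is a deliberately trivial sketch using scikit-learn; the scenario and features are invented:

```python
# Fit a rule from past behaviour, then score people who never see the rule.
from sklearn.linear_model import LogisticRegression

# Past customers: [age, sessions_per_week], and whether they churned.
X = [[23, 14], [45, 2], [31, 9], [52, 1], [29, 11], [60, 3]]
y = [0, 1, 0, 1, 0, 1]

model = LogisticRegression().fit(X, y)

# The score silently decides how a new person is treated: an offer
# withheld, a price raised, an ad shown or hidden.
print(model.predict_proba([[40, 4]])[0][1])  # estimated churn probability
```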

Algorithms can lead to price discrimination, like surge pricing based on Uber knowing how much phone battery life you have left.  Or market exclusion, like Woolworths only offering car insurance to customers it has decided are low risk, based on an assessment of the groceries they buy.

Banks have been predicting the risk of a borrower defaulting on a loan for decades, but now algorithms are also used to determine who to hire, predict when a customer is pregnant, and deliver targeted search results to influence how you vote.

Algorithms are also being used to predict which students are at risk of failure, which prisoners are at risk of re-offending, and who is at risk of suicide, with interventions launched accordingly.  However, even leaving aside the accuracy of those predictions, interventions are not necessarily well-intentioned.  It was revealed last year that Australian Facebook executives were touting to advertisers their ability to target psychologically vulnerable teenagers.

Automated decision-making diminishes our autonomy, by narrowing or altering our market and life choices, in ways that are not clear to us.  People already in a position of economic or social disadvantage face the additional challenge of trying to disprove or beat an invisible algorithm.

In a predictive and pre-emptive world, empathy, forgiveness, rehabilitation, redemption, individual dignity, autonomy and free will are programmed out of our society.

Fiddling with users’ privacy settings on Facebook won’t fix anything.  If we want our lives to be ruled by human values and individual dignity, instead of by machines fed on questionable data, we need robust, enforced and globally effective privacy laws.

A new European privacy law commences later this month.  The obligations include that businesses and governments must offer understandable explanations of how their algorithms work, and allow people to seek human review of automated decision-making.  This is a step in the right direction, which Australia, the US and the rest of the world should follow.
