Why the Facebook ‘scandal’ impacts you more than you think
It’s not just the data you choose to share.
By now we all know the story: Facebook allowed apps on its social media platform that enabled a shady outfit called Cambridge Analytica to scrape the profiles of 87 million users and serve up targeted ads to benefit the Trump election campaign in 2016. More than 300,000 Australian Facebook users were caught up in the data harvesting.
But serving up ads in a foreign election campaign is not the whole story. Facebook, and other companies involved in data mining, are invading our privacy and harming us economically and socially, in ways that are only just starting to become clear.
And it’s not just the data you choose to share. The information you post is only the tip of the iceberg of data that Facebook has collected about you.
Every time you go online you leave a trail of digital breadcrumbs. Facebook has been busily sweeping up those breadcrumbs, and using them to categorise and profile you. Facebook obviously knows when you click on a Facebook ‘like’ button; but also, unless a web developer has gone out of their way to find tools to block them (as we have done for our Salinger Privacy blog), Facebook knows every time you simply look at a website that has a Facebook ‘like’ button somewhere on it.
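To see why merely viewing a page is enough, consider a deliberately simplified sketch, in Python, of what a third party serving an embedded widget could log. The cookie IDs, page addresses and function names here are hypothetical, not Facebook’s actual systems; the point is that every page embedding the widget makes your browser request it, and that request carries both a persistent cookie and the address of the page you were reading.

```python
# Hypothetical sketch: how a third party serving an embedded widget could
# reconstruct a browsing trail from nothing but widget requests.
from collections import defaultdict
from datetime import datetime, timezone

# Maps a tracking-cookie ID to the pages seen alongside that cookie.
browsing_profiles = defaultdict(list)

def log_widget_request(cookie_id, referring_page):
    """Record that the browser identified by cookie_id loaded the widget
    while viewing referring_page (sent as the HTTP referrer)."""
    timestamp = datetime.now(timezone.utc).isoformat()
    browsing_profiles[cookie_id].append((timestamp, referring_page))

# Simulated requests: the user never clicked 'like' on any of these pages,
# yet each visit is observed simply because the pages embed the widget.
log_widget_request("cookie-abc123", "https://example-news.site/managing-anxiety")
log_widget_request("cookie-abc123", "https://example-shop.site/shoes")

print(browsing_profiles["cookie-abc123"])
```

Blocking tools work by stopping the browser from making that request in the first place, which is why a site has to go out of its way to protect its readers.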
So if you only post or ‘like’ stories about inspirational mountain climbers and funny cat videos, but also do things online that you don’t share with your family, friends or work colleagues (like looking at stories about abortion or dealing with infidelity, Googling how to manage anxiety or erectile dysfunction, whingeing about your employer in a chatroom, or spending hours reviewing dating profiles, gambling or shopping obsessively for shoes) — Facebook has you pegged anyway.
Plus, Facebook obtains data from other sources which know about your offline purchases, to build an even richer picture of who you really are. And of course, Facebook may have access to your address book, your location history, the contents of your private messages, and depending on your brand of phone, possibly even a history of your phone calls and text messages.
All that information is used to draw inferences about your preferences and to predict your likely behaviour. The results are then used to categorise, profile and ultimately target you, in a process usually described as ‘online behavioural advertising’.
It’s not ‘just ads’
The objective of online behavioural advertising is to predict your purchasing interests and drive a purchase decision. So far, the same as any other advertising. But online, the implications for us as individuals are much greater.
Facebook’s promise to advertisers is that it can show their ad to exactly who the advertiser wants, and exclude everybody else.
However, by allowing exclusion, the platform also allows discrimination. Facebook has been caught allowing advertisers to target — and exclude — people on the basis of their ‘ethnic affinity’, amongst other social, demographic, racial and religious characteristics. So a landlord with an ad for rental housing could prevent people profiled as ‘single mothers’ from ever seeing their ad. An employer could prevent people identifying as Jewish from seeing a job ad. A bank could prevent people categorised as African Americans from seeing an ad for a home loan.
Existing patterns of social exclusion, economic inequality and discrimination are further entrenched by micro-targeted advertising, which is hidden from public view and regulatory scrutiny.
Predictive analytics can narrow or alter your life choices
Once we move beyond straight-up advertising and into predictive analytics, the impact on individual autonomy becomes more acute. Big Data feeds machine learning, which finds patterns in the data; from those patterns, new rules (algorithms) are derived. Algorithms predict how a person will behave, and suggest how they should be treated.
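To make that pipeline concrete, here is a toy sketch in Python. The behavioural features, the training data and the 0.5 cut-off are all invented for illustration, not drawn from any real system; the point is only to show how observed behaviour becomes a learned rule, and how that rule quietly becomes a decision about how someone is treated.

```python
# Toy illustration: behavioural data in, learned rule out, decision applied.
from sklearn.linear_model import LogisticRegression

# Each row is one hypothetical person:
# [late-night browsing hours per week, visits to payday-loan sites, years at current address]
past_customers = [
    [12, 4, 1],
    [2, 0, 9],
    [8, 2, 3],
    [1, 0, 12],
    [10, 5, 2],
    [3, 1, 7],
]
outcomes = [1, 0, 1, 0, 1, 0]  # 1 = defaulted on a loan, 0 = repaid (entirely made up)

model = LogisticRegression()
model.fit(past_customers, outcomes)

# A new applicant is scored by the learned rule...
applicant = [[9, 3, 2]]
risk = model.predict_proba(applicant)[0][1]

# ...and the score silently narrows their options.
if risk > 0.5:
    print(f"Predicted default risk {risk:.0%}: this person never sees the loan offer.")
else:
    print(f"Predicted default risk {risk:.0%}: show the loan offer.")
```

The person being scored never sees the features, the weights or the threshold, which is precisely the opacity the rest of this section is concerned with.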
Algorithms can lead to price discrimination, like surge pricing based on Uber knowing how much phone battery life you have left. Or market exclusion, like Woolworths only offering car insurance to customers it has decided are low risk, based on an assessment of the groceries they buy.
Banks have been predicting the risk of a borrower defaulting on a loan for decades, but now algorithms are also used to determine who to hire, predict when a customer is pregnant, and deliver targeted search results to influence how you vote.
Algorithms are also being used to predict which students are at risk of failing, which prisoners are at risk of re-offending, and who is at risk of suicide, and then to launch interventions accordingly. However, even leaving aside the accuracy of those predictions, the interventions are not necessarily well-intentioned. It was revealed last year that Australian Facebook executives were touting to advertisers their ability to target psychologically vulnerable teenagers.
Automated decision-making diminishes our autonomy, by narrowing or altering our market and life choices, in ways that are not clear to us. People already in a position of economic or social disadvantage face the additional challenge of trying to disprove or beat an invisible algorithm.
In a predictive and pre-emptive world, empathy, forgiveness, rehabilitation, redemption, individual dignity, autonomy and free will are programmed out of our society.
Fiddling with users’ privacy settings on Facebook won’t fix anything. If we want our lives to be ruled by human values and individual dignity, instead of by machines fed on questionable data, we need robust, enforced and globally effective privacy laws.
A new European privacy law commences later this month. Among its obligations, businesses and governments must offer understandable explanations of how their algorithms work, and must allow people to seek human review of automated decision-making. This is a step in the right direction, which Australia, the US and the rest of the world should follow.