Resources

FTC Should Probe Payroll Data Deals By Brokers Like Equifax

For decades, companies such as Equifax have acquired employee work histories and compensation data from employers to help lenders, landlords, hiring managers and other customers perform background checks of individuals. But these big databases are vulnerable to theft and error, and workers are sometimes surprised their records are included, according to privacy activists. Equifax said it follows all laws and welcomes additional voices in the industry. In the letter to the FTC, San Mateo, California-based startup Certree said that Equifax and Experian are providing financial incentives like a slice of their revenue to employers to gain exclusive access to payroll data. Equifax also has deals with payroll software vendors that help employers process paychecks. The letter describes the agreements as anticompetitive and potentially unlawful.

TikTok Tracks You Across the Web, Even If You Don’t Use the App

A Consumer Reports investigation finds that TikTok, one of the country’s most popular apps, is partnering with a growing number of other companies to hoover up data about people as they travel across the internet. That includes people who don’t have TikTok accounts. These companies embed tiny TikTok trackers called “pixels” in their websites. Then TikTok uses the information gathered by all those pixels to help the companies target ads at potential customers, and to measure how well their ads work. To look into TikTok’s use of online tracking, CR asked the security firm Disconnect to scan about 20,000 websites for the company’s pixels. In our list, we included the 1,000 most popular websites overall, as well as some of the biggest sites with domains ending in “.org,” “.edu,” and “.gov.” We wanted to look at those sites because they often deal with sensitive subjects. We found hundreds of organizations sharing data with TikTok.

If you go to the United Methodist Church’s main website, TikTok hears about it. Interested in joining Weight Watchers? TikTok finds that out, too. The Arizona Department of Economic Security tells TikTok when you view pages concerned with domestic violence or food assistance. Even Planned Parenthood uses the trackers, automatically notifying TikTok about every person who goes to its website, though it doesn’t share information from the pages where you can book an appointment. (None of those groups responded to requests for comment.) The number of TikTok trackers we saw was just a fraction of those we observed from Google and Meta. However, TikTok’s advertising business is exploding, and experts say the data collection will probably grow along with it.

After Disconnect researchers conducted a broad search for TikTok trackers, we asked them to take a close look at what kind of information was being shared by 15 specific websites. We focused on sites where we thought people would have a particular expectation of privacy, such as advocacy organizations and hospitals, along with retailers and other kinds of companies. Disconnect found that data being transmitted to TikTok can include your IP address, a unique ID number, what page you’re on, and what you’re clicking, typing, or searching for, depending on how the website has been set up. What does TikTok do with all that information? “Like other platforms, the data we receive from advertisers is used to improve the effectiveness of our advertising services,” says Melanie Bosselait, a TikTok spokesperson. The data “is not used to group individuals into particular interest categories for other advertisers to target.” If TikTok receives data about someone who doesn’t have a TikTok account, the company only uses that data for aggregated reports that they send to advertisers about their websites, she says. There’s no independent way for consumers or privacy researchers to verify such statements. But TikTok’s terms of service say its advertising customers aren’t allowed to send the company certain kinds of sensitive information, such as data about children, health conditions, or finances. “We continuously work with our partners to avoid inadvertent transmission of such data,” TikTok’s Bosselait says.
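The mechanics of a scan like the one CR commissioned are straightforward to sketch. The snippet below is a hypothetical simplification, not Disconnect's actual methodology; the `analytics.tiktok.com` hostname and `ttq.load(` call are assumptions about how the pixel's loader commonly appears in page HTML:

```python
import re

# Hypothetical, simplified tracker scan: search a page's HTML for signs of a
# TikTok pixel loader. The hostname and function name are assumptions, not
# details reported in the article.
TIKTOK_PIXEL_RE = re.compile(r"analytics\.tiktok\.com|ttq\.load\(", re.IGNORECASE)

def has_tiktok_pixel(html: str) -> bool:
    """Return True if the page's HTML appears to embed a TikTok pixel."""
    return bool(TIKTOK_PIXEL_RE.search(html))

# Toy example: a page that embeds a pixel loader script tag.
page = '<script src="https://analytics.tiktok.com/i18n/pixel/events.js"></script>'
print(has_tiktok_pixel(page))          # True
print(has_tiktok_pixel("<p>hi</p>"))   # False
```

A real crawl would fetch each site's HTML (and executed scripts) before running a check like this over thousands of domains.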

This law makes it illegal for companies to collect third-party data to profile you. But they do anyway.

When you purchase a product or service from a company, fill out an online form, or sign up for a newsletter, you might provide only the necessary data such as your name, email, delivery address and/or payment information.

That company may then turn to other retailers or data brokers to purchase or exchange extra data about you. This could include your age, family, health, habits and more.

This allows them to build a more detailed individual profile on you, which helps them predict your behaviour and more precisely target you with ads.

For almost ten years, there has been a law in Australia that makes this kind of data enrichment illegal if a company can “reasonably and practicably” request that information directly from the consumer. And at least one major data broker has asked the government to “remove” this law.

The burning question is: why is there not a single published case of this law being enforced against companies “enriching” customer data for profiling and targeting purposes?

Meta Sued For Violating Patient Privacy With Data Tracking Tool

Facebook’s parent company Meta and major US hospitals violated medical privacy laws with a tracking tool that sends health information to Facebook, two proposed class-action lawsuits allege. The lawsuits, filed in the Northern District of California in June and July, focus on the Meta Pixel tracking tool. The tool can be installed on websites to provide analytics on Facebook and Instagram ads. It also collects information about how people click around and input information into those websites.

An investigation by The Markup in early June found that 33 of the top 100 hospitals in the United States use the Meta Pixel on their websites. At seven hospitals, it was installed on password-protected patient portals. The investigation found that the tool was sending information about patient health conditions, doctor appointments, and medication allergies to Facebook.

Next in Google’s Quest for Consumer Dominance–Banking

The project, code-named Cache, is expected to launch next year with accounts run by Citigroup and a credit union at Stanford University, a tiny lender in Google’s backyard. Big tech companies see financial services as a way to get closer to users and glean valuable data. Apple introduced a credit card this summer. Amazon.com has talked to banks about offering checking accounts. Facebook is working on a digital currency it hopes will upend global payments. Their ambitions could challenge incumbent financial-services firms, which fear losing their primacy and customers. They are also likely to stoke a reaction in Washington, where regulators are already investigating whether large technology companies have too much clout.

The tie-ups between banking and technology have sometimes been fraught. Apple irked its credit-card partner, Goldman Sachs Group, by running ads that said the card was “designed by Apple, not a bank.” Major financial companies dropped out of Facebook’s crypto project after a regulatory backlash. Google’s approach seems designed to make allies, rather than enemies, in both camps. The financial institutions’ brands, not Google’s, will be front-and-center on the accounts, an executive told The Wall Street Journal. And Google will leave the financial plumbing and compliance to the banks — activities it couldn’t do without a license anyway.

Smart TVs Are Data-Collecting Machines, New Study Shows

A new study from Princeton University shows internet-connected TVs, which allow people to stream Netflix and Hulu, are loaded with data-hungry trackers. “If you use a device such as Roku and Amazon Fire TV, there are numerous companies that can build up a fairly comprehensive picture of what you’re watching,” Arvind Narayanan, associate professor of computer science at Princeton, wrote in an email to The Verge. “There’s very little oversight or awareness of their practices, including where that data is being sold.” From the report:
To understand how much surveillance is taking place on smart TVs, Narayanan and his co-author Hooman Mohajeri Moghaddam built a bot that automatically installed thousands of channels on their Roku and Amazon Fire TVs. It then mimicked human behavior by browsing and watching videos. As soon as it ran into an ad, it would track what data was being collected behind the scenes. Some of the information, like device type, city, and state, is hardly unique to one user. But other data, like the device serial number, Wi-Fi network, and advertising ID, could be used to pinpoint an individual. “This gives them a more complete picture of who you are,” said Moghaddam. He noted that some channels even sent unencrypted email addresses and video titles to the trackers.
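The last step of an audit like this can be sketched in a few lines: given request URLs captured from a channel, flag any query parameter whose value looks like an unencrypted email address sent in the clear. The tracker URL and field names below are invented for illustration:

```python
import re
from urllib.parse import urlparse, parse_qs

# Hypothetical final step of such an audit: inspect captured tracker requests
# for personal data leaked in the clear. The URL and parameter names here are
# fabricated examples, not endpoints named in the study.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def leaked_fields(url: str) -> list[str]:
    """Return query parameters whose values look like unencrypted email addresses."""
    params = parse_qs(urlparse(url).query)
    return [k for k, vals in params.items()
            if any(EMAIL_RE.search(v) for v in vals)]

captured = "http://tracker.example.com/collect?device=roku&email=jane%40example.com"
# parse_qs percent-decodes values, so the raw address is visible to the tracker.
print(leaked_fields(captured))  # ['email']
```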

In total, the study found trackers on 69 percent of Roku channels and 89 percent of Amazon Fire channels. “Some of these are well known, such as Google, while many others are relatively obscure companies that most of us have never heard of,” Narayanan said. Google’s ad service DoubleClick was found on 97 percent of Roku channels. “Like other publishers, smart TV app developers can use Google’s ad services to show ads against their content, and we’ve helped design industry guidelines for this that enable a privacy-safe experience for users,” a Google spokesperson said in a statement emailed to The Verge. “Depending on the user’s preferences, the developer may share data with Google that’s similar to data used for ads in mobile apps or on the web.”
“Better privacy controls would certainly help, but they are ultimately band-aids,” Narayanan said. “The business model of targeted advertising on TVs is incompatible with privacy, and we need to confront that reality. To maximize revenue, platforms based on ad targeting will likely turn to data mining and algorithmic personalization/persuasion to keep people glued to the screen as long as possible.”

Another study from Northeastern University and the Imperial College of London found that other smart-home devices are also collecting reams of data that is being sent to third parties like advertisers and major tech companies.

Cambridge Analytica Whistleblower: US Heading In ‘Same Direction As China’ With Online Privacy

“The United States is walking in the same direction as China, we’re just allowing private companies to monetize left, right and center,” Cambridge Analytica whistleblower Christopher Wylie told CNBC on Wednesday. “Just because it’s not the state doesn’t mean that there isn’t harmful impacts that could come if you have one or two large companies monitoring or tracking everything you do,” he said. CNBC reports:

Wylie, whose memoir came out this week, has become outspoken about the influence of social media companies due to the large amounts of data they collect. In March 2018, he exposed the Cambridge Analytica scandal that brought down his former employer and, 15 months later, resulted in the Federal Trade Commission fining Facebook $5 billion for mishandling user data. While Cambridge Analytica has since shut down, Wylie said the tactics it used could be deployed elsewhere, and that is why data privacy regulation needs to be dramatically enhanced.

“Even if the company has dissolved, the capabilities of the company haven’t,” he said. “My real concern is what happens if China becomes the next Cambridge Analytica, what happens if North Korea becomes the next Cambridge Analytica?” Wylie also said he believes that social media companies should, at a minimum, face regulation similar to water utilities or electrical companies — “certain industries that have become so important because of their vital importance to business and people’s lives and the nature of their scale.” In those cases, “we put in place rules that put consumers first,” he added. “You can still make a profit. You can still make money. But you have to consider the rights and safety of people.”

You’re very easy to track down, even when your data has been anonymized

The most common way public agencies protect our identities is anonymization. This involves stripping out obviously identifiable things such as names, phone numbers, email addresses, and so on. Data sets are also altered to be less precise, columns in spreadsheets are removed, and “noise” is introduced to the data. Privacy policies reassure us that this means there’s no risk we could be tracked down in the database. However, a new study in Nature Communications suggests this is far from the case. Researchers from Imperial College London and the University of Louvain have created a machine-learning model that estimates exactly how easy individuals are to reidentify from an anonymized data set. You can check your own score with the researchers’ online tool by entering your zip code, gender, and date of birth.

On average, in the U.S., using those three records, you could be correctly located in an “anonymized” database 81% of the time. Given 15 demographic attributes of someone living in Massachusetts, there’s a 99.98% chance you could find that person in any anonymized database. The tool was created by assembling a database of 210 different data sets from five sources, including the U.S. Census. The researchers fed this data into a machine-learning model, which learned which combinations are more nearly unique and which are less so, and then assigned the probability of correct identification.
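The intuition behind those numbers can be reproduced at toy scale: count how many records in a table are unique on just the three quasi-identifiers. All records below are fabricated for the example:

```python
from collections import Counter

# Toy illustration of the reidentification idea: even a small set of
# quasi-identifiers (ZIP code, gender, birth date) is unique for most people.
# All records are fabricated for this example.
people = [
    ("94107", "F", "1985-03-02"),
    ("94107", "M", "1985-03-02"),
    ("94107", "F", "1990-11-30"),
    ("10001", "F", "1985-03-02"),
    ("10001", "F", "1985-03-02"),  # only these two share all three attributes
    ("60601", "M", "1979-07-14"),
]

counts = Counter(people)
unique = sum(1 for p in people if counts[p] == 1)
print(f"{unique}/{len(people)} records are unique on (zip, gender, dob)")  # 4/6
```

At realistic population sizes the same counting exercise is what makes a 15-attribute profile effectively a fingerprint.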

Thanks To Facebook, Your Cellphone Company Is Watching You More Closely Than Ever

A confidential Facebook document reviewed by The Intercept shows that Facebook courts carriers, along with phone makers — some 100 different companies in 50 countries — by offering the use of even more surveillance data, pulled straight from your smartphone by Facebook itself.

Offered to select Facebook partners, the data includes not just technical information about Facebook members’ devices and use of Wi-Fi and cellular networks, but also their past locations, interests, and even their social groups. This data is sourced not just from the company’s main iOS and Android apps, but from Instagram and Messenger as well. The data has been used by Facebook partners to assess their standing against competitors, including customers lost to and won from them, but also for more controversial uses like racially targeted ads.

Some experts are particularly alarmed that Facebook has marketed the use of the information — and appears to have helped directly facilitate its use, along with other Facebook data — for the purpose of screening customers on the basis of likely creditworthiness. Such use could potentially run afoul of federal law, which tightly governs credit assessments. Facebook said it does not provide creditworthiness services and that the data it provides to cellphone carriers and makers does not go beyond what it was already collecting for other uses.

Most Facebook users don’t know that it records a list of their interests, new study finds

Seventy-four percent of Facebook users are unaware that Facebook records a list of their interests for ad-targeting purposes, according to a new study from the Pew Research Center.

Participants in the study were first pointed to Facebook’s ad preferences page, which lists out a person’s interests. Nearly 60 percent of participants said that Facebook’s lists of interests were very or somewhat accurate to their actual interests, and 51 percent said they were uncomfortable with Facebook creating the list.

Facebook has weathered serious questions about its collection of personal information in recent years. CEO Mark Zuckerberg testified before Congress last year acknowledging privacy concerns and touching upon the company’s collection of personal information. While Zuckerberg said Facebook users have complete control over the information they upload and the information Facebook uses to actively target ads at its users, it’s clear from the Pew study that most people are not aware of Facebook’s collection tactics.

The Pew study also demonstrates that, while Facebook offers a number of transparency and data control tools, most users are not aware of where they should be looking. Even when the relevant information is located, there are often multiple steps to go through to delete assigned interests.

Facebook Filed A Patent To Predict Your Household’s Demographics Based On Family Photos

Facebook has submitted a patent application for technology that would predict who your family and other household members are, based on images and captions posted to Facebook, as well as your device information, like shared IP addresses. The application, titled “Predicting household demographics based on image data,” was originally filed May 10, 2017, and made public today.

The system Facebook proposes in its patent application would use facial recognition and learning models trained to understand text to help Facebook better understand whom you live with and interact with most. The technology described in the patent looks for clues in your profile pictures on Facebook and Instagram, as well as photos of you that you or your friends post.

It would note the people identified in a photo, and how frequently the people are included in your pictures. Then, it would assess information from comments on the photos, captions, or tags (#family, #mom, #kids) — anything that indicates whether someone is a husband, daughter, cousin, etc. — to predict what your family/household actually looks like. According to the patent application, Facebook’s prediction models would also analyze “messaging history, past tagging history, [and] web browsing history” to see if multiple people share IP addresses (a unique identifier for every internet network).

Only 22% of Americans Now Trust Facebook’s Handling of Personal Info

Facebook is the least trustworthy of all major tech companies when it comes to safeguarding user data, according to a new national poll conducted for Fortune, highlighting the major challenges the company faces following a series of recent privacy blunders. Only 22% of Americans said that they trust Facebook with their personal information, far less than Amazon (49%), Google (41%), Microsoft (40%), and Apple (39%).

In question after question, respondents ranked the company last in terms of leadership, ethics, trust, and image… Public mistrust extended to Zuckerberg, Facebook’s public face during its privacy crisis, who once said that Facebook has “a responsibility to protect your information. If we can’t, we don’t deserve it.” The company subsequently fell victim to a hack but continued operating as usual, including debuting a video-conferencing device intended to be used in people’s living rooms or kitchens, further extending Facebook’s reach beyond personal computers and smartphones. Only 59% of respondents said they were “at least somewhat confident” in Zuckerberg’s leadership in the ethical use of data and privacy information, ranking him last among four other tech CEOs…

As for Facebook, the social networking giant may have a difficult time regaining public trust because of its repeated problems. Consumers are more likely to forgive a company if they believe a problem was an aberration rather than a systemic failure by its leadership, Harris Poll CEO John Gerzema said.

The article concludes that “For now, the public isn’t in a forgiving mood when it comes to Facebook and Zuckerberg.”

EU Ruling: Self-Driving Car Data Will Be Copyrighted By the Manufacturer

Yesterday, at a routine vote on regulations for self-driving cars, members of the European People’s Party voted down a clause that would protect a vehicle’s telemetry so that it couldn’t become someone’s property. The clause affirmed that “data generated by autonomous transport are automatically generated and are by nature not creative, thus making copyright protection or the right on data-bases inapplicable.” Boing Boing reports:

This is data that we will need to evaluate the safety of autonomous vehicles, to fine-tune their performance, to ensure that they are working as the manufacturer claims — data that will not be public domain (as copyright law dictates), but will instead be someone’s exclusive purview, to release or withhold as they see fit. Who will own this data? It’s unlikely that it will be the owners of the vehicles.

It’s already the case that most auto manufacturers use license agreements and DRM to lock up your car so that you can’t fix it yourself or take it to an independent service center. The aggregated data from millions of self-driving cars across the EU isn’t just useful to public safety analysts, consumer rights advocates, security researchers and reviewers (who would benefit from this data living in the public domain) — it is also a potential gold-mine for car manufacturers who could sell it to insurers, market researchers and other deep-pocketed corporate interests who can profit by hiding that data from the public who generate it and who must share their cities and streets with high-speed killer robots.

Instagram is testing the ability to share your precise location history with Facebook

Just weeks after its co-founders left the company, Instagram is testing a feature that would allow it to share your location data with Facebook, even when you’re not using the app.

Instagram is not the only service whose data Facebook has sought to pool across its platforms. Back in 2016, the company announced that it would share user data between WhatsApp and Facebook in order to offer better friend suggestions. The practice was later halted in the European Union under its GDPR legislation, although WhatsApp’s CEO and co-founder later left over data privacy concerns.

Facebook is also reportedly testing a map view to see friends’ locations, similar to what’s already offered by Snapchat. Instagram’s data sharing could provide additional data points to power this functionality, while providing Facebook with more data to better target its ads.

Proposed Toronto development from Google’s Sidewalk Labs sparks concerns over data

Heated streets will melt ice and snow on contact. Sensors will monitor traffic and protect pedestrians. Driverless shuttles will carry people to their doors.

A unit of Google’s parent company Alphabet is proposing to turn a rundown part of Toronto’s waterfront into what may be the most wired community in history — to “fundamentally redefine what urban life can be.”

Dan Doctoroff, the CEO of Sidewalk Labs, envisions features like pavement that lights up to warn pedestrians of approaching streetcars. Flexible heated enclosures — described as “raincoats” for buildings — will be deployed based on weather data during Toronto’s bitter winters. Robotic waste-sorting systems will detect when a garbage bin is full and remove it before raccoons descend.

“Those are great uses of data that can improve the quality of life of people,” he said. “That’s what we want to do.”

But some Canadians are rethinking the privacy implications of giving one of the most data-hungry companies on the planet the means to wire up everything from street lights to pavement.

The concerns have intensified following a series of privacy scandals at Facebook and Google. A recent Associated Press investigation found that many Google services on iPhones and Android devices store location-tracking data even if you use privacy settings that are supposed to turn them off.

Adam Vaughan, the federal lawmaker whose district includes the development, said debate about big data and urban infrastructure is coming to cities across the world and he would rather have Toronto at the forefront of discussion.

“Google is ahead of governments globally and locally. That’s a cause for concern but it’s also an opportunity,” Vaughan said.

Facebook, Google, and Microsoft Use Design to Trick You Into Handing Over Your Data, New Report Warns

A study from the Norwegian Consumer Council dug into the underhanded tactics used by Microsoft, Facebook, and Google to collect user data. “The findings include privacy intrusive default settings, misleading wording, giving users an illusion of control, hiding away privacy-friendly choices, take-it-or-leave-it choices, and choice architectures where choosing the privacy friendly option requires more effort for the users,” states the report, which includes images and examples of confusing design choices and strangely worded statements involving the collection and use of personal data.

Google makes opting out of personalized ads more of a chore than it needs to be and uses multiple pages of text, unclear design language, and, as described by the report, “hidden defaults” to push users toward the company’s desired action. “If the user tried to turn the setting off, a popup window appeared explaining what happens if Ads Personalization is turned off, and asked users to reaffirm their choice,” the report explained. “There was no explanation about the possible benefits of turning off Ads Personalization, or negative sides of leaving it turned on.” Those who wish to completely avoid personalized ads must traverse multiple menus, making that “I agree” option seem like the lesser of two evils.

In Windows 10, if a user wants to opt out of “tailored experiences with diagnostic data,” they have to click a dimmed lightbulb, while the symbol for opting in is a brightly shining bulb, says the report.

Another example has to do with Facebook. The social media site makes the “Agree and continue” option much more appealing and less intimidating than the grey “Manage Data Settings” option. The report says the company-suggested option is the easiest to use. “This ‘easy road’ consisted of four clicks to get through the process, which entailed accepting personalized ads from third parties and the use of face recognition. In contrast, users who wanted to limit data collection and use had to go through 13 clicks.”

Facebook gave firms broad access to data on users, friends

Facebook reportedly formed data-sharing partnerships with dozens of device makers, including Apple and Samsung, giving them access to information on users, as well as on users’ friends.

The New York Times revealed the extent of the partnerships on Sunday, shedding new light on the social media giant’s behavior related to customer data following a scandal involving the political consulting firm Cambridge Analytica.

The Times found that the company made at least 60 such deals over the past decade, many of which are still in effect, allowing the other companies access to personal data of Facebook users and their friends.

The partnerships may have also violated a 2011 Federal Trade Commission (FTC) consent decree, according to the Times, which Facebook officials denied.

The report comes as Facebook is under scrutiny for its handling of private data after it was revealed that Cambridge Analytica accessed millions of users’ private information.

The partnerships allowed companies like Apple, Blackberry and Amazon to offer users Facebook features, like the ability to post photos, directly from a device without using the Facebook app.

The Times found that the partnerships allowed outside companies to access personal user data like relationship status, religious and political affiliations, work history and birthdays, as well as the information of users’ Facebook friends, even if the friends had blocked Facebook from sharing their information with third parties.

Facebook officials told the Times in interviews that the data-sharing partnerships were different from app developers’ access to Facebook users, and that the device makers are considered “extensions” of the social network.

But security experts and former Facebook engineers expressed concerns that the partnerships offered companies practically unfettered access to hundreds of thousands of Facebook users without their knowledge.

“It’s like having door locks installed, only to find out that the locksmith also gave keys to all of his friends so they can come in and rifle through your stuff without having to ask you for permission,” said Ashkan Soltani, a former FTC chief technologist, according to the Times.

Facebook began ending the partnerships in recent months, but the Times reported that many are still in effect.

How the “Math Men” Overthrew the “Mad Men”

Once, Mad Men ruled advertising. They’ve now been eclipsed by Math Men — the engineers and data scientists whose province is machines, algorithms, pureed data, and artificial intelligence. Yet Math Men are beleaguered, as Mark Zuckerberg demonstrated when he humbled himself before Congress, in April. Math Men’s adoration of data — coupled with their truculence and an arrogant conviction that their ‘science’ is nearly flawless — has aroused government anger, much as Microsoft did two decades ago.

The power of Math Men is awesome. Google and Facebook each has a market value exceeding the combined value of the six largest advertising and marketing holding companies. Together, they claim six out of every ten dollars spent on digital advertising, and nine out of ten new digital ad dollars. They have become dominant in a global advertising and marketing business estimated at up to two trillion dollars annually. Facebook alone generates more ad dollars than all of America’s newspapers, and Google has twice the ad revenues of Facebook.

Why the Facebook ‘scandal’ impacts you more than you think

It’s not just the data you choose to share.

By now we all know the story: Facebook allowed apps on its social media platform which enabled a shady outfit called Cambridge Analytica to scrape the profiles of 87 million users, in order to serve up targeted ads to benefit the Trump election campaign in 2016.  More than 300,000 Australian users of Facebook were caught up in the data harvesting.

But serving up ads in a foreign election campaign is not the whole story.  Facebook, and other companies involved in data mining, are invading our privacy and harming us economically and socially, in ways that are only just starting to become clear.

And it’s not just the data you choose to share. The information you post is not the whole story.  It’s only the tip of the iceberg of data that Facebook has collected about you.

Every time you go online you leave a trail of digital breadcrumbs.  Facebook has been busily sweeping up those breadcrumbs, and using them to categorise and profile you.  Facebook obviously knows when you click on a Facebook ‘like’ button; but also, unless a web developer has gone out of their way to find tools to block them (as we have done for our Salinger Privacy blog), Facebook knows every time you simply look at a website that has a Facebook ‘like’ button somewhere on it.

So if you only post or ‘like’ stories about inspirational mountain climbers and funny cat videos, but also do things online that you don’t share with your family, friends or work colleagues (like looking at stories about abortion or dealing with infidelity, Googling how to manage anxiety or erectile dysfunction, whingeing about your employer in a chatroom, or spending hours reviewing dating profiles, gambling or shopping obsessively for shoes)  — Facebook has you pegged anyway.

Plus, Facebook obtains data from other sources which know about your offline purchases, to build an even richer picture of who you really are.  And of course, Facebook may have access to your address book, your location history, the contents of your private messages, and depending on your brand of phone, possibly even a history of your phone calls and text messages.

All that information is used to draw inferences and assumptions about your preferences, and predict your likely behaviour.  The results are then used to categorise, profile and ultimately target you, in a process usually described as ‘online behavioural advertising’.

It’s not ‘just ads’

The objective of online behavioural advertising is to predict your purchasing interests and drive a purchase decision.  So far, the same as any other advertising.  But online, the implications for us as individuals are much greater.

Facebook’s promise to advertisers is that it can show their ad to exactly who the advertiser wants, and exclude everybody else.

However, by allowing exclusion, the platform also allows discrimination.  Facebook has been caught allowing advertisers to target — and exclude — people on the basis of their ‘ethnic affinity’, amongst other social, demographic, racial and religious characteristics.  So a landlord with an ad for rental housing could prevent people profiled as ‘single mothers’ from ever seeing their ad.  An employer could prevent people identifying as Jewish from seeing a job ad.  A bank could prevent people categorised as African Americans from seeing an ad for a home loan.

Existing patterns of social exclusion, economic inequality and discrimination are further entrenched by micro-targeted advertising, which is hidden from public view and regulatory scrutiny.

Data boy. Mark Zuckerberg testifies in Washington. Image: Getty.

Predictive analytics can narrow or alter your life choices

Once we move beyond straight-up advertising and into predictive analytics, the impact on individual autonomy becomes more acute.  Big Data feeds machine learning, which finds patterns in the data, from which new rules (algorithms) are designed.  Algorithms predict how a person will behave, and suggest how they should be treated.

Algorithms can lead to price discrimination, like surge pricing based on Uber knowing how much phone battery life you have left.  Or market exclusion, like Woolworths only offering car insurance to customers it has decided are low risk, based on an assessment of the groceries they buy.

Banks have been predicting the risk of a borrower defaulting on a loan for decades, but now algorithms are also used to determine who to hire, predict when a customer is pregnant, and deliver targeted search results to influence how you vote.

Algorithms are also being used to predict which students are at risk of failure, which prisoners are at risk of re-offending, and who is at risk of suicide, and then to launch interventions accordingly.  However, even leaving aside the accuracy of those predictions, interventions are not necessarily well-intentioned.  It was revealed last year that Australian Facebook executives were touting to advertisers their ability to target psychologically vulnerable teenagers.

Automated decision-making diminishes our autonomy, by narrowing or altering our market and life choices, in ways that are not clear to us.  People already in a position of economic or social disadvantage face the additional challenge of trying to disprove or beat an invisible algorithm.

In a predictive and pre-emptive world, empathy, forgiveness, rehabilitation, redemption, individual dignity, autonomy and free will are programmed out of our society.

Fiddling with users’ privacy settings on Facebook won’t fix anything.  If we want our lives to be ruled by human values and individual dignity, instead of by machines fed on questionable data, we need robust, enforced and globally effective privacy laws.

A new European privacy law, the General Data Protection Regulation, commences later this month.  Among its obligations, businesses and governments must offer understandable explanations of how their algorithms work, and allow people to seek human review of automated decision-making.  This is a step in the right direction, which Australia, the US and the rest of the world should follow.


Google hasn’t stopped reading your e-mails

If you’re a Gmail user, your emails likely aren’t as private as you’d think. Google reads each and every one (even if you never do), scanning your painfully long email chains and vacation responders in order to collect more data on you. Google uses the data gleaned from your messages to inform a whole host of other products and services, NBC News reported Thursday.

Though Google announced that it would stop using consumer Gmail content for ad personalization last July, the language permitting it to do so is still included in its current privacy policy, and it without a doubt still scans users’ emails for other purposes. Aaron Stein, a Google spokesperson, told NBC that Google also automatically extracts keyword data from users’ Gmail accounts, which is then fed into machine learning programs and other products within the Google family. Stein told NBC that Google also “may analyze [email] content to customize search results, better detect spam and malware,” a practice the company first announced back in 2012.

“We collect information about the services that you use and how you use them…” says Google’s privacy policy. “This includes information like your usage data and preferences, Gmail messages, G+ profile, photos, videos, browsing history, map searches, docs, or other Google-hosted content. Our automated systems analyze this information as it is sent and received and when it is stored.”

While Google doesn’t sell this information to third parties, it has used it to power its own advertising network and inform search results, among other things. And this is far from a closely guarded secret. The company has included disclosures relating to these practices in its privacy policy since at least 2012: “When you share information with us, for example by creating a Google Account, we can make those services even better – to show you more relevant search results and ads…,” says Google’s March 2012 privacy policy.
