Resources

Google’s Secret ‘Project Nightingale’ Gathers Personal Health Data on Millions of Americans

Google is teaming with one of the country’s largest health-care systems on a secret project to collect and crunch the detailed personal health information of millions of Americans across 21 states, WSJ reported Monday, citing people familiar with the matter and internal documents.

The initiative, code-named “Project Nightingale,” appears to be the largest in a series of efforts by Silicon Valley giants to gain access to personal health data and establish a toehold in the massive health-care industry. Amazon.com, Apple and Microsoft are also aggressively pushing into health care, though they haven’t yet struck deals of this scope. Google launched the effort last year with St. Louis-based Ascension, the country’s second-largest health system. The data involved in Project Nightingale includes lab results, doctor diagnoses and hospitalization records, among other categories, and amounts to a complete health history, including patient names and dates of birth.

Neither patients nor doctors have been notified. At least 150 Google employees already have access to much of the data on tens of millions of patients, according to a person familiar with the matter and the documents.

Google in this case is using the data in part to design new software, underpinned by advanced artificial intelligence and machine learning.

Google appears to be sharing information within Project Nightingale more broadly than in its other forays into health-care data. In September, Google announced a 10-year deal with the Mayo Clinic to store the hospital system’s genetic, medical and financial records.

Google co-founder Larry Page, in a 2014 interview, suggested that patients worried about the privacy of their medical records were too cautious. Mr. Page said: “We’re not really thinking about the tremendous good that can come from people sharing information with the right people in the right ways.”

Facebook gave firms broad access to data on users, friends

Facebook reportedly formed data-sharing partnerships with dozens of device makers, including Apple and Samsung, giving them access to information on users, as well as on users’ friends.

The New York Times revealed the extent of the partnerships on Sunday, shedding new light on the social media giant’s behavior related to customer data following a scandal involving the political consulting firm Cambridge Analytica.

The Times found that the company made at least 60 such deals over the past decade, many of which are still in effect, allowing the other companies access to personal data of Facebook users and their friends.

The partnerships may have also violated a 2011 Federal Trade Commission (FTC) consent decree, according to the Times, which Facebook officials denied.

The report comes as Facebook is under scrutiny for its handling of private data after it was revealed that Cambridge Analytica accessed millions of users’ private information.

The partnerships allowed companies like Apple, BlackBerry and Amazon to offer users Facebook features, like the ability to post photos, directly from a device without using the Facebook app.

The Times found that the partnerships allowed outside companies to access personal user data like relationship status, religious and political affiliations, work history and birthdays, as well as the information of users’ Facebook friends, even if the friends had blocked Facebook from sharing their information with third parties.

Facebook officials told the Times in interviews that the data-sharing partnerships were different from app developers’ access to Facebook users, and that the device makers are considered “extensions” of the social network.

But security experts and former Facebook engineers expressed concerns that the partnerships offered companies practically unfettered access to the data of hundreds of thousands of Facebook users without their knowledge.

“It’s like having door locks installed, only to find out that the locksmith also gave keys to all of his friends so they can come in and rifle through your stuff without having to ask you for permission,” said Ashkan Soltani, a former FTC chief technologist, according to the Times.

Facebook began ending the partnerships in recent months, but the Times reported that many are still in effect.

Google plan for data-driven ‘smart city’ sparks privacy, democracy concerns

In Toronto, Canada, city officials are negotiating a project that will give a section of the city’s waterfront to the US tech giant Google. Under the arrangement, Google affiliate Sidewalk Labs will build and run a high-tech “radical mixed-use” site called Quayside. This “smart city” plan involves creating a neighbourhood “from the internet up”, powered and served by data, with sensors monitoring everything from air quality to pedestrian traffic, even the flushing of toilets. Amenities like garbage disposal and goods delivery are to be coordinated and driven by AI and robotics.

The proposed parcel of land isn’t huge, but it’s not insubstantial either – it covers about half a square kilometre, and there are already suggestions it could be extended.

For Eric Schmidt, executive chairman of Alphabet — the parent company of both Google and Sidewalk Labs — it’s the culmination of a long-held ambition.

“Give us a city and put us in charge,” he once famously declared.

Following the Facebook/Cambridge Analytica scandal, some, like Dr Jathan Sadowski at the University of Sydney, worry about the implications of putting a private tech company in charge of both urban development and urban life.

“What’s in it for them? It’s data,” he says. “It allows them to get really massive amounts of granular data about urban life and urban environments.”

“You’ll have a city that’s based on, or built around, proprietary platforms, data harvesting, corporate control.”

Why the Facebook ‘scandal’ impacts you more than you think

It’s not just the data you choose to share.

By now we all know the story: Facebook allowed apps on its social media platform which enabled a shady outfit called Cambridge Analytica to scrape the profiles of 87 million users, in order to serve up targeted ads to benefit the Trump election campaign in 2016.  More than 300,000 Australian users of Facebook were caught up in the data harvesting.

But serving up ads in a foreign election campaign is not the whole story.  Facebook, and other companies involved in data mining, are invading our privacy and harming us economically and socially, in ways that are only just starting to become clear.

And it’s not just the data you choose to share. The information you post is only the tip of the iceberg of the data Facebook has collected about you.

Every time you go online you leave a trail of digital breadcrumbs.  Facebook has been busily sweeping up those breadcrumbs, and using them to categorise and profile you.  Facebook obviously knows when you click on a Facebook ‘like’ button; but also, unless a web developer has gone out of their way to find tools to block them (as we have done for our Salinger Privacy blog), Facebook knows every time you simply look at a website that has a Facebook ‘like’ button somewhere on it.
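The mechanics of that tracking are simple enough to sketch. Below is a toy simulation (the class and method names are invented for illustration, not Facebook’s actual systems): because every page that embeds the widget makes the visitor’s browser fetch it from the widget’s host, each page view delivers the visitor’s cookie ID and the embedding page’s URL, and a browsing profile accumulates without a single click.

```python
from collections import defaultdict

class ThirdPartyTracker:
    """Toy model of a third-party widget host (a 'like' button server).
    Loading any page that embeds the widget makes the browser request
    it, and that request carries the visitor's cookie ID plus the URL
    of the embedding page."""

    def __init__(self):
        self.profiles = defaultdict(list)  # cookie_id -> pages visited

    def widget_loaded(self, cookie_id, page_url):
        # Fired when a browser merely renders the embedded widget;
        # no click on the button is required.
        self.profiles[cookie_id].append(page_url)

tracker = ThirdPartyTracker()

# The visitor never clicks 'like'; loading each page is enough.
for page in ["news.example/managing-anxiety",
             "shop.example/shoes",
             "forum.example/dating-profiles"]:
    tracker.widget_loaded(cookie_id="abc123", page_url=page)

# The widget host now holds a browsing history for cookie 'abc123'.
```

The same mechanism covers tracking pixels and embedded scripts: any third-party resource on a page reports the visit to whoever serves it.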

So if you only post or ‘like’ stories about inspirational mountain climbers and funny cat videos, but also do things online that you don’t share with your family, friends or work colleagues (like looking at stories about abortion or dealing with infidelity, Googling how to manage anxiety or erectile dysfunction, whingeing about your employer in a chatroom, or spending hours reviewing dating profiles, gambling or shopping obsessively for shoes)  — Facebook has you pegged anyway.

Plus, Facebook obtains data from other sources which know about your offline purchases, to build an even richer picture of who you really are.  And of course, Facebook may have access to your address book, your location history, the contents of your private messages, and depending on your brand of phone, possibly even a history of your phone calls and text messages.

All that information is used to draw inferences and assumptions about your preferences, and predict your likely behaviour.  The results are then used to categorise, profile and ultimately target you, in a process usually described as ‘online behavioural advertising’.

It’s not ‘just ads’

The objective of online behavioural advertising is to predict your purchasing interests and drive a purchase decision.  So far, the same as any other advertising.  But online, the implications for us as individuals are much greater.

Facebook’s promise to advertisers is that it can show their ad to exactly who the advertiser wants, and exclude everybody else.

However, by allowing exclusion, the platform also allows discrimination.  Facebook has been caught allowing advertisers to target — and exclude — people on the basis of their ‘ethnic affinity’, amongst other social, demographic, racial and religious characteristics.  So a landlord with an ad for rental housing could prevent people profiled as ‘single mothers’ from ever seeing their ad.  An employer could prevent people identifying as Jewish from seeing a job ad.  A bank could prevent people categorised as African Americans from seeing an ad for a home loan.
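To see why exclusion and discrimination are the same mechanism, consider a minimal, hypothetical targeting filter (invented names, not Facebook’s actual ad API): once a protected attribute is available as a targetable trait, excluding people on it is a one-parameter change.

```python
def eligible_audience(users, include_traits=None, exclude_traits=None):
    """Toy ad-targeting filter: a user sees the ad only if they match
    every 'include' trait and none of the 'exclude' traits. When a
    protected attribute is a trait, exclusion on it is discrimination
    by construction."""
    include_traits = set(include_traits or [])
    exclude_traits = set(exclude_traits or [])
    return [u["name"] for u in users
            if include_traits <= u["traits"]
            and not (exclude_traits & u["traits"])]

users = [
    {"name": "A", "traits": {"renter", "single_parent"}},
    {"name": "B", "traits": {"renter"}},
]

# A housing ad configured to exclude profiled 'single parents':
audience = eligible_audience(users, include_traits=["renter"],
                             exclude_traits=["single_parent"])
# -> ['B']  (user A never sees the listing, and never knows why)
```

The excluded person sees nothing at all, which is exactly why this form of discrimination is hidden from public view.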

Existing patterns of social exclusion, economic inequality and discrimination are further entrenched by micro-targeted advertising, which is hidden from public view and regulatory scrutiny.


Predictive analytics can narrow or alter your life choices

Once we move beyond straight-up advertising and into predictive analytics, the impact on individual autonomy becomes more acute.  Big Data feeds machine learning, which finds patterns in the data, from which new rules (algorithms) are designed.  Algorithms predict how a person will behave, and suggest how they should be treated.
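That loop (data in, pattern out, rule applied to the next person) can be shown in miniature. The sketch below trains a tiny logistic model on invented repayment data and then lets its score decide what a new applicant is offered; every feature, label and threshold here is illustrative only, and real credit models are far more elaborate.

```python
import math

# Toy "predict, then treat" pipeline. All data is invented for
# illustration; only the shape of the loop matters.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit(X, y, lr=0.5, epochs=1000):
    """Plain stochastic-gradient logistic regression."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            err = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# Past borrowers: (late_payments, years_at_address) -> defaulted?
X = [(0, 10), (1, 8), (4, 1), (5, 2), (0, 6), (3, 1)]
y = [0, 0, 1, 1, 0, 1]
w, b = fit(X, y)

def risk(person):
    """Predicted probability of default for a new applicant."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, person)) + b)

# The learned score silently decides how the next person is treated.
applicant = (1, 7)
offer = "standard loan" if risk(applicant) < 0.5 else "declined"
```

The applicant never sees the score, the features, or the training data, which is the autonomy problem the rest of this section describes.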

Algorithms can lead to price discrimination, like surge pricing based on Uber knowing how much phone battery life you have left.  Or market exclusion, like Woolworths only offering car insurance to customers it has decided are low risk, based on an assessment of the groceries they buy.

Banks have been predicting the risk of a borrower defaulting on a loan for decades, but now algorithms are also used to determine who to hire, predict when a customer is pregnant, and deliver targeted search results to influence how you vote.

Algorithms are also being used to predict the students at risk of failure, the prisoners at risk of re-offending, and the people at risk of suicide, and to launch interventions accordingly.  However, even leaving aside the accuracy of those predictions, interventions are not necessarily well-intentioned.  It was revealed last year that Australian Facebook executives were touting to advertisers their ability to target psychologically vulnerable teenagers.

Automated decision-making diminishes our autonomy, by narrowing or altering our market and life choices, in ways that are not clear to us.  People already in a position of economic or social disadvantage face the additional challenge of trying to disprove or beat an invisible algorithm.

In a predictive and pre-emptive world, empathy, forgiveness, rehabilitation, redemption, individual dignity, autonomy and free will are programmed out of our society.

Fiddling with users’ privacy settings on Facebook won’t fix anything.  If we want our lives to be ruled by human values and individual dignity, instead of by machines fed on questionable data, we need robust, enforced and globally effective privacy laws.

A new European privacy law, the General Data Protection Regulation (GDPR), commences later this month.  Its obligations include that businesses and governments must offer understandable explanations of how their algorithms work, and allow people to seek human review of automated decision-making.  This is a step in the right direction, which Australia, the US and the rest of the world should follow.

Stare Into The Lights My Pretties

Surveillance tools for “War on Terror” used on indigenous activists

A shadowy international mercenary and security firm known as TigerSwan targeted the movement opposed to the Dakota Access Pipeline with military-style counterterrorism measures, collaborating closely with police in at least five states, according to internal documents obtained by The Intercept. The documents provide the first detailed picture of how TigerSwan, which originated as a U.S. military and State Department contractor helping to execute the global war on terror, worked at the behest of its client Energy Transfer Partners, the company building the Dakota Access Pipeline, to respond to the indigenous-led movement that sought to stop the project.

TigerSwan spearheaded a multifaceted private security operation characterized by sweeping and invasive surveillance of protesters.

Activists on the ground were tracked by a Dakota Access helicopter that provided live video coverage to their observers in police agencies, according to an October 12 email thread that included officers from the FBI, DHS, BIA, state, and local police. In one email, National Security Intelligence Specialist Terry Van Horn of the U.S. attorney’s office acknowledged his direct access to the helicopter video feed, which was tracking protesters’ movements during a demonstration. “Watching a live feed from DAPL Helicopter, pending arrival at site(s),” he wrote. Cecily Fong, a spokesperson for law enforcement throughout the protests, acknowledged that an operations center in Bismarck had access to the feed, stating in an email to The Intercept that “the video was provided as a courtesy so we had eyes on the situation.”


CIA’s “Siren Servers” can predict social uprisings several days before they happen

“The CIA claims to be able to predict social unrest days before it happens thanks to powerful supercomputers dubbed Siren Servers by the father of Virtual Reality, Jaron Lanier.

CIA Deputy Director for Digital Innovation Andrew Hallman announced that the agency has beefed-up its “anticipatory intelligence” through the use of deep learning and machine learning servers that can process an incredible amount of data.

“We have, in some instances, been able to improve our forecast to the point of being able to anticipate the development of social unrest and societal instability some I think as near as three to five days out,” said Hallman on Tuesday at the Federal Tech event, Fedstival.

This Minority Report-type technology has been viewed skeptically by policymakers as the data crunching hasn’t been perfected, and if policy were to be enacted based on faulty data, the results could be disastrous. Iraq WMDs?”

“I called it a siren server because there’s no plan to be evil. A siren server seduces you,” said Lanier.

In the case of the CIA, however, whether the agency is being innocently seduced or is actively planning to use this data for its own self-sustaining benefit, one can only speculate.

Given the Intelligence Community’s track record for toppling governments, infiltrating the mainstream media, MK Ultra, and scanning hundreds of millions of private emails, that speculation becomes easier to justify.

Welcome to the age of the chatbot. Soon you’ll be lonelier than ever.

“Very soon – by the end of the year, probably – you won’t need to be on Facebook in order to talk to your friends on Facebook.

Your Facebook avatar will dutifully wish people happy birthday, congratulate them on the new job, accept invitations, and send them jolly texts punctuated by your favourite emojis – all while you’re asleep, or shopping, or undergoing major surgery.

Using IBM’s powerful Watson natural language processing platform, The Chat Bot Club learns to imitate its user. It learns texting styles, favourite phrases, preferred emojis, repeated opinions – and then it learns to respond in kind, across an ever-broadening range of subjects.”

“Humans aren’t perfect, and AI is a bit the same way,” he said. “AI is not significantly smarter than the people who program it. So AI is always going to encounter circumstances that it was not prepared for.”

Marketers hungry for data from wearable devices

“In the future the data procured from smartwatches might be much more valuable than what is currently available from laptop and mobile users,” reports David Curry, raising the possibility that stores might someday use your past Google searches to alert you when they’re selling a cheaper product.

What Stores See When They Spy on Shoppers