Resources

Next in Google’s Quest for Consumer Dominance–Banking

The project, code-named Cache, is expected to launch next year with accounts run by Citigroup and a credit union at Stanford University, a tiny lender in Google’s backyard. Big tech companies see financial services as a way to get closer to users and glean valuable data. Apple introduced a credit card this summer. Amazon.com has talked to banks about offering checking accounts. Facebook is working on a digital currency it hopes will upend global payments. Their ambitions could challenge incumbent financial-services firms, which fear losing their primacy and customers. They are also likely to stoke a reaction in Washington, where regulators are already investigating whether large technology companies have too much clout.

The tie-ups between banking and technology have sometimes been fraught. Apple irked its credit-card partner, Goldman Sachs Group, by running ads that said the card was “designed by Apple, not a bank.” Major financial companies dropped out of Facebook’s crypto project after a regulatory backlash. Google’s approach seems designed to make allies, rather than enemies, in both camps. The financial institutions’ brands, not Google’s, will be front-and-center on the accounts, an executive told The Wall Street Journal. And Google will leave the financial plumbing and compliance to the banks — activities it couldn’t do without a license anyway.

Google’s Secret ‘Project Nightingale’ Gathers Personal Health Data on Millions of Americans

Google is teaming with one of the country’s largest health-care systems on a secret project to collect and crunch the detailed personal health information of millions of Americans across 21 states, WSJ reported Monday, citing people familiar with the matter and internal documents.

The initiative, code-named “Project Nightingale,” appears to be the largest in a series of efforts by Silicon Valley giants to gain access to personal health data and establish a toehold in the massive health-care industry. Amazon.com, Apple and Microsoft are also aggressively pushing into health care, though they haven’t yet struck deals of this scope. Google launched the effort last year with St. Louis-based Ascension, the country’s second-largest health system. The data involved in Project Nightingale includes lab results, doctor diagnoses and hospitalization records, among other categories, and amounts to a complete health history, including patient names and dates of birth.

Neither patients nor doctors have been notified. At least 150 Google employees already have access to much of the data on tens of millions of patients, according to a person familiar with the matter and the documents.

Google in this case is using the data in part to design new software, underpinned by advanced artificial intelligence and machine learning.

Google appears to be sharing information within Project Nightingale more broadly than in its other forays into health-care data. In September, Google announced a 10-year deal with the Mayo Clinic to store the hospital system’s genetic, medical and financial records.

Google co-founder Larry Page, in a 2014 interview, suggested that patients worried about the privacy of their medical records were too cautious. Mr. Page said: “We’re not really thinking about the tremendous good that can come from people sharing information with the right people in the right ways.”

With 5G, you won’t just be watching video. It’ll be watching you, too

What happens when movies can direct themselves? Remember the last time you felt terrified during a horror movie? Take that moment, and all the suspense leading up to it, and imagine it individually calibrated for you. It’s a terror plot morphing in real time, adjusting the story to your level of attention to lull you into a comfort zone before unleashing a personally timed jumpscare.

Or maybe being scared witless isn’t your idea of fun. Think of a rom-com that stops itself from going off the rails when it sees you rolling your eyes. Or maybe it tweaks the eye color of that character finally finding true love so it’s closer to your own, a personalized subtlety to make the love-struck protagonist more relatable.

You can thank (or curse) 5G for that.

When most people think of 5G, they’re envisioning an ultra-fast, high-bandwidth connection that lets you download seasons of your favorite shows in minutes. But 5G’s possibilities go way beyond that, potentially reinventing how we watch video, and opening up a mess of privacy uncertainties.

“Right now you make a video much the same way you did for TV,” Dan Garraway, co-founder of interactive video company Wirewax, said in an interview this month. “The dramatic thing is when you turn video into a two-way conversation. Your audience is touching and interacting inside the experience and making things happen as a result.” The personalized horror flick or tailored rom-com? They would hinge on interactive video layers that use emotional analysis based on your phone’s front-facing camera to adjust what you’re watching in real time. You may think it’s far-fetched, but one of the key traits of 5G is an ultra-responsive connection with virtually no lag, meaning the network and systems would be fast enough to react to your physical responses.
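
To make this concrete, here is a minimal sketch (in Python) of the kind of feedback loop Garraway describes: sample the front-facing camera, estimate the viewer’s emotional state, and queue the next segment accordingly. The emotion classifier is a random stub standing in for a real affective-computing model, and every name below is illustrative rather than any product’s actual API; the low round-trip time is the part 5G’s “virtually no lag” connection would supply.

```python
# Sketch of an emotion-driven playback loop. The classifier and camera capture
# are placeholder stubs; a real system would run a trained model on live frames.
import random
import time

SEGMENTS = {
    "bored":   "raise_the_stakes.mp4",
    "anxious": "hold_the_suspense.mp4",
    "relaxed": "trigger_jump_scare.mp4",   # lull them into a comfort zone, then strike
}

def capture_frame():
    """Stand-in for grabbing a frame from the phone's front-facing camera."""
    return object()

def estimate_emotion(frame) -> str:
    """Stand-in for a model mapping a camera frame to an emotion label."""
    return random.choice(list(SEGMENTS))

def adaptive_playback(rounds: int = 5, round_trip_ms: float = 10.0):
    # A round trip in the tens of milliseconds is what makes reacting to
    # involuntary cues feel instantaneous to the viewer.
    for _ in range(rounds):
        mood = estimate_emotion(capture_frame())
        print(f"viewer seems {mood} -> queueing {SEGMENTS[mood]}")
        time.sleep(round_trip_ms / 1000)

if __name__ == "__main__":
    adaptive_playback()
```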

Before you cast a skeptical eye at 5G, consider how the last explosion of mobile connectivity, from 3G to 4G LTE, changed how we consumed video. Being able to watch — and in YouTube’s case, upload — video on a mobile device reimagined how we watch TV and the types of programming that are big business. A decade ago, when Netflix was about two years into its transition to streaming from DVD mailings, its annual revenue was $1.4 billion. This year it’s on track for more than 10 times that ($15.806 billion).

5G’s mobility can bring video experiences to new locations. Spare gives an example straight out of Minority Report: entering a Gap retail store and being greeted by name. But taken further, the store could develop a three-dimensional video concierge for your phone — a pseudo-hologram that helps you find what you’re looking for. With 5G’s ability to make virtual and augmented reality more accessible, you could get a snapshot of what an outfit might look like on you without having to try it on.

Where things get crazy — and creepy — is imagining how 5G enables video to react to your involuntary cues and all the data you unconsciously provide. A show could mimic the weather or time of day to more closely match the atmosphere in real life.

For all the eye-popping possibilities, 5G unleashes a tangle of privacy questions. 5G could leverage every piece of visual information a phone can see through its front and back cameras in real time. This level of visual imagery collection could pave the way for video interaction to happen completely automatically.

It’s also a potential privacy nightmare. But the lure of billions of dollars has already encouraged companies to make privacy compromises.

And that may make it feel like your personalized horror show is already here.

Facebook is not alone in making everyone’s data available for whatever purpose

Most companies that trade in the sale and manipulation of personal information are private and beholden to few rules other than the bare minimum of those they establish themselves, to avoid scrutiny and be able to say “we told you so” if an angry individual ever comes calling. Even if a consumer is aware their data is being passed around, their ability to control it once it’s out there is virtually nil: if they request it be deleted from one data broker, it can simply be bought back from one of several gigantic firms that have been storing it, too.

It is an open question what the actual effect of Cambridge Analytica’s work on the presidential election was, and what the outcome might have been without its influence (most references to its “psychographic” profiling in The New York Times’ story are appropriately skeptical). It would be hard to say without a lot more cooperation from the company and Facebook itself. But the leak by one of its researchers is an incredibly rare glimpse into a fairly routine process in an industry that is so staggeringly enormous and influential, not just in politics but in our personal, day-to-day existence, that it’s difficult to believe that it is anything but a mistake. But it isn’t, and wasn’t, a mistake. It is how things happened and are still happening every day.

Why the Facebook ‘scandal’ impacts you more than you think

It’s not just the data you choose to share.

By now we all know the story: Facebook allowed apps on its social media platform which enabled a shady outfit called Cambridge Analytica to scrape the profiles of 87 million users, in order to serve up targeted ads to benefit the Trump election campaign in 2016.  More than 300,000 Australian users of Facebook were caught up in the data harvesting.

But serving up ads in a foreign election campaign is not the whole story.  Facebook, and other companies involved in data mining, are invading our privacy and harming us economically and socially, in ways that are only just starting to become clear.

And it’s not just the data you choose to share. The information you post is only the tip of the iceberg of data that Facebook has collected about you.

Every time you go online you leave a trail of digital breadcrumbs.  Facebook has been busily sweeping up those breadcrumbs, and using them to categorise and profile you.  Facebook obviously knows when you click on a Facebook ‘like’ button; but also, unless a web developer has gone out of their way to find tools to block them (as we have done for our Salinger Privacy blog), Facebook knows every time you simply look at a website that has a Facebook ‘like’ button somewhere on it.
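
For readers wondering how merely viewing a page can reveal the visit, the sketch below shows roughly what a browser transmits when it renders an embedded third-party “like” widget: the address of the page you are reading travels along with any cookie that identifies your account. The endpoint and parameter names are illustrative stand-ins, not Facebook’s actual ones.

```python
# What loading an embedded social widget effectively sends to the widget's
# provider. Domain, path and parameter names are invented for illustration.
import urllib.parse

def widget_request(page_url: str, user_cookie: str) -> dict:
    return {
        "url": ("https://social-network.example/plugins/like?href="
                + urllib.parse.quote(page_url, safe="")),
        "headers": {
            "Referer": page_url,    # the page you are currently reading
            "Cookie": user_cookie,  # ties the visit to your account, if you have one
        },
    }

print(widget_request("https://example.org/articles/dealing-with-anxiety",
                     "session=abc123"))
```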

So if you only post or ‘like’ stories about inspirational mountain climbers and funny cat videos, but also do things online that you don’t share with your family, friends or work colleagues (like looking at stories about abortion or dealing with infidelity, Googling how to manage anxiety or erectile dysfunction, whingeing about your employer in a chatroom, or spending hours reviewing dating profiles, gambling or shopping obsessively for shoes)  — Facebook has you pegged anyway.

Plus, Facebook obtains data from other sources which know about your offline purchases, to build an even richer picture of who you really are.  And of course, Facebook may have access to your address book, your location history, the contents of your private messages, and depending on your brand of phone, possibly even a history of your phone calls and text messages.

All that information is used to draw inferences and assumptions about your preferences, and predict your likely behaviour.  The results are then used to categorise, profile and ultimately target you, in a process usually described as ‘online behavioural advertising’.

It’s not ‘just ads’

The objective of online behavioural advertising is to predict your purchasing interests and drive a purchase decision.  So far, the same as any other advertising.  But online, the implications for us as individuals are much greater.

Facebook’s promise to advertisers is that it can show their ad to exactly who the advertiser wants, and exclude everybody else.

However, by allowing exclusion, the platform also allows discrimination.  Facebook has been caught allowing advertisers to target — and exclude — people on the basis of their ‘ethnic affinity’, amongst other social, demographic, racial and religious characteristics.  So a landlord with an ad for rental housing could prevent people profiled as ‘single mothers’ from ever seeing their ad.  An employer could prevent people identifying as Jewish from seeing a job ad.  A bank could prevent people categorised as African Americans from seeing an ad for a home loan.

Existing patterns of social exclusion, economic inequality and discrimination are further entrenched by micro-targeted advertising, which is hidden from public view and regulatory scrutiny.

Predictive analytics can narrow or alter your life choices

Once we move beyond straight-up advertising and into predictive analytics, the impact on individual autonomy becomes more acute.  Big Data feeds machine learning, which finds patterns in the data, from which new rules (algorithms) are designed.  Algorithms predict how a person will behave, and suggest how they should be treated.
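
As a toy illustration of that pipeline, the sketch below trains a classifier on a handful of behavioural features and then scores a new person; the features, labels and data are invented, and scikit-learn’s logistic regression stands in for the far richer models real systems use.

```python
# Behavioural data in, a learned rule out, a score that decides how someone is
# treated. Everything here is fabricated example data.
from sklearn.linear_model import LogisticRegression

# columns: [late-night browsing hours, gambling-site visits, abandoned baskets]
X = [
    [0.5,  0, 1],
    [4.0, 12, 9],
    [1.0,  1, 2],
    [3.5,  8, 7],
]
y = [0, 1, 0, 1]  # 1 = flagged "high risk" in the historical records

model = LogisticRegression().fit(X, y)

new_person = [[2.8, 6, 5]]
risk = model.predict_proba(new_person)[0][1]
print(f"predicted risk score: {risk:.2f}")  # this score then drives the treatment
```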

Algorithms can lead to price discrimination, like surge pricing based on Uber knowing how much phone battery life you have left.  Or market exclusion, like Woolworths only offering car insurance to customers it has decided are low risk, based on an assessment of the groceries they buy.

Banks have been predicting the risk of a borrower defaulting on a loan for decades, but now algorithms are also used to determine who to hire, predict when a customer is pregnant, and deliver targeted search results to influence how you vote.

Algorithms are also being used to predict which students are at risk of failure, which prisoners are at risk of re-offending, and who is at risk of suicide, and then to launch interventions accordingly.  However, even leaving aside the accuracy of those predictions, interventions are not necessarily well-intentioned.  It was revealed last year that Australian Facebook executives were touting to advertisers their ability to target psychologically vulnerable teenagers.

Automated decision-making diminishes our autonomy, by narrowing or altering our market and life choices, in ways that are not clear to us.  People already in a position of economic or social disadvantage face the additional challenge of trying to disprove or beat an invisible algorithm.

In a predictive and pre-emptive world, empathy, forgiveness, rehabilitation, redemption, individual dignity, autonomy and free will are programmed out of our society.

Fiddling with users’ privacy settings on Facebook won’t fix anything.  If we want our lives to be ruled by human values and individual dignity, instead of by machines fed on questionable data, we need robust, enforced and globally effective privacy laws.

A new European privacy law commences later this month.  The obligations include that businesses and governments must offer understandable explanations of how their algorithms work, and allow people to seek human review of automated decision-making.  This is a step in the right direction, which Australia, the US and the rest of the world should follow.

‘Living laboratories’: the Dutch cities amassing data on oblivious residents

Stratumseind in Eindhoven is one of the busiest nightlife streets in the Netherlands. On a Saturday night, bars are packed, music blares through the street, laughter and drunken shouting bounces off the walls. As the night progresses, the ground becomes littered with empty shot bottles, energy drink cans, cigarette butts and broken glass.

It’s no surprise that the place is also known for its frequent fights. To change that image, Stratumseind has become one of the “smartest” streets in the Netherlands. Lamp-posts have been fitted with wifi-trackers, cameras and 64 microphones that can detect aggressive behaviour and alert police officers to altercations. There has been a failed experiment to change light intensity to alter the mood. The next plan, starting this spring, is to diffuse the smell of oranges to calm people down. The aim? To make Stratumseind a safer place.

All the while, data is being collected and stored. “Visitors do not realise they are entering a living laboratory,” says Maša Galic, a researcher on privacy in the public space for the Tilburg Institute of Law, Technology and Society. Since the data on Stratumseind is used to profile, nudge or actively target people, this “smart city” experiment is subject to privacy law. According to the Dutch Personal Data Protection Act, people should be notified in advance of data collection and the purpose should be specified – but in Stratumseind, as in many other “smart cities”, this is not the case.

Peter van de Crommert is involved at Stratumseind as project manager with the Dutch Institute for Technology, Safety and Security. He says visitors do not have to worry about their privacy: the data is about crowds, not individuals. “We often get that comment – ‘Big brother is watching you’ – but I prefer to say, ‘Big brother is helping you’. We want safe nightlife, but not a soldier on every street corner.”

When we think of smart cities, we usually think of big projects: Songdo in South Korea, the IBM control centre in Rio de Janeiro or the hundreds of new smart cities in India. More recent developments include Toronto, where Google will build an entirely new smart neighbourhood, and Arizona, where Bill Gates plans to build his own smart city. But the reality of the smart city is that it has stretched into the everyday fabric of urban life – particularly so in the Netherlands.

In the eastern city of Enschede, city traffic sensors pick up your phone’s wifi signal even if you are not connected to the wifi network. The trackers register your MAC address, the unique network card number in a smartphone. The city council wants to know how often people visit Enschede, and what their routes and preferred spots are. Dave Borghuis, an Enschede resident, was not impressed and filed an official complaint. “I don’t think it’s okay for the municipality to track its citizens in this way,” he said. “If you walk around the city, you have to be able to imagine yourself unwatched.”

Enschede is enthusiastic about the advantages of the smart city. The municipality says it is saving €36m in infrastructure investments by launching a smart traffic app that rewards people for good behaviour like cycling, walking and using public transport. (Ironically, one of the rewards is a free day of private parking.) Only those who mine the small print will discover that the app creates “personal mobility profiles”, and that the collected personal data belongs to the company Mobidot.

‘Targeted supervision’ in Utrecht

Companies are getting away with it in part because it involves new applications of data. In Silicon Valley, they call it “permissionless innovation”: the belief that technological progress should not be stifled by public regulations. For the same reason, they can be secretive about what data is collected in a public space and what it is used for. Often the cities themselves don’t know.

Utrecht has become a tangle of individual pilots and projects, with no central overview of how many cameras and sensors exist, nor what they do. In 2014, the city invested €80m in data-driven management, which launched 80 projects. Utrecht now has a burglary predictor, a social media monitoring room, and smart bins and smart streetlights with sensors (although the city couldn’t say where these are located). It has scanner cars that dispense parking tickets, with the added bonus, according to the scanner cars’ own privacy regulation, of detecting residents with a municipal tax debt. But when I asked the city to respond to a series of questions on just 22 of the smart projects, it could only answer for five of them, referring me to private companies for the rest of the answers.

The city also keeps track of the number of young people hanging out in the streets, their age group, whether they know each other, the atmosphere and whether or not they cause a nuisance. Special enforcement officers keep track of this information through mobile devices. It calls this process “targeted and innovative supervision”. Other council documents mention the prediction of school drop-outs, the prediction of poverty and the monitoring of “the health of certain groups” with the aim of “intervening faster”.

Like many cities, Utrecht argues that it acts in accordance with privacy laws because it anonymises or pseudonymises data (assigning it a number instead of a name or address). But pseudonymised personal data is still personal data. “The process is not irreversible if the source file is stored,” says Mireille Hildebrandt, professor of ICT and Law at Radboud University. “Moreover, if you build personal profiles and act on them, it is still a violation of privacy and such profiling can – unintentionally – lead to discrimination.” She points to Utrecht’s plan to register the race and health data of prostitutes, which came in for heavy criticism from the Dutch Data Protection Authority.
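
Hildebrandt’s point about reversibility is easy to demonstrate. If identifiers are pseudonymised by hashing, a common approach, anyone who still holds the source file of original identifiers can simply re-hash them and match. A small sketch with made-up MAC addresses:

```python
# Pseudonymisation by hashing is reversible for whoever keeps the originals.
import hashlib

def pseudonymise(mac: str) -> str:
    return hashlib.sha256(mac.encode()).hexdigest()[:12]

# The "anonymous" mobility log a sensor operator might share
mobility_log = {pseudonymise("AA:BB:CC:DD:EE:01"): ["bar", "pharmacy", "casino"]}

# The source file the operator (or a partner) kept
source_file = ["AA:BB:CC:DD:EE:01", "AA:BB:CC:DD:EE:02"]

# Re-identification: re-hash every known address and look it up in the log
for mac in source_file:
    if pseudonymise(mac) in mobility_log:
        print(mac, "->", mobility_log[pseudonymise(mac)])
```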

Another unanswered question regards who owns data that is collected in a public space. Arjen Hof is director of Civity, a company that builds data platforms for governments. “Public authorities are increasingly outsourcing tasks to private companies. Think of waste removal or street lighting,” he says. “But they do not realise that at the same time a lot of data is collected, and do not always make agreements about the ownership of data.”

‘A smart city is a privatised city’

Hof gives the example of CityTec, a company that manages 2,000 car parks, 30,000 traffic lights and 500,000 lamp-posts across the Netherlands. It refused to share with municipalities the data it was collecting through its lamp-post sensors. “Their argument was that, although the municipality is legally owner of the lamp-posts, CityTec is the economic owner and, for competitive reasons, did not want to make the data available,” Hof says. This was three years ago, but for a lot of companies it remains standard practice. Companies dictate the terms, and cities say they can’t share the contracts because they contain “competition-sensitive information”.

When I interviewed the technology writer Evgeny Morozov in October, he warned of cities becoming too dependent on private companies. “The culmination of the smart city is a privatised city,” he said. “A city in which you have to pay for previously free services.”

Morozov’s fear about public subsidies being used for private innovation is well illustrated in Assen, a city of 70,000 people in the north of the country. Assen built a fibre-optic network for super-fast internet in 2011, to which it connected 200 sensors that measure, among other things, the flow of cars. There was an experiment to steer people around traffic jams, even though traffic in the city is relatively light. The city also connected its traffic lights, parking garages and parking signs to this grid. The cost of €46m was split between Brussels, the national government, the province and the municipality. Companies such as the car navigation firm TomTom have used the sensor network to test new services.

The project, called Sensor City, filed for bankruptcy a year ago. Now the publicly funded fibre-optic network, sensors and all, will be sold to a still-unidentified private company. The municipality will have to strike a deal with the new owner about the use of its public traffic lights and parking signs.

Facebook silently enables facial recognition abilities for users outside EU and Canada

Facebook is now informing users around the world that it’s rolling out facial recognition features. In December, we reported the features would be coming to the platform; that rollout finally appears to have begun. It should be noted that users in the European Union and Canada will not be notified because laws restrict this type of activity in those areas.

With the new tools, you’ll be able to find photos that you’re in but haven’t been tagged in; they’ll help you protect yourself against strangers using your photo; and Facebook will be able to tell people with visual impairments who’s in their photos and videos. Facebook warns that this feature is enabled by default but can be switched off at any time; additionally, the firm says it may add new capabilities at any time.

While Facebook may want its users to “feel confident” uploading pictures online, it will likely give many other users the heebie-jeebies when they think of the colossal database of faces that Facebook has and what it could do with all that data. Even non-users should be cautious about which photos they appear in if they don’t want to be caught up in Facebook’s web of data.

How Do You Vote? 50 Million Google Images Give a Clue

What vehicle is most strongly associated with Republican voting districts? Extended-cab pickup trucks. For Democratic districts? Sedans.

Those conclusions may not be particularly surprising. After all, market researchers and political analysts have studied such things for decades.

But what is surprising is how researchers working on an ambitious project based at Stanford University reached those conclusions: by analyzing 50 million images and location data from Google Street View, the street-scene feature of the online giant’s mapping service.

For the first time, helped by recent advances in artificial intelligence, researchers are able to analyze large quantities of images, pulling out data that can be sorted and mined to predict things like income, political leanings and buying habits. In the Stanford study, computers collected details about cars in the millions of images they processed, including makes and models.

Identifying so many car images in such detail was a technical feat. But it was linking that new data set to public collections of socioeconomic and environmental information, and then tweaking the software to spot patterns and correlations, that makes the Stanford project part of what computer scientists see as the broader application of image data.

12 Days In Xinjiang — China’s Surveillance State

Urumqi, China – This city on China’s Central Asia frontier may be one of the most closely surveilled places on earth.

Security checkpoints with identification scanners guard the train station and roads in and out of town. Facial scanners track comings and goings at hotels, shopping malls and banks. Police use hand-held devices to search smartphones for encrypted chat apps, politically charged videos and other suspect content. To fill up with gas, drivers must first swipe their ID cards and stare into a camera.

China’s efforts to snuff out a violent separatist movement by some members of the predominantly Muslim Uighur ethnic group have turned the autonomous region of Xinjiang, of which Urumqi is the capital, into a laboratory for high-tech social controls that civil-liberties activists say the government wants to roll out across the country.

It is nearly impossible to move about the region without feeling the unrelenting gaze of the government. Citizens and visitors alike must run a daily gantlet of police checkpoints, surveillance cameras and machines scanning their ID cards, faces, eyeballs and sometimes entire bodies.

When fruit vendor Parhat Imin swiped his card at a telecommunications office this summer to pay an overdue phone bill, his photo popped up with an “X.” Since then, he says, every scan of his ID card sets off an alarm. He isn’t sure what it signifies, but figures he is on some kind of government watch list because he is a Uighur and has had intermittent run-ins with the police.

He says he is reluctant to travel for fear of being detained. “They blacklisted me,” he says. “I can’t go anywhere.”

All across China, authorities are rolling out new technology to keep watch over people and shape their behavior. Controls on expression have tightened under President Xi Jinping, and the state’s vast security web now includes high-tech equipment to monitor online activity and even snoop in smartphone messaging apps.

China’s government has been on high alert since a surge in deadly terrorist attacks around the country in 2014 that authorities blamed on Xinjiang-based militants inspired by extremist Islamic messages from abroad. Now officials are putting the world’s most state-of-the-art tools in the hands of a ramped-up security force to create a system of social control in Xinjiang—one that falls heaviest on Uighurs.

At a security exposition in October, an executive of Guangzhou-based CloudWalk Technology Co., which has sold facial-recognition algorithms to police and identity-verification systems to gas stations in Xinjiang, called the region the world’s most heavily guarded place. According to the executive, Jiang Jun, for every 100,000 people the police in Xinjiang want to monitor, they use the same amount of surveillance equipment that police in other parts of China would use to monitor millions.

Authorities in Xinjiang declined to respond to questions about surveillance. Top party officials from Xinjiang said at a Communist Party gathering in Beijing in October that “social stability and long-term security” were the local government’s bottom-line goals.

Chinese and foreign civil-liberty activists say the surveillance in this northwestern corner of China offers a preview of what is to come nationwide.

“They constantly take lessons from the high-pressure rule they apply in Xinjiang and implement them in the east,” says Zhu Shengwu, a Chinese human-rights lawyer who has worked on surveillance cases. “What happens in Xinjiang has bearing on the fate of all Chinese people.”

During an October road trip into Xinjiang along a modern highway, two Wall Street Journal reporters encountered a succession of checkpoints that turned the ride into a strange and tense journey.

At Xingxing Gorge, a windswept pass used centuries ago by merchants plying the Silk Road, police inspected incoming traffic and verified travelers’ identities. The Journal reporters were stopped, ordered out of their car and asked to explain the purpose of their visit. Drivers, mostly those who weren’t Han Chinese, were guided through electronic gateways that scanned their ID cards and faces.

Farther along, at the entrance to Hami, a city of a half-million, police had the Journal reporters wait in front of a bank of TV screens showing feeds from nearby surveillance cameras while recording their passport numbers.

Surveillance cameras loomed every few hundred feet along the road into town, blanketed street corners and kept watch on patrons of a small noodle shop near the main mosque. The proprietress, a member of the Muslim Hui minority, said the government ordered all restaurants in the area to install the devices earlier this year “to prevent terrorist attacks.”

Days later, as the Journal reporters were driving on a dirt road in Shanshan county after being ordered by officials to leave a nearby town, a police cruiser materialized seemingly from nowhere. It raced past, then skidded to a diagonal stop, kicking up a cloud of dust and blocking the reporters’ car. An SUV pulled up behind. A half-dozen police ordered the reporters out of the car and demanded their passports.

An officer explained that surveillance cameras had read the out-of-town license plates and sent out an alert. “We check every car that’s not from Xinjiang,” he said. The police then escorted the reporters to the highway.

At checkpoints further west, iris and body scanners are added to the security arsenal.

Darren Byler, an anthropology researcher at the University of Washington who spent two years in Xinjiang studying migration, says the closest contemporary parallel can be found in the West Bank and Gaza Strip, where the Israeli government has created a system of checkpoints and biometric surveillance to keep tabs on Palestinians.

In Erdaoqiao, the neighborhood where the fruit vendor Mr. Imin lives, small booths known as “convenience police stations,” marked by flashing lights atop a pole, appear every couple of hundred yards. The police stationed there offer water, cellphone charging and other services, while also taking in feeds from nearby surveillance cameras.

Young Uighur men are routinely pulled into the stations for phone checks, leading some to keep two devices—one for home use and another, with no sensitive content or apps, for going out, according to Uighur exiles.

Erdaoqiao, the heart of Uighur culture and commerce in Urumqi, is where ethnic riots started in 2009 that resulted in numerous deaths. The front entrance to Erdaoqiao Mosque is now closed, as are most entries to the International Grand Bazaar. Visitors funnel through a heavily guarded main gate. The faces and ID cards of Xinjiang residents are scanned. An array of cameras keeps watch.

After the riots, authorities showed up to shut down the shop Mr. Imin was running at the time, which sold clothing and religious items. When he protested, he says, they clubbed him on the back of the head, which has left him walking with a limp. They jailed him for six months for obstructing official business, he says. Other jail stints followed, including eight months for buying hashish.

The police in Urumqi didn’t respond to requests for comment.

Mr. Imin now sells fruit and freshly squeezed pomegranate juice from a cart. He worries that his flagged ID card will bring the police again. Recently remarried, he hasn’t dared visit his new wife’s family in southern Xinjiang.

Chinese rulers have struggled for two millennia to control Xinjiang, whose 23 million people are scattered over an expanse twice the size of Texas. Beijing sees it as a vital piece of President Xi’s trillion-dollar “Belt and Road” initiative to build infrastructure along the old Silk Road trade routes to Europe.

Last year, Mr. Xi installed a new Xinjiang party chief, Chen Quanguo, who previously handled ethnic strife in Tibet, another hot spot. Mr. Chen pioneered the convenience police stations in that region, partly in response to a string of self-immolations by monks protesting Chinese rule.

Under Mr. Chen, the police presence in Xinjiang has skyrocketed, based on data showing exponential increases in police-recruitment advertising. Local police departments last year began ordering cameras capable of creating three-dimensional face images as well as DNA sequencers and voice-pattern analysis systems, according to government procurement documents uncovered by Human Rights Watch and reviewed by the Journal.

During the first quarter of 2017, the government announced the equivalent of more than $1 billion in security-related investment projects in Xinjiang, up from $27 million in all of 2015, according to research in April by Chinese brokerage firm Industrial Securities.

Government procurement orders show millions spent on “unified combat platforms”—computer systems to analyze surveillance data from police and other government agencies.

Tahir Hamut, a Uighur poet and filmmaker, says Uighurs who had passports were called in to local police stations in May. He worried he would draw extra scrutiny for having been accused of carrying sensitive documents, including newspaper articles about Uighur separatist attacks, while trying to travel to Turkey to study in the mid-1990s. The aborted trip landed him in a labor camp for three years, he says.

He and his wife lined up at a police station with other Uighurs to have their fingerprints and blood samples taken. He says he was asked to read a newspaper for two minutes while police recorded his voice, and to turn his head slowly in front of a camera.

Later, his family’s passports were confiscated. After a friend was detained by police, he says, he assumed he also would be taken away. He says he paid officials a bribe of more than $9,000 to get the passports back, making up a story that his daughter had epilepsy requiring treatment in the U.S. Xinjiang’s Public Security Bureau, which is in charge of the region’s police forces, didn’t respond to a request for comment about the bribery.

“The day we left, I was filled with anxiety,” he says. “I worried what would happen if we were stopped going through security at the Urumqi airport, or going through border control in Beijing.”

He and his family made it to Virginia, where they have applied for political asylum.

Chinese authorities use forms to collect personal information from Uighurs. One form reviewed by the Journal asks about respondents’ prayer habits and if they have contacts abroad. There are sections for officials to rate “persons of interest” on a six-point scale and check boxes on whether they are “safe,” “average” or “unsafe.”

China Communications Services Co. Ltd., a subsidiary of state telecom giant China Telecom, has signed contracts this year worth more than $38 million to provide mosque surveillance and install surveillance-data platforms in Xinjiang, according to government procurement documents. The company declined to discuss the contracts, saying they constituted sensitive business information.

Xiamen Meiya Pico Information Co. Ltd. worked with police in Urumqi to adapt a hand-held device it sells for investigating economic crimes so it can scan smartphones for terrorism-related content.

A description of the device that recently was removed from the company’s website said it can read the files on 90% of smartphones and check findings against a police antiterror database. “Mostly, you’re looking for audio and video,” said Zhang Xuefeng, Meiya Pico’s chief marketing officer, in an interview.

Near the Xinjiang University campus in Urumqi, police sat at a wooden table recently, ordering some people walking by to hand over their phones.

“You just plug it in and it shows you what’s on the phone,” said one officer, brandishing a device similar to the one on Meiya Pico’s website. He declined to say what content they were checking for.

One recent afternoon in Korla, one of Xinjiang’s largest cities, only a trickle of people passed through the security checkpoint at the local bazaar, where vendors stared at darkened hallways empty of shoppers.

Li Qiang, the Han Chinese owner of a wine shop, said the security checks, while necessary for safety, were getting in the way of commerce. “As soon as you go out, they check your ID,” he said.

Authorities have built a network of detention facilities, officially referred to as education centers, across Xinjiang. In April, the official Xinjiang Daily newspaper said more than 2,000 people had been sent to a “study and training center” in the southern city of Hotan.

One new compound sits a half-hour drive south of Kashgar, a Uighur-dominated city near the border with Kyrgyzstan. It is surrounded by imposing walls topped with razor wire, with watchtowers at two corners. A slogan painted on the wall reads: “All ethnic groups should be like the pods of a pomegranate, tightly wrapped together.”

Villagers describe it as a detention center. A man standing near the entrance one recent night said it was a school and advised reporters to leave.

Mr. Hamut, the poet, says a relative in Kashgar was taken to a detention center after she participated in an Islamic ceremony, and another went missing soon after the family tried to call him from the U.S.

The local government in Kashgar didn’t respond to a request for comment.

Surveillance in and around Kashgar, where Han Chinese make up less than 7% of the population, is even tighter than in Urumqi. Drivers entering the city are screened intensively. A machine scans each driver’s face. Police officers inspect the engine and the trunk. Passengers must get out and run their bags through X-ray machines.

In Aksu, a dusty city a five-hour drive east of Kashgar, knife salesman Jiang Qiankun says his shop had to pay thousands of dollars for a machine that turns a customer’s ID card number, photo, ethnicity and address into a QR code that it lasers into the blade of any knife it sells. “If someone has a knife, it has to have their ID card information,” he says.
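
The registration step he describes amounts to packing the buyer’s ID details into a QR code for engraving. Below is a rough sketch using the open-source qrcode Python package; the field names and values are invented, and the vendor’s actual system is not public.

```python
# Encode invented buyer details into a QR code image that an engraver could
# then burn into a blade. Uses the "qrcode" package (plus Pillow) for the image.
import json
import qrcode

buyer = {
    "id_number": "000000000000000000",  # fabricated example value
    "name": "example buyer",
    "ethnicity": "example",
    "address": "example street, Aksu",
}

img = qrcode.make(json.dumps(buyer, ensure_ascii=False))
img.save("knife_registration.png")
```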

On the last day the Journal reporters were in Xinjiang, an unmarked car trailed them on a 5 a.m. drive to the Urumqi airport. During their China Southern Airlines flight to Beijing, a flight attendant appeared to train a police-style body camera attached to his belt on the reporters. Later, as passengers were disembarking, the attendant denied filming them, saying it was common for airline crew to wear the cameras as a security measure.

China Southern says the crew member was an air marshal, charged with safety on board.

Facebook has mapped populations in 23 countries as it explores satellites to expand internet

“Facebook doesn’t only know what its 2 billion users “Like.” It now knows where millions of humans live, everywhere on Earth, to within 15 feet. The company has created a data map of the human population by combining government census numbers with information it’s obtained from space satellites, according to Janna Lewis, Facebook’s head of strategic innovation partnerships and sourcing. A Facebook representative later told CNBC that this map currently covers 23 countries, up from 20 countries mentioned in this blog post from February 2016.

The mapping technology, which Facebook says it developed itself, can pinpoint any man-made structures in any country on Earth to a resolution of five meters. Facebook is using the data to understand the precise distribution of humans around the planet. That will help the company determine what types of internet service — based either on land, in the air or in space — it can use to reach consumers who now have no (or very low quality) internet connections.”

“Are you happy now? The uncertain future of emotion analytics”

Elise Thomas writes at Hopes & Fears:

“Right now, in a handful of computing labs scattered across the world, new software is being developed which has the potential to completely change our relationship with technology. Affective computing is about creating technology which recognizes and responds to your emotions. Using webcams, microphones or biometric sensors, the software uses a person’s physical reactions to analyze their emotional state, generating data which can then be used to monitor, mimic or manipulate that person’s emotions.”

“Corporations spend billions each year trying to build “authentic” emotional connections to their target audiences. Marketing research is one of the most prolific research fields around, conducting thousands of studies on how to more effectively manipulate consumers’ decision-making. Advertisers are extremely interested in affective computing and particularly in a branch known as emotion analytics, which offers unprecedented real-time access to consumers’ emotional reactions and the ability to program alternative responses depending on how the content is being received.

For example, if two people watch an advertisement with a joke and only one person laughs, the software can be programmed to show more of the same kind of advertising to the person who laughs while trying different sorts of advertising on the person who did not laugh to see if it’s more effective. In essence, affective computing could enable advertisers to create individually-tailored advertising en masse.”
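
Stripped to its essentials, the branching logic described above is simple; the sophistication lies in the emotion analytics feeding it. A sketch with invented ad names and reaction labels:

```python
# Keep whatever creative got a positive reaction; rotate to an alternative when
# it fell flat. "detected_reaction" would come from an emotion-analytics model.
from collections import defaultdict

AD_VARIANTS = ["joke_ad", "sentimental_ad", "celebrity_ad"]
next_variant = defaultdict(int)  # per-viewer pointer into AD_VARIANTS

def choose_next_ad(viewer_id: str, current_ad: str, detected_reaction: str) -> str:
    if detected_reaction == "laughed":
        return current_ad                        # more of the same kind
    next_variant[viewer_id] = (next_variant[viewer_id] + 1) % len(AD_VARIANTS)
    return AD_VARIANTS[next_variant[viewer_id]]  # try a different sort of ad

print(choose_next_ad("viewer_a", "joke_ad", "laughed"))      # -> joke_ad
print(choose_next_ad("viewer_b", "joke_ad", "no_reaction"))  # -> sentimental_ad
```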

“Say 15 years from now a particular brand of weight loss supplements obtains a particular girl’s information and locks on. When she scrolls through her Facebook, she sees pictures of rail-thin celebrities, carefully calibrated to capture her attention. When she turns on the TV, it automatically starts on an episode of “The Biggest Loser,” tracking her facial expressions to find the optimal moment for a supplement commercial. When she sets her music on shuffle, it “randomly” plays through a selection of the songs which make her sad. This goes on for weeks.

Now let’s add another layer. This girl is 14, and struggling with depression. She’s being bullied in school. Having become the target of a deliberate and persistent campaign by her technology to undermine her body image and sense of self-worth, she’s at risk of making some drastic choices.”

The data analytics company Cambridge Analytica

The Guardian is running an article about a ‘mysterious’ big-data analytics company called Cambridge Analytica and its activities with SCL Group—a 25-year-old military psyops company in the UK later bought by “secretive hedge fund billionaire” Robert Mercer. In the article, a former employee calls it “this dark, dystopian data company that gave the world Trump.”

Mercer, with a background in computer science, is alleged to be at the centre of a multimillion-dollar propaganda network.

“Facebook was the source of the psychological insights that enabled Cambridge Analytica to target individuals. It was also the mechanism that enabled them to be delivered on a large scale. The company also (perfectly legally) bought consumer datasets — on everything from magazine subscriptions to airline travel — and uniquely it appended these with the psych data to voter files… Finding “persuadable” voters is key for any campaign and with its treasure trove of data, Cambridge Analytica could target people high in neuroticism, for example, with images of immigrants “swamping” the country.

The key is finding emotional triggers for each individual voter. Cambridge Analytica worked on campaigns in several key states for a Republican political action committee. Its key objective, according to a memo the Observer has seen, was “voter disengagement” and “to persuade Democrat voters to stay at home”… In the U.S., the government is bound by strict laws about what data it can collect on individuals. But, for private companies anything goes.”

Facebook: Cracking the Code (2017)

“What’s on your mind?” It’s the friendly Facebook question which lets you share how you’re feeling. It’s also the question that unlocks the details of your life and helps turn your thoughts into profits.

Facebook has the ability to track much of your browsing history, even when you’re not logged on, and even if you aren’t a member of the social network at all. This is one of the methods used to deliver targeted advertising and ‘news’ to your Facebook feed. This is why you are unlikely to see anything that challenges your world view.

This feedback loop is fuelling the rise and power of ‘fake news’. “We’re seeing news that’s tailored ever more tightly towards those kinds of things that people will click on, and will share, rather than things that perhaps are necessarily good for them”, says one Media Analyst.

This information grants huge power to those with access to it. Republican Party strategist Patrick Ruffini says, “What it does give us is much greater level of certainty and granularity and precision down to the individual voter, down to the individual precinct about how things are going to go”. As a result, former Facebook journalist Adam Schrader thinks that there’s “a legitimate argument to this that Facebook influenced the election, the United States Election results.”

How algorithms (secretly) run the world

When you browse online for a new pair of shoes, pick a movie to stream on Netflix or apply for a car loan, an algorithm likely has a say in the outcome.

The complex mathematical formulas are playing a growing role in all walks of life: from detecting skin cancers to suggesting new Facebook friends, deciding who gets a job, how police resources are deployed, who gets insurance at what cost, or who is on a “no fly” list.

Algorithms are being used—experimentally—to write news articles from raw data, while Donald Trump’s presidential campaign was helped by behavioral marketers who used an algorithm to locate the highest concentrations of “persuadable voters.”

But while such automated tools can inject a measure of objectivity into erstwhile subjective decisions, fears are rising over the lack of transparency algorithms can entail, with pressure growing to apply standards of ethics or “accountability.”

Data scientist Cathy O’Neil cautions about “blindly trusting” formulas to determine a fair outcome.

“Algorithms are not inherently fair, because the person who builds the model defines success,” she said.

O’Neil argues that while some algorithms may be helpful, others can be nefarious. In her 2016 book, “Weapons of Math Destruction,” she cites some troubling examples in the United States:

  • Public schools in Washington DC in 2010 fired more than 200 teachers—including several well-respected instructors—based on scores in an algorithmic formula which evaluated performance.
  • A man diagnosed with bipolar disorder was rejected for employment at seven major retailers after a third-party “personality” test deemed him a high risk based on its algorithmic classification.
  • Many jurisdictions are using “predictive policing” to shift resources to likely “hot spots.” O’Neil says that depending on how data is fed into the system, this could lead to discovery of more minor crimes and a “feedback loop” which stigmatizes poor communities.
  • Some courts rely on computer-ranked formulas to determine jail sentences and parole, which may discriminate against minorities by taking into account “risk” factors such as their neighborhoods and friend or family links to crime.
  • In the world of finance, brokers “scrape” data from online and other sources in new ways to make decisions on credit or insurance. This too often amplifies prejudice against the disadvantaged, O’Neil argues.

Her findings were echoed in a White House report last year warning that algorithmic systems “are not infallible—they rely on the imperfect inputs, logic, probability, and people who design them.”

“Yahoo has a creepy plan for advertising billboards to spy on you”

Yahoo has filed a patent for a type of smart billboard that would collect people’s information and use it to deliver targeted ad content in real-time.

To achieve that functionality, the billboards would use a variety of sensor systems, including cameras and proximity technology, to capture real-time audio, video and even biometric information about potential target audiences.

But the tech company doesn’t just want to know about a passing vehicle. It also wants to know who the occupants are inside of it.

That’s why Yahoo is prepared to cooperate with cell towers and telecommunications companies to learn as much as possible about each vehicle’s occupants.

“Various types of data (e.g., cell tower data, mobile app location data, image data, etc.) can be used to identify specific individuals in an audience in position to view advertising content. Similarly, vehicle navigation/tracking data from vehicles equipped with such systems could be used to identify specific vehicles and/or vehicle owners. Demographic data (e.g., as obtained from a marketing or user database) for the audience can thus be determined for the purpose of, for example, determining whether and/or the degree to which the demographic profile of the audience corresponds to a target demographic.”
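
Read plainly, the patent language boils down to scoring how well the detected audience matches an advertiser’s target segments and choosing content based on that score. A simplified sketch with invented segment labels and an arbitrary threshold:

```python
# Score the overlap between the detected audience and a target demographic,
# then decide which creative to show. Segment names and threshold are invented.
def audience_match(audience: list[str], target_segments: set[str]) -> float:
    """Fraction of the detected audience falling into the advertiser's target."""
    if not audience:
        return 0.0
    return sum(person in target_segments for person in audience) / len(audience)

audience = ["commuter_25_34", "parent_35_44", "student_18_24", "commuter_25_34"]
target = {"commuter_25_34", "student_18_24"}

score = audience_match(audience, target)
print(f"match score: {score:.2f}")
print("show targeted creative" if score >= 0.5 else "fall back to generic content")
```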

Machine Logic: Our lives are ruled by big tech’s decisions by data

The Guardian’s Julia Powles writes about how, with the advent of artificial intelligence and so-called “machine learning,” decisions are increasingly shaped by calculations and data analytics rather than by traditional human judgement:

“Jose van Dijck, president of the Dutch Royal Academy and the conference’s keynote speaker, expands: Datification is the core logic of what she calls “the platform society,” in which companies bypass traditional institutions, norms and codes by promising something better and more efficient — appealing deceptively to public values, while obscuring private gain. Van Dijck and peers have nascent, urgent ideas. They commence with a pressing agenda for strong interdisciplinary research — something Kate Crawford is spearheading at Microsoft Research, as are many other institutions, including the new Leverhulme Centre for the Future of Intelligence. There’s the old theory to confront, that this is a conscious move on the part of consumers and, if so, there’s always a theoretical opt-out. Yet even digital activists plot by Gmail, concedes Fieke Jansen of the Berlin-based advocacy organisation Tactical Tech. The Big Five tech companies, as well as the extremely concentrated sources of finance behind them, are at the vanguard of “a society of centralized power and wealth.” “How did we let it get this far?” she asks. Crawford says there are very practical reasons why tech companies have become so powerful. “We’re trying to put so much responsibility on to individuals to step away from the ‘evil platforms,’ whereas in reality, there are so many reasons why people can’t. The opportunity costs to employment, to their friends, to their families, are so high,” she says.”

Dark Patterns: User Interfaces Designed to Trick People

“A Dark Pattern is a user interface that has been carefully crafted to trick users into doing things they wouldn’t normally do.”

“Normally when you think of “bad design”, you think of the creator as being sloppy or lazy but with no ill intent. This type of bad design is known as a “UI anti-pattern”. Dark Patterns are different – they are not mistakes, they are carefully crafted with a solid understanding of human psychology, and they do not have the user’s interests in mind.”

“London-based UX designer Harry Brignull documents this on his website, darkpatterns.org, offering plenty of examples of deliberately confusing or deceptive user interfaces. These dark patterns trick unsuspecting users into a gamut of actions: setting up recurring payments, purchasing items surreptitiously added to a shopping cart, or spamming all contacts through prechecked forms on Facebook games, etc.”

What Stores See When They Spy on Shoppers
