Resources

Phones Can Now Tell Who Is Carrying Them From Their Users’ Gaits

Most online fraud involves identity theft, which is why businesses that operate on the web have a keen interest in distinguishing impersonators from genuine customers. Passwords help, but many can be guessed or are jotted down imprudently. Newer phones, tablets, and laptop and desktop computers often have beefed-up security with fingerprint and facial recognition, but these can be spoofed. To overcome these shortcomings, the next level of security is likely to identify people using traits that are harder to copy, such as the way they walk.

Many online security services already use a system called device fingerprinting. This employs software to note things like the model of the gadget employed by a particular user; its hardware configuration; its operating system; the apps which have been downloaded onto it; and other features, sometimes including the Wi-Fi networks it regularly connects through and the devices, such as headsets, it plugs into.
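As a rough illustration of the idea, a device fingerprint can be as simple as a hash over whatever attributes the device exposes. The attribute names below are invented for illustration; commercial systems observe far more, and weigh them statistically rather than hashing them.

```python
import hashlib

def device_fingerprint(attributes: dict) -> str:
    """Hash a set of observable device attributes into a stable identifier.
    Attribute names are illustrative, not any vendor's actual schema."""
    # Sort keys so the same attributes always produce the same string.
    canonical = "|".join(f"{key}={attributes[key]}" for key in sorted(attributes))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# The same device observed twice yields the same print...
fp1 = device_fingerprint({"model": "Pixel 7", "os": "Android 14", "apps": "bank,maps"})
fp2 = device_fingerprint({"os": "Android 14", "model": "Pixel 7", "apps": "bank,maps"})
# ...while changing any single attribute changes it completely.
fp3 = device_fingerprint({"model": "Pixel 7", "os": "Android 15", "apps": "bank,maps"})
```

This also shows why the technique is fragile: as platforms restrict which attributes are remotely observable, the fingerprint loses entropy and collisions become common.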

LexisNexis Risk Solutions, an American analytics firm, has catalogued more than 4 billion phones, tablets and other computers in this way for banks and other clients. Roughly 7% of them have been used for shenanigans of some sort. But device fingerprinting is becoming less useful. Apple, Google and other makers of equipment and operating systems have been steadily restricting the range of attributes that can be observed remotely. That is why a new approach, behavioral biometrics, is gaining ground. It relies on the wealth of measurements made by today’s devices. These include data from accelerometers and gyroscopic sensors that reveal how people hold their phones when using them, how they carry them and even the way they walk. Touchscreens, keyboards and mice can be monitored to show the distinctive ways in which someone’s fingers and hands move. Sensors can detect whether a phone has been set down on a hard surface such as a table or dropped lightly on a soft one such as a bed. If the hour is appropriate, this action could be used to infer that the user has retired for the night. These traits can then be used to determine whether someone attempting to make a transaction is likely to be the device’s habitual user.
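A minimal sketch of the idea, assuming a stream of accelerometer magnitude readings: reduce a window of samples to a couple of summary features and compare them against an enrolled profile. Real behavioral-biometric systems use far richer features (step cadence, stride symmetry, touch dynamics) and learned models; everything below is invented for illustration.

```python
import math

def gait_features(accel_magnitudes):
    """Summarize a window of accelerometer magnitude samples as
    (mean, standard deviation). A deliberately crude feature set."""
    n = len(accel_magnitudes)
    mean = sum(accel_magnitudes) / n
    std = math.sqrt(sum((s - mean) ** 2 for s in accel_magnitudes) / n)
    return mean, std

def matches_profile(observed, enrolled, tolerance=0.3):
    """Accept if every observed feature is within `tolerance` of the
    enrolled value; a stand-in for a proper statistical classifier."""
    return all(abs(o - e) <= tolerance for o, e in zip(observed, enrolled))

enrolled = gait_features([9.6, 10.4, 9.7, 10.3, 9.5, 10.5])  # habitual user
observed = gait_features([9.7, 10.3, 9.6, 10.4, 9.6, 10.4])  # similar gait
stranger = gait_features([8.0, 12.0, 7.5, 12.5, 8.2, 11.8])  # wilder swings
```

The habitual user's fresh samples fall within tolerance of the enrolled profile; the stranger's much larger swing variance does not.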

Used wisely, the report says, behavioral biometrics could authenticate account-holders without badgering them for additional passwords or security questions; it could even unlock the doors of a vehicle once the gait of its driver, as measured by his phone, is recognized.

“Used unwisely, however, the system could become yet another electronic spy, permitting complete strangers to monitor your actions, from the moment you reach for your phone in the morning, to when you fling it on the floor at night,” the report adds.

Thanks To Facebook, Your Cellphone Company Is Watching You More Closely Than Ever

A confidential Facebook document reviewed by The Intercept shows that Facebook courts carriers, along with phone makers — some 100 different companies in 50 countries — by offering the use of even more surveillance data, pulled straight from your smartphone by Facebook itself.

Offered to select Facebook partners, the data includes not just technical information about Facebook members’ devices and use of Wi-Fi and cellular networks, but also their past locations, interests, and even their social groups. This data is sourced not just from the company’s main iOS and Android apps, but from Instagram and Messenger as well. The data has been used by Facebook partners to assess their standing against competitors, including customers lost to and won from them, but also for more controversial uses like racially targeted ads.

Some experts are particularly alarmed that Facebook has marketed the use of the information — and appears to have helped directly facilitate its use, along with other Facebook data — for the purpose of screening customers on the basis of likely creditworthiness. Such use could potentially run afoul of federal law, which tightly governs credit assessments. Facebook said it does not provide creditworthiness services and that the data it provides to cellphone carriers and makers does not go beyond what it was already collecting for other uses.

Google Uses Gmail To Track a History of Things You Buy — and It’s Hard To Delete

CNBC’s Todd Haselton has discovered that Google saves years of information on the purchases you’ve made, even outside Google, and pulls this information from Gmail.

A page called “Purchases” shows an accurate list of many — though not all — of the things I’ve bought dating back to at least 2012. I made these purchases using online services or apps such as Amazon, DoorDash or Seamless, or in stores such as Macy’s, but never directly through Google. But because the digital receipts went to my Gmail account, Google has a list of info about my buying habits. Google even knows about things I long forgot I’d purchased, like dress shoes I bought inside a Macy’s store on Sept. 14, 2015.

But there isn’t an easy way to remove all of this. You can delete all the receipts in your Gmail inbox and archived messages. But, if you’re like me, you might save receipts in Gmail in case you need them later for returns. There is no way to delete them from Purchases without also deleting them from Gmail — when you click on the “Delete” option in Purchases, it simply guides you back to the Gmail message. Google’s privacy page says that only you can view your purchases. But it says “Information about your orders may also be saved with your activity in other Google services” and that you can see and delete this information on a separate “My Activity” page. Except you can’t. Google’s activity controls page doesn’t give you any ability to manage the data it stores on Purchases.

Google says you can turn off the tracking entirely, but when CNBC tried this, it didn’t work.

Google Home Calls the Police, Always Listening

According to ABC News, officers were called to a home outside Albuquerque, New Mexico this week when a Google Home called 911 and the operator heard a confrontation in the background. Police say that Eduardo Barros was house-sitting at the residence with his girlfriend and their daughter. Barros allegedly pulled a gun on his girlfriend when they got into an argument and asked her: “Did you call the sheriffs?” Google Home apparently heard “call the sheriffs,” and proceeded to call the sheriffs. A SWAT team arrived at the home and after negotiating for hours, they were able to take Barros into custody… “The unexpected use of this new technology to contact emergency services has possibly helped save a life,” Bernalillo County Sheriff Manuel Gonzales III said in a statement.

“It’s easy to imagine police getting tired of being called to citizens’ homes every time they watch the latest episode of Law and Order,” quips Gizmodo. But they also call the incident “a clear reminder that smart home devices are always listening.”

Chinese Companies Using GPS-Tracking Smartwatches To Monitor and Alert Street Cleaners

Street cleaners in parts of China are reportedly being forced to wear GPS-tracking smartwatches so employers can monitor how hard they work, sparking public outrage and concern over increasing mass surveillance across the country.

If the smartwatch detects a worker standing still for over 20 minutes, it sounds an alarm. “Add oil, add oil [work harder, work harder!],” the wristbands’ alarm says, several cleaners from the eastern city of Nanjing told Jiangsu Television earlier this month.

The smartwatch not only tracks the cleaners’ locations but also reports their activity back to the company’s control room, where a big screen displays their locations as a cluster of red dots on a map.

“It knows everything,” an anonymous cleaner told a reporter in the Jiangsu Television report. “Supervisors will come if we don’t move after hearing the alarm.”

Following backlash, the company said it removed the alarm function from the smartwatch, but reports maintain the employees are still being required to wear the device so their location can be tracked.

The Chinese Government is already in the process of building a Social Credit System aimed at monitoring the behaviour of its 1.4 billion citizens with the help of an extensive network of CCTV cameras and facial recognition technology.

Maya Wang, a senior researcher on China at Human Rights Watch, said the use of surveillance technology by the Government was sending private companies a message that it was “okay to [monitor] people”.

We’re All Being Judged By a Secret ‘Trustworthiness’ Score

Nearly everything we buy, how we buy, and where we’re buying from is secretly fed into AI-powered verification services that help companies guard against credit-card and other forms of fraud, according to the Wall Street Journal.

More than 16,000 signals are analyzed by a service called Sift, which generates a “Sift score” ranging from 1 to 100. The score is used to flag devices, credit cards and accounts that a vendor may want to block based on a person or entity’s overall “trustworthiness” score, according to a company spokeswoman.

From the Sift website: “Each time we get an event, be it a page view or an API event, we extract features related to those events and compute the Sift Score. These features are then weighed based on fraud we’ve seen both on your site and within our global network, and determine a user’s Score. There are features that can negatively impact a Score as well as ones which have a positive impact.”
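The description above amounts to a weighted sum of feature values clamped into a 1-to-100 range. A toy sketch, with feature names and weights invented for illustration (Sift's actual 16,000-plus signals and their learned weights are proprietary):

```python
def risk_score(features: dict, weights: dict) -> int:
    """Weighted-feature score in the spirit of the description above.
    Positive weights push the score up (riskier); negative weights
    (e.g. account age) pull it down."""
    raw = sum(weights.get(name, 0.0) * value for name, value in features.items())
    return max(1, min(100, round(raw)))  # clamp into the 1-100 range

# Hypothetical weights: a learned model would supply these.
weights = {
    "mismatched_billing_country": 40.0,
    "failed_logins": 10.0,
    "account_age_days": -0.1,
}

risky = risk_score(
    {"mismatched_billing_country": 1, "failed_logins": 5, "account_age_days": 2},
    weights,
)  # new account, billing mismatch, repeated login failures
trusted = risk_score({"account_age_days": 900}, weights)  # long-lived, quiet account
```

A vendor would then block or challenge transactions whose score crosses some threshold it chooses.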

The system is similar to a credit score except there’s no way to find out your own Sift score.

Airbnb Has a Hidden-Camera Problem

Airbnb’s rules allow cameras outdoors and in living rooms and common areas, but never in bathrooms or anywhere guests plan to sleep, including rooms with foldout beds. Starting in early 2018, Airbnb added another layer of disclosure: If hosts indicate they have cameras anywhere on their property, guests receive a pop-up informing them where the cameras are located and where they are aimed. To book the property, the guests must click “agree,” indicating that they’re aware of the cameras and consent to being filmed.

Of course, hosts have plenty of reason to train cameras on the homes they rent out to strangers. They can catch guests who attempt to steal, or who trash the place, or who initially say they’re traveling alone, then show up to a property with five people. A representative for Airbnb’s Trust & Safety communications department told me the company tries to filter out hosts who may attempt to surveil guests by matching them against sex-offender and felony databases. The company also uses risk scores to flag suspicious behavior, in addition to reviewing and booting hosts with consistently poor scores.

If a guest contacts Airbnb’s Trust & Safety team with a complaint about a camera, employees offer new accommodations if necessary and open an investigation into the host. […] But four guests who found cameras in their rentals told The Atlantic the company has inconsistently applied its own rules when investigating their claims, providing them with incorrect information and making recommendations that they say risked putting them in harm’s way. “There have been super terrible examples of privacy violations by AirBnB hosts, e.g., people have found cameras hidden in alarm clocks in their bedrooms,” wrote Jeff Bigham, a computer-science professor at Carnegie Mellon whose claim was initially denied after he reported cameras in his rental. “I feel like our experience is in some ways more insidious. If you find a truly hidden camera in your bedroom or bathroom, Airbnb will support you. If you find an undisclosed camera in the private living room, Airbnb will not support you.”

Amazon and Google Fight Bill That Prohibits Secretly Recording You

On Wednesday, the Illinois State Senate passed the Keep Internet Devices Safe Act, a bill that would ban manufacturers of devices that can record audio from doing so remotely without disclosing it to the customer. But after lobbying from trade associations that represent the interests of Google, Amazon — makers of the microphone-enabled Google Home and Alexa smart speakers, respectively — and Microsoft, among other companies, the interests of big tech won out… In its current, neutered form, the bill provides exclusive authority to the Attorney General to enforce the Act, which means regular citizens won’t be able to bring forward a case regarding tech giants recording them in their homes.

Ars Technica notes the move comes after Amazon admitted thousands of their employees listen to Alexa recordings — “something not mentioned in Echo’s terms of service or FAQ pages.”

Vice points out that sometimes those recordings are shared “even after users opt out of having their data used in the program.”

Police Are Using Google’s Location Data From ‘Hundreds of Millions’ of Phones

Police have used information from the search giant’s Sensorvault database to aid in criminal cases across the country, according to a report Saturday by The New York Times. The database has detailed location records from hundreds of millions of phones around the world, the report said. It’s meant to collect information on the users of Google’s products so the company can better target them with ads, and see how effective those ads are. But police have been tapping into the database to help find missing pieces in investigations.

Law enforcement can get “geofence” warrants seeking location data. Those kinds of requests have spiked in the last six months, and the company has received as many as 180 requests in one week, according to the report…. For geofence warrants, police carve out a specific area and time period, and Google can gather information from Sensorvault about the devices that were present during that window, according to the report. The information is anonymous, but police can analyze it and narrow it down to a few devices they think might be relevant to the investigation. Then Google reveals those users’ names and other data, according to the Times…
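The filtering step such a warrant requests can be pictured as a simple query over location records: keep only the pings inside the geographic bounds and time window, then return the anonymized device identifiers. The schema below is invented for illustration; Sensorvault's actual structure is not public.

```python
from dataclasses import dataclass

@dataclass
class Ping:
    device_id: str  # anonymized identifier, per the report
    lat: float
    lon: float
    ts: int         # Unix timestamp

def geofence_query(pings, lat_range, lon_range, t_range):
    """Return the anonymized device IDs observed inside the given
    bounding box during the given time window."""
    return sorted({
        p.device_id for p in pings
        if lat_range[0] <= p.lat <= lat_range[1]
        and lon_range[0] <= p.lon <= lon_range[1]
        and t_range[0] <= p.ts <= t_range[1]
    })

pings = [
    Ping("dev-a", 40.712, -74.006, 1000),  # inside the area, inside the window
    Ping("dev-b", 40.713, -74.005, 5000),  # inside the area, outside the window
    Ping("dev-c", 41.000, -74.006, 1500),  # outside the area
]
hits = geofence_query(pings, (40.70, 40.72), (-74.01, -74.00), (900, 2000))
```

Only after police narrow the anonymized hits to devices of interest does Google, per the Times, reveal the associated account names.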

[T]he AP reported last year that Google tracked people’s location even after they’d turned off location-sharing on their phones.

Google’s data dates back “nearly a decade,” the Times reports — though in a statement, Google’s director of law enforcement and information security insisted “We vigorously protect the privacy of our users while supporting the important work of law enforcement.” (The Times also interviewed a man who was arrested and jailed for a week last year based partly on Google’s data — before eventually being released after the police found a more likely suspect.)

Two-Thirds of Consumers Don’t Expect Google To Track Them the Way It Does

Last summer, an AP investigation found that Google’s location tracking remains on even if you turn it off in Google Maps, Search, and other apps. Research from Vanderbilt professor Douglas Schmidt found that Google engages in “passive” data collection, often without the user’s knowledge. His research also showed that Google utilizes data collected from other sources to de-anonymize existing user data.

Digital Content Next, the trade association of online publishers, surveyed a nationally representative sample to find out what people expect from Google — and, as with a similar study we conducted last year about Facebook, the results were unsettling.

Our findings show that many of Google’s data practices deviate from consumer expectations. We find it even more significant that consumers’ expectations are at an all-time low even after 2018, a year in which awareness around consumer privacy reached peak heights.

The results of the study are consistent with our Facebook study: People don’t want surveillance advertising. A majority of consumers indicated they don’t expect to be tracked across Google’s services, let alone be tracked across the web in order to make ads more targeted.

There was only one question where a small majority of respondents felt that Google was acting according to their expectations: whether Google merges data from search queries with other data it collects on its own services. By a similarly small majority, respondents also said they don’t expect Google to connect that data back to a user’s personal account. Google began doing both of these in 2016, after previously promising it wouldn’t.

‘Highly confidential’ documents reveal Facebook used VPN app to track competitors

Internal documents, made public as part of a cache of documents released by the UK Parliament’s Select Committee investigating Facebook, show just how close an eye the social network was keeping on competitors like WhatsApp and Snapchat, both of which became acquisition targets.

The documents, which are labeled “highly confidential,” show slides from an internal presentation in 2013 that compares Facebook’s reach to competing apps, including WhatsApp and Snapchat.

Facebook’s presentation relied on data from Onavo, the virtual private network (VPN) service which Facebook also acquired several months later. Facebook’s use of Onavo, which has been likened to “corporate spyware,” has itself been controversial.

The company was forced to remove Onavo from Apple’s App Store earlier this year after Apple changed its developer guidelines to prohibit apps from collecting data about which other services are installed on its users’ phones.

You Will Soon Be Able To Pay Your Subway Fare With Your Face in China

China has led the world in adoption of smartphone-based mobile payments to the point where the central bank had to remind merchants not to discriminate against cash. The next phase of development may be to pay with your face.

In Shenzhen, the local subway operator is testing various advanced technologies backed by the ultra-fast 5G network, including facial-recognition ticketing.

At the Futian station, instead of presenting a ticket or scanning a QR bar code on their smartphones, commuters can scan their faces on a tablet-sized screen mounted on the entrance gate and have the fare automatically deducted from their linked accounts.

Currently in trial mode, the facial-recognition ticketing service could in future help improve the efficiency of handling up to 5 million rides per day on the city’s subway network. Shenzhen Metro did not say when it will roll out the facial-payment service.

The introduction of facial recognition-and-payment services to the public transit system marks another step by China toward integrating facial recognition and other artificial intelligence-based technology into everyday life in the world’s most populous nation.

Consumers can already pay for fried chicken at KFC in China with its “Smile to Pay” facial recognition system, first introduced at an outlet in Hangzhou in January 2017.

“To use facial ticketing in the future, passengers will also need preregistration of their facial information and link their payment methods to their accounts, just like them making payments at the KFC restaurant,” said a staff member at the Futian station’s demonstration area in Shenzhen.

Chinese cities are among the most digitally savvy and cashless in the world, with about 583 million people using their smartphones to make payments in China last year, according to the China Internet Network Information Center. Nearly 68 per cent of China’s internet users used a mobile wallet for their offline payments.

Hard Disks Can Be Turned Into Listening Devices

Researchers from the University of Michigan and Zhejiang University in China have found that hard disk drives can be turned into listening devices, using malicious firmware and signal processing calculations.

For a study titled “Hard Drive of Hearing: Disks that Eavesdrop with a Synthesized Microphone,” computer scientists Andrew Kwong, Wenyuan Xu, and Kevin Fu describe an acoustic side-channel that can be accessed by measuring how sound waves make hard disk parts vibrate. “Our research demonstrates that the mechanical components in magnetic hard disk drives behave as microphones with sufficient precision to extract and parse human speech,” their paper, obtained by The Register ahead of its formal publication, stated. “These unintentional microphones sense speech with high enough fidelity for the Shazam service to recognize a song recorded through the hard drive.”

The team’s research work, scheduled to be presented in May at the 2019 IEEE Symposium on Security and Privacy, explores how it’s possible to alter HDD firmware to measure the offset of a disk drive’s read/write head from the center of the track it’s seeking. The offset is referred to as the Positional Error Signal (PES) and hard drives monitor this signal to keep the read/write head in the optimal position for reading and writing data. PES measurements must be very fine because drive heads can only be off by a few nanometers before data errors arise. The sensitivity of the gear, however, means human speech is sufficient to move the needle, so to speak. Vibrations from HDD parts don’t yield particularly good sound, but with digital filtering techniques, human speech can be discerned, given the right conditions.
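The filtering idea can be sketched in a few lines: subtract a trailing moving average from the signal to strip out the slow head-position drift, leaving the faster, vibration-induced component. This toy high-pass filter merely stands in for the far more sophisticated signal processing the researchers actually used.

```python
def highpass(signal, window=4):
    """Remove the slow component of a signal by subtracting a trailing
    moving average over `window` samples, keeping fast oscillations."""
    filtered = []
    for i, sample in enumerate(signal):
        lo = max(0, i - window + 1)
        avg = sum(signal[lo:i + 1]) / (i + 1 - lo)
        filtered.append(sample - avg)
    return filtered

# A constant offset (the head sitting steadily off-track) filters to zero...
steady = highpass([100.0] * 8)
# ...while a fast alternation (acoustic vibration riding on that offset) survives.
vibrating = highpass([100.0 + (1.0 if i % 2 else -1.0) for i in range(8)])
```

The surviving fast component is what, after much heavier filtering in the real attack, becomes intelligible enough for speech recognition.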

“Flashing HDD firmware is a prerequisite for the snooping […] because the ATA protocol does not expose the PES,” The Register reports. “To exfiltrate captured data, the three boffins suggest transmitting it over the internet by modifying Linux operating system files to create a reverse shell with root privileges or storing it to disk for physical recovery at a later date.”

The researchers note that this technique does require a fairly loud conversation to take place near the eavesdropping hard drive. “To record comprehensible speech, the conversation had to reach 85 dBA, with 75 dBA being the low threshold for capturing muffled sound,” the report says. “To get Shazam to identify recordings captured through a hard drive, the source file had to be played at 90 dBA. Which is pretty loud. Like lawn mower or food blender loud.”

Age of Surveillance Capitalism: “We Thought We Were Searching Google, But Google Was Searching Us”

Corporations have created a new kind of marketplace out of our private human experiences. That is the conclusion of an explosive new book that argues big tech platforms like Facebook and Google are elephant poachers, and our personal data is ivory tusks. Author Shoshana Zuboff writes in “The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power”: “At its core, surveillance capitalism is parasitic and self-referential. It revives Karl Marx’s old image of capitalism as a vampire that feeds on labor, but with an unexpected turn. Instead of labor, surveillance capitalism feeds on every aspect of every human’s experience.”

Facebook admits 18% of Research spyware users were teens, not <5%

Facebook has changed its story after initially trying to downplay how it targeted teens with its Research program, which a TechCrunch investigation revealed was paying them gift cards to monitor all their mobile app usage and browser traffic. “Less than 5 percent of the people who chose to participate in this market research program were teens,” a Facebook spokesperson told TechCrunch and many other news outlets in a damage-control effort 7 hours after we published our report on January 29th. At the time, Facebook claimed that it had removed its Research app from iOS. The next morning we learned that wasn’t true, as Apple had already forcibly blocked the Facebook Research app for violating its Enterprise Certificate program, which is supposed to be reserved for companies distributing internal apps to employees.

It turns out that wasn’t the only time Facebook deceived the public in its response to the Research VPN scandal. TechCrunch has obtained Facebook’s unpublished February 21st response to questions about the Research program posed in a letter from Senator Mark Warner, who wrote to CEO Mark Zuckerberg that “Facebook’s apparent lack of full transparency with users – particularly in the context of ‘research’ efforts – has been a source of frustration for me.”

In the response from Facebook’s VP of US public policy Kevin Martin, the company admits that (emphasis ours) “At the time we ended the Facebook Research App on Apple’s iOS platform, less than 5 percent of the people sharing data with us through this program were teens. Analysis shows that number is about 18 percent when you look at the complete lifetime of the program, and also add people who had become inactive and uninstalled the app.” So 18 percent of research testers were teens; it was only less than 5 percent when Facebook got caught. Given that users age 13 to 35 were eligible for Facebook’s Research program, 13 to 18 year olds made up 22 percent of the age range. That means Facebook clearly wasn’t trying to minimize teen involvement, and teens were hardly a tiny fraction of users.

Unearthed Emails Show Google, Ad Giants Know They Break Privacy Laws

Privacy warriors have filed fresh evidence in their ongoing battle against real-time web ad exchange systems, which campaigners claim trample over Europe’s data protection laws. The new filings — submitted today to regulators in the UK, Ireland, and Poland — allege that Google and industry body the Interactive Advertising Bureau (IAB) are well aware that their advertising networks flout the EU’s privacy-safeguarding GDPR, and yet are doing nothing about it. The IAB, Google — which is an IAB member — and others in the ad-slinging world insist they aren’t doing anything wrong. The fresh submissions come soon after the UK Information Commissioner’s Office (ICO) revealed plans to probe programmatic ads. These are adverts that are selected and served on-the-fly as you visit a webpage, using whatever personal information has been scraped together about you to pick an ad most relevant to your interests. […] The ICO’s investigation will focus on how well informed people are about how their personal information is used for this kind of online advertising, which laws ad-technology firms rely on for processing said private data, and whether users’ data is secure as it is shared on these platforms.

Google Displays Fake Phone Numbers For Some Local Businesses In Toronto So They Can Record Calls

A spokesperson for Google has confirmed the service they’ve launched in Vancouver and Toronto to connect potential customers to trusted service providers funnels customers through ostensibly local phone numbers that are actually owned by Google for the purpose of call monitoring.

Google Local Services is an addition to its search platform that connects potential customers to local service providers who pay for the advertising. It launched in Toronto and Vancouver last December for locksmiths and heating, cooling and ventilation professionals. When someone in Toronto searches for a locksmith, for example, they’ll see some service providers with green check marks next to the company name, meaning they’ve been vetted by Google.

The number next to the listing has a local area code, but that’s not the business’ real contact info. Instead, it’s a dummy Google number that will route you to the business — after informing you that it will be recording anything you say.

Facebook Pays Teens To Install VPN That Spies On Them

Since 2016, Facebook has been paying users ages 13 to 35 up to $20 per month plus referral fees to sell their privacy by installing the iOS or Android “Facebook Research” app. Facebook even asked users to screenshot their Amazon order history page. The program is administered through the beta-testing services Applause, BetaBound and uTest to cloak Facebook’s involvement, and is referred to in some documentation as “Project Atlas,” a fitting name for Facebook’s effort to map new trends and rivals around the globe.

We asked Guardian Mobile Firewall’s security expert Will Strafach to dig into the Facebook Research app, and he told us that “If Facebook makes full use of the level of access they are given by asking users to install the Certificate, they will have the ability to continuously collect the following types of data: private messages in social media apps, chats from instant messaging apps — including photos/videos sent to others, emails, web searches, web browsing activity, and even ongoing location information by tapping into the feeds of any location tracking apps you may have installed.” It’s unclear exactly what data Facebook is concerned with, but it gets nearly limitless access to a user’s device once they install the app.

Prisons Across the United States Are Quietly Building Databases of Incarcerated People’s Voice Prints

In New York and other states across the country, authorities are acquiring technology to extract and digitize the voices of incarcerated people into unique biometric signatures, known as voice prints.

Prison authorities have quietly enrolled hundreds of thousands of incarcerated people’s voice prints into large-scale biometric databases. Computer algorithms then draw on these databases to identify the voices taking part in a call and to search for other calls in which the voices of interest are detected. Some programs, like New York’s, even analyze the voices of call recipients outside prisons to track which outsiders speak to multiple prisoners regularly.

Corrections officials representing the states of Texas, Florida, and Arkansas, along with Arizona’s Yavapai and Pinal counties; Alachua County, Florida; and Travis County, Texas, also confirmed that they are actively using voice recognition technology today. And a review of contracting documents identified other jurisdictions that have acquired similar voice-print capture capabilities: Connecticut and Georgia state corrections officials have signed contracts for the technology.

Authorities and prison technology companies say this mass biometric surveillance supports prison security and fraud prevention efforts. But civil liberties advocates argue that the biometric buildup has been neither transparent nor consensual. Some jurisdictions, for example, limit incarcerated people’s phone access if they refuse to enroll in the voice recognition system, while others enroll incarcerated people without their knowledge. Once the data exists, they note, it could potentially be used by other agencies, without any say from the public.

Attackers Can Track Kids’ Locations Via Connected Watches

Over the last year of looking at kids’ GPS tracking watches, we have found some staggering issues. With these devices, it almost seems that having multiple security issues is the new normal.

While parents and guardians may get a feeling of security from using these devices, our testing and research shows it’s just that, a “feeling”.

A couple of years ago we bought and reviewed a number of smart kids tracker watches, including some Gator watches from TechSixtyFour.

After chatting to our friends at the Norwegian Consumer Council, who we know well through My Friend Cayla, we discovered they were working on exactly the same tech, by complete coincidence!

We decided to pause our project to avoid duplicating their efforts. Shortly after, the Norwegian Consumer Council published the excellent ‘WatchOut’ research, which demonstrated trivial access to kids’ GPS locations through vulnerable tracker watches, including the Gator.

It received plenty of press coverage and prompted several kids’ tracker watch makers to take swift action to secure their systems.

A year on, we decided to have a look at the Gator watch again to see how their security had improved as a result of their actions.
TL;DR

Guess what: a train wreck. Anyone could access the entire database, including real-time child locations, names, parents’ details and more. Nor was it just Gator watches: the same back end covered multiple brands and tens of thousands of watches.

The Gator web backend was passing the user level as a parameter. Changing that value to another number gave super admin access throughout the platform. The system failed to validate that the user had the appropriate permission to take admin control!

This means that an attacker could get full access to all account information and all watch information. They could view any user of the system and any device on the system, including its location. They could manipulate everything and even change users’ emails/passwords to lock them out of their watch.
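The underlying flaw is classic broken access control: the server trusted a client-supplied privilege field instead of deriving permissions from the authenticated session. A hypothetical sketch of the broken pattern and its fix (names and data are invented, not Gator's actual code):

```python
ROLES = {"alice": "parent", "bob": "parent", "carol": "admin"}
ALL_ACCOUNTS = ["alice", "bob", "carol"]

def list_accounts_vulnerable(request: dict):
    """BROKEN: trusts a client-supplied user level, as the Gator backend
    reportedly did. Any caller can simply claim to be an admin."""
    if request.get("user_level") == "admin":
        return ALL_ACCOUNTS
    return [request["session_user"]]

def list_accounts_fixed(request: dict):
    """FIXED: look the role up server-side from the authenticated session
    identity; client input never decides permissions."""
    user = request.get("session_user")
    role = ROLES.get(user, "none")
    if role == "admin":
        return ALL_ACCOUNTS
    return [user] if user in ROLES else []

# An attacker forges the privilege field on an ordinary parent account.
forged = {"session_user": "bob", "user_level": "admin"}
```

Against the vulnerable handler, the forged request enumerates every account on the platform; against the fixed one, bob sees only his own.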