Resources

Tenants Outraged Over New York Landlord’s Plan To Install Facial Recognition Technology

A Brooklyn landlord plans to install facial recognition technology at the entrance of a 700-unit building, according to Gothamist, “raising alarm among tenants and housing rights attorneys about what they say is a far-reaching and egregious form of digital surveillance.”

[Last] Sunday, several tenants told Gothamist that, unbeknownst to them, their landlord, Nelson Management, had sought state approval in July 2018 to install a facial recognition system known as StoneLock. Under state rules, landlords of rent-regulated apartments built before 1974 must seek permission from the state’s Homes and Community Renewal (HCR) for any “modification in service.” Tenants at the two buildings, located at 249 Thomas S. Boyland Street and 216 Rockaway Avenue, said they began receiving notices about the system in the fall. According to its website, Kansas-based company StoneLock offers a “frictionless” entry system that collects biometric data based on facial features. “We don’t want to be tracked,” said Icemae Downes, a longtime tenant. “We are not animals. This is like tagging us through our faces because they can’t implant us with a chip.”

It is not clear how many New York City apartments are using facial scanning software or how such technology is being regulated. But in a sign of the times, the city’s Department of Housing Preservation and Development last June began marketing 107 affordable units at a new apartment complex in the South Bronx. Among the amenities listed was “State of the Art Facial Recognition Building Access….” Across the real estate industry, New York City landlords have increasingly been moving to keyless entry systems, citing convenience as well as a desire to offer enhanced security. Over the years, in response to appeals filed by tenants, HCR has ruled in favor of key fob and card entry systems, saying that such substitutions did not violate rent-stabilization and rent-control laws. But the latest technology has triggered even more concerns about the ethics of data collection….

Last month, the management company reached out to a group of tenants to assuage their concerns about StoneLock. But tenants said the presentation, if anything, only deepened their fears that they were being asked to submit to a technology that had very little research behind it.

“This was not something we asked for at any given time,” one tenant complained, while one of the attorneys representing the tenants said that, among other things, their landlord had “made no assurances to protect the data from being accessed by NYPD, ICE, or any other city, state, or federal agency.”

“Citing concerns over the potential for privacy and civil liberties violations, tenants at Brownsville’s Atlantic Plaza Towers filed an objection to the plan in January…”

We’re All Being Judged By a Secret ‘Trustworthiness’ Score

Nearly everything we buy, how we buy, and where we’re buying from is secretly fed into AI-powered verification services that help companies guard against credit-card and other forms of fraud, according to the Wall Street Journal.

More than 16,000 signals are analyzed by a service called Sift, which generates a “Sift score” ranging from 1 to 100. The score is used to flag devices, credit cards and accounts that a vendor may want to block based on a person or entity’s overall “trustworthiness” score, according to a company spokeswoman.

From the Sift website: “Each time we get an event, be it a page view or an API event, we extract features related to those events and compute the Sift Score. These features are then weighed based on fraud we’ve seen both on your site and within our global network, and determine a user’s Score. There are features that can negatively impact a Score as well as ones which have a positive impact.”

The system is similar to a credit score except there’s no way to find out your own Sift score.
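
Sift does not publish its model, but the description above — per-event features, weighted by observed fraud, combined into a 1-to-100 score — matches a standard weighted-feature risk score. The sketch below illustrates the general shape of such a system; the feature names, weights, and logistic squashing are all invented for illustration, not Sift’s actual implementation.

```python
# Hypothetical sketch of a weighted-feature "trustworthiness" score.
# Feature names and weights are invented; real systems like Sift weigh
# thousands of signals learned from observed fraud.
import math

# Positive weights push the score toward "risky"; negative toward "safe".
WEIGHTS = {
    "mismatched_billing_country": 2.0,
    "disposable_email_domain": 1.5,
    "account_age_days": -0.01,        # older accounts look safer
    "failed_payments_last_week": 0.8,
}
BIAS = -3.0

def risk_score(features: dict) -> int:
    """Combine event features into a score from 1 (safe) to 100 (risky)."""
    z = BIAS + sum(WEIGHTS.get(name, 0.0) * value
                   for name, value in features.items())
    probability = 1.0 / (1.0 + math.exp(-z))   # squash to (0, 1)
    return max(1, round(probability * 100))

print(risk_score({"mismatched_billing_country": 1,
                  "disposable_email_domain": 1,
                  "account_age_days": 30,
                  "failed_payments_last_week": 2}))   # → 86
```

The opacity the article describes comes from exactly the parts this sketch makes explicit: which features exist, how they are weighted, and what threshold a vendor applies are all hidden from the person being scored.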

Facebook is Demanding Some Users Share the Password For Their Outside Email Account

Just two weeks after admitting it stored hundreds of millions of its users’ own passwords insecurely, Facebook is demanding some users fork over the password for their outside email account as the price of admission to the social network. Facebook users are being interrupted by an interstitial demanding they provide the password for the email account they gave to Facebook when signing up. “To continue using Facebook, you’ll need to confirm your email,” the message demands. “Since you signed up with [email address], you can do that automatically …” A form below the message asks for the users’ “email password.”

“That’s beyond sketchy,” security consultant Jake Williams told the Daily Beast. “They should not be taking your password or handling your password in the background. If that’s what’s required to sign up with Facebook, you’re better off not being on Facebook.” In a statement emailed to the Daily Beast after this story published, Facebook reiterated its claim it doesn’t store the email passwords. But the company also announced it will end the practice altogether. “We understand the password verification option isn’t the best way to go about this, so we are going to stop offering it,” Facebook wrote. It’s not clear how widely the new measure was deployed, but in its statement Facebook said users retain the option of bypassing the password demand and activating their account through more conventional means, such as “a code sent to their phone or a link sent to their email.” Those options are presented to users who click on the words “Need help?” in one corner of the page.

Fake Cancerous Nodes in CT Scans, Created By Malware, Trick Radiologists

Researchers in Israel say they have developed malware to draw attention to serious security weaknesses in critical medical imaging equipment used for diagnosing conditions and the networks that transmit those images — vulnerabilities that could have potentially life-altering consequences if unaddressed. The malware they created would let attackers automatically add realistic, malignant-seeming growths to CT or MRI scans before radiologists and doctors examine them. Or it could remove real cancerous nodules and lesions without detection, leading to misdiagnosis and possibly a failure to treat patients who need critical and timely care.

Yisroel Mirsky, Yuval Elovici and two others at the Ben-Gurion University Cyber Security Research Center in Israel who created the malware say that attackers could target a presidential candidate or other politicians to trick them into believing they have a serious illness and cause them to withdraw from a race to seek treatment. The research isn’t theoretical. In a blind study the researchers conducted involving real CT lung scans, 70 of which were altered by their malware, they were able to trick three skilled radiologists into misdiagnosing conditions nearly every time. In the case of scans with fabricated cancerous nodules, the radiologists diagnosed cancer 99 percent of the time. In cases where the malware removed real cancerous nodules from scans, the radiologists said those patients were healthy 94 percent of the time.

FamilyTreeDNA Deputizes Itself, Starts Pitching DNA Matching Services To Law Enforcement

One DNA-matching company has decided it’s going to corner an under-served market: US law enforcement. FamilyTreeDNA — last seen here opening up its database to the FBI without informing its users first — is actively pitching its services to law enforcement.

FamilyTreeDNA sounds like it’s finally going to seek consent from its customers, but only after having abused their trust once and under the assumption they’re all going to play ball. While some DNA companies like 23andMe are insisting on at least a subpoena before handing over access to DNA database search results, other companies are staying quiet about law enforcement access or specifically targeting law enforcement agencies with ads promising to help them work through their cold case files.

Consent is great, but it’s never going to be complete consent, no matter how FamilyTreeDNA shapes the argument. As Elizabeth Joh points out at Slate, there’s a whole lot of people involved who will never be asked for their consent once a customer agrees to allow DNA-matching sites to hand over their samples to law enforcement.

[W]hen you volunteer your DNA sample, you’re volunteering your genetic family tree, without having asked your parents, siblings, cousins, and distant cousins if they agree. That upends the usual way we think about providing information to law enforcement. You can’t give the police lawful consent to search your third cousin’s house, even if your third cousin (who you may never have met) is suspected of having been involved in a serious crime. Why are we allowing a distant relative to grant police permission to your DNA?

There’s no informed consent happening here. Customers are being treated as data points law enforcement can peruse at its leisure. A customer who agrees to be a good citizen (by clicking OK on a submission box on a private company’s website) may learn later their sample was used to lock up a close relative. Some people will be fine with this outcome. Others may regret being the critical piece of evidence used to incarcerate one of their relatives.

Whatever the case is, very few companies are being upfront about the effects of opening up database access to law enforcement. FamilyTreeDNA is using a crime victim’s parent and the founder’s Team Blue sympathies to hustle its customers towards compliance. Users who don’t like this turn of events will likely find it far more difficult to remove their DNA from FamilyTreeDNA’s database than to simply hold their nose and become a willing part of this partnership.

Airbnb Has a Hidden-Camera Problem

Airbnb’s rules allow cameras outdoors and in living rooms and common areas, but never in bathrooms or anywhere guests plan to sleep, including rooms with foldout beds. Starting in early 2018, Airbnb added another layer of disclosure: If hosts indicate they have cameras anywhere on their property, guests receive a pop-up informing them where the cameras are located and where they are aimed. To book the property, the guests must click “agree,” indicating that they’re aware of the cameras and consent to being filmed.

Of course, hosts have plenty of reason to train cameras on the homes they rent out to strangers. They can catch guests who attempt to steal, or who trash the place, or who initially say they’re traveling alone, then show up to a property with five people. A representative for Airbnb’s Trust & Safety communications department told me the company tries to filter out hosts who may attempt to surveil guests by matching them against sex-offender and felony databases. The company also uses risk scores to flag suspicious behavior, in addition to reviewing and booting hosts with consistently poor scores.

If a guest contacts Airbnb’s Trust & Safety team with a complaint about a camera, employees offer new accommodations if necessary and open an investigation into the host. […] But four guests who found cameras in their rentals told The Atlantic the company has inconsistently applied its own rules when investigating their claims, providing them with incorrect information and making recommendations that they say risked putting them in harm’s way. “There have been super terrible examples of privacy violations by AirBnB hosts, e.g., people have found cameras hidden in alarm clocks in their bedrooms,” wrote Jeff Bigham, a computer-science professor at Carnegie Mellon whose claim was initially denied after he reported cameras in his rental. “I feel like our experience is in some ways more insidious. If you find a truly hidden camera in your bedroom or bathroom, Airbnb will support you. If you find an undisclosed camera in the private living room, Airbnb will not support you.”

Facebook Ad Platform Could Be Inherently Discriminatory, Researchers Say

Researchers from Northeastern University, the University of Southern California, and tech accountability non-profit Upturn have released a paper that says Facebook’s ad delivery system itself can steer ads intended to be inclusive toward discrimination without explicit intent. “In a paper titled, ‘Discrimination through optimization: How Facebook’s ad delivery can lead to skewed outcomes,’ co-authors Muhammad Ali, Piotr Sapiezynski, Miranda Bogen, Aleksandra Korolova, Alan Mislove, and Aaron Rieke find that advertiser budgets and ad content affect ad delivery, skewing it along gender and racial lines even when neutral ad targeting settings are used,” reports The Register. From the report:
The researchers found that Facebook ads tend to be shown to men because women tend to click on ads more often, making them more expensive to reach through Facebook’s system. That divide becomes apparent when ad budgets are compared, because the ad budget affects ad distribution. As the paper explains, “the higher the daily budget, the smaller the fraction of men in the audience.” Such segregation may be appropriate and desirable for certain types of marketing pitches, but when applied to credit, employment and housing ads, the consequences can be problematic.

Ad content — text and images — also has a strong effect on whether ads get shown to men or women, even when the bidding strategy is the same and gender-agnostic targeting is used. In particular, the researchers found images had a surprisingly large effect on ad delivery. Ad URL destination has some effect — an ad pointing to a bodybuilding site and an ad pointing to a cosmetics site had a baseline delivery distribution of 48 percent men and 40 percent men respectively. The addition of a title and headline doesn’t change that much. But once the researchers added an image to the ad, the distribution pattern changed, with the bodybuilding site ad reaching an audience that was 75 percent male and the cosmetics ad reaching an audience that was 90 percent female. According to the researchers, their tests suggest, “Facebook has an automated image classification mechanism in place that is used to steer different ads towards different subsets of the user population.”

“In terms of credit, employment and housing ads, the problem with this system is that it discriminates where it shouldn’t: Five ads for lumber industry jobs were delivered to an audience that was more than 90 percent men and more than 70 percent white; five ads for janitorial work were delivered to an audience that was more than 65 percent women and 75 percent black,” the report adds. “Housing ads also showed a racial skew.”

The latest findings come after years of criticism of Facebook’s ad system. Last month, Facebook announced changes to the platform intended to prevent advertisers from deploying unfair credit, employment and housing ads. One week later, the U.S. Department of Housing and Urban Development sued Facebook for violating the Fair Housing Act.

Mark Zuckerberg Leveraged Facebook User Data To Fight Rivals and Help Friends, Leaked Documents Show

Facebook CEO Mark Zuckerberg once considered making deals with third-party developers just to help him find out how much users’ data is worth, NBC News reported on Tuesday. The report, which cites 4,000 leaked pages of internal documents, shines a light on the way senior company executives viewed attaching a dollar sign to sensitive user data, despite Facebook’s public commitment to protect such information. From the report:

In the same week, Zuckerberg floated the idea of pursuing 100 deals with developers “as a path to figuring out the real market value” of Facebook user data and then “setting a public rate” for developers. “The goal here wouldn’t be the deals themselves, but that through the process of negotiating with them we’d learn what developers would actually pay (which might be different from what they’d say if we just asked them about the value), and then we’d be better informed on our path to set a public rate,” Zuckerberg wrote in a chat. Facebook told NBC News that it was exploring ways to build a sustainable business, but ultimately decided not to go forward with these plans.

Zuckerberg was unfazed by the potential privacy risks associated with Facebook’s data-sharing arrangements. “I’m generally skeptical that there is as much data leak strategic risk as you think,” he wrote in the email to Lessin. “I think we leak info to developers but I just can’t think of any instances where that data has leaked from developer to developer and caused a real issue for us.”

The report also outlines how PR people at Facebook try to spin things. An excerpt: In a March 2014 email discussing Zuckerberg’s keynote speech at the event, where he was due to announce the removal of developers’ access to friends’ data, Jonny Thaw, a director of communications, wrote that it “may be a tough message for some developers as it may inhibit their growth.” “So one idea that came up today was potentially talking in the keynote about some of the trust changes we’re making on Facebook itself. So the message would be: ‘trust is really important to us — on Facebook, we’re doing A, B and C to help people control and understand what they’re sharing — and with platform apps we’re doing D, E and F.'” If that doesn’t work, he added, “we could announce some of Facebook’s trust initiatives in the run up to F8” to make the changes for developers “seem more natural.”

Facebook Should Notify Users Who Interact With Fake Police ‘Sock Puppet’ Accounts

Despite Facebook’s repeated warnings that law enforcement is required to use “authentic identities” on the social media platform, cops continue to create fake and impersonator accounts to secretly spy on users. By pretending to be someone else, cops are able to sneak past the privacy walls users put up and bypass legal requirements that might require a warrant to obtain that same information.

EFF is now calling on Facebook to escalate the matter with law enforcement in the United States. Facebook should take the following actions to address the proliferation of fake/impersonator Facebook accounts operated by law enforcement, in addition to suspending the fake accounts. As part of its regular transparency reports, Facebook should publish data on the number of fake/impersonator law enforcement accounts identified, what agencies they belonged to, and what action was taken. When a fake/impersonator account is identified, Facebook should alert the users and groups that interacted with the account whether directly or indirectly.

The article also suggests updating Facebook’s Terms of Service to explicitly prohibit fake/impersonator profiles by law enforcement groups, and updating Facebook pages of law enforcement groups to inform visitors when those groups have a written policy allowing fake/impersonator law enforcement accounts. “These four changes are relatively light lifts that would enhance transparency and establish real consequences for agencies that deliberately violate the rules…”

“Facebook’s practice of taking down these individual accounts when they learn about them from the press (or from EFF) is insufficient to deter what we believe is a much larger iceberg beneath the surface.”

Are Phone-Addicted Drivers More Dangerous Than Drunk Drivers?

After crunching data on 4.5 billion miles of driving, road-safety analytics company Zendrive concludes there’s a new threat which just last year claimed the lives of 6,227 pedestrians: drivers “under the influence of a smartphone.”

The study points out that drunk driving fatalities peak after midnight, while distracted driving happens all day, concluding that distracted driving is now a bigger threat than drunk driving.

“Phone addicts are the new drunk drivers,” Zendrive concludes bluntly in its annual distracted driving study. The big picture: The continued increase in unsafe driving comes despite stricter laws in many states, as well as years of massive ad campaigns from groups ranging from cell phone carriers to orthopedic surgeons. “They hide in plain sight, blatantly staring at their phones while driving down the road,” Zendrive says in the study.

And it’s a growing problem. Over just the past year, Zendrive, which analyzes driver behavior for fleets and insurers, said the number of hardcore phone addicts doubled, now accounting for one in 12 drivers. If the current trend continues, that number will be one in five by 2022.

The report concludes drivers are 10 percent more distracted this year than last — and that phone addicts have their eyes off the road for 28% of their drive. Yet when asked to describe their driving, 93% of phone addicts said they believed they were “safe” — or “extremely safe” — drivers.

One even insisted that they never texted while driving, “but I like to FaceTime my friends while driving since it makes time go by faster.”

How DNA Companies Like Ancestry And 23andMe Are Using Your Genetic Data

In the past couple of years, genetic-testing companies like Ancestry and 23andMe have become popular for finding out family history and DNA information. More than 12 million Americans have sent their DNA to companies like 23andMe and AncestryDNA to be analyzed. The spit-in-tube DNA you send in is anonymized and used for genetic drug research, and both sites have been selling the data to third-party companies, like P&G Beauty and Pepto-Bismol, and universities, like The University of Chicago, for some time. In fact, just last week the major pharmaceutical giant GlaxoSmithKline announced a $300 million deal with 23andMe. Under the deal, GlaxoSmithKline can use the stored samples and genetic data to investigate new drugs to develop and to inform how patients are selected for clinical trials. Both 23andMe and Ancestry say they will not share genetic information without a court order, though customers sometimes share the information online themselves in order to find lost relatives or biological parents.

Amazon and Google Fight Bill That Prohibits Secretly Recording You

On Wednesday, the Illinois State Senate passed the Keep Internet Devices Safe Act, a bill that would ban manufacturers of devices that can record audio from doing so remotely without disclosing it to the customer. But after lobbying from trade associations that represent the interests of Google, Amazon — makers of the microphone-enabled Google Home and Alexa smart speakers, respectively — and Microsoft, among other companies, the interests of big tech won out… In its current, neutered form, the bill provides exclusive authority to the Attorney General to enforce the Act, which means regular citizens won’t be able to bring forward a case regarding tech giants recording them in their homes.

Ars Technica notes the move comes after Amazon admitted thousands of their employees listen to Alexa recordings — “something not mentioned in Echo’s terms of service or FAQ pages.”

Vice points out that sometimes those recordings are shared “even after users opt out of having their data used in the program.”

Police Are Using Google’s Location Data From ‘Hundreds of Millions’ of Phones

Police have used information from the search giant’s Sensorvault database to aid in criminal cases across the country, according to a report Saturday by The New York Times. The database has detailed location records from hundreds of millions of phones around the world, the report said. It’s meant to collect information on the users of Google’s products so the company can better target them with ads, and see how effective those ads are. But police have been tapping into the database to help find missing pieces in investigations.

Law enforcement can get “geofence” warrants seeking location data. Those kinds of requests have spiked in the last six months, and the company has received as many as 180 requests in one week, according to the report…. For geofence warrants, police carve out a specific area and time period, and Google can gather information from Sensorvault about the devices that were present during that window, according to the report. The information is anonymous, but police can analyze it and narrow it down to a few devices they think might be relevant to the investigation. Then Google reveals those users’ names and other data, according to the Times…
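
The two-stage process the Times describes — anonymized records filtered by place and time, then narrowed to a few devices of interest — can be illustrated with a simple sketch. This is not Google’s actual system; the record layout, identifiers, and query function are invented to show the shape of a geofence query over stored location pings.

```python
# Illustrative sketch of a "geofence" query: filter anonymized location
# records to those inside a bounding box during a time window, then count
# hits per device. All data and field names here are invented.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Ping:
    device_id: str    # anonymized identifier, per the reported process
    lat: float
    lon: float
    timestamp: int    # unix seconds

def geofence_query(pings, lat_min, lat_max, lon_min, lon_max, t_start, t_end):
    """Return anonymized devices seen inside the box during the window."""
    return Counter(
        p.device_id for p in pings
        if lat_min <= p.lat <= lat_max
        and lon_min <= p.lon <= lon_max
        and t_start <= p.timestamp <= t_end
    )

pings = [
    Ping("anon-1", 40.7128, -74.0060, 1000),   # inside box, inside window
    Ping("anon-1", 40.7130, -74.0055, 1200),   # inside box, inside window
    Ping("anon-2", 40.7129, -74.0061, 5000),   # inside box, after window
    Ping("anon-3", 41.0000, -74.0060, 1100),   # outside box
]
print(geofence_query(pings, 40.70, 40.72, -74.01, -74.00, 900, 2000))
```

Only after this anonymized narrowing step does Google, per the report, attach names to the handful of device identifiers police flag as relevant.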

[T]he AP reported last year that Google tracked people’s location even after they’d turned off location-sharing on their phones.

Google’s data dates back “nearly a decade,” the Times reports — though in a statement, Google’s director of law enforcement and information security insisted “We vigorously protect the privacy of our users while supporting the important work of law enforcement.” (The Times also interviewed a man who was arrested and jailed for a week last year based partly on Google’s data — before eventually being released after the police found a more likely suspect.)

More jails replace in-person visits with awful video chat products

After April 15, inmates at the Adult Detention Center in Lowndes County, Mississippi will no longer be allowed to visit with family members face to face. Newton County, Missouri, implemented an in-person visitor ban last month. The Allen County Jail in Indiana phased out in-person visits earlier this year. All three changes are part of a nationwide trend toward “video visitation” services. Instead of seeing their loved ones face to face, inmates are increasingly limited to talking to them through video terminals. Most jails give family members a choice between using video terminals at the jail — which are free — or paying fees to make calls from home using a PC or mobile device.

Even some advocates of the change admit that it has downsides for inmates and their families. Ryan Rickert, jail administrator at the Lowndes County Adult Detention Center, acknowledged to The Commercial Dispatch that inmates were disappointed they wouldn’t get to see family members anymore. Advocates of this approach point to an upside for families: they can now make video calls to loved ones from home instead of having to physically travel to the jail. These services are ludicrously expensive. Video calls cost 40 cents per minute in Newton County, 50 cents per minute in Lowndes County, and $10 per call in Allen County. Outside of prison, of course, video calls on Skype or FaceTime are free.

These “visitation” services are often “grainy and jerky, periodically freezing up altogether,” reports Ars. As for why so many jails are adopting them, it has a lot to do with money. “In-person visits are labor intensive. Prison guards need to escort inmates to and from visitation rooms, supervise the visits, and in some cases pat down visitors for contraband. In contrast, video terminals can be installed inside each cell block, minimizing the need to move inmates around the jail.” The video-visitation systems also directly generate revenue for jails.

Two-Thirds of Consumers Don’t Expect Google To Track Them the Way It Does

Last summer, an AP investigation found that Google’s location tracking remains on even if you turn it off in Google Maps, Search, and other apps. Research from Vanderbilt professor Douglas Schmidt found that Google engages in “passive” data collection, often without the user’s knowledge. His research also showed that Google utilizes data collected from other sources to de-anonymize existing user data.

Digital Content Next, the trade association of online publishers, surveyed a nationally representative sample to find out what people expect from Google — and, as with a similar study we conducted last year about Facebook, the results were unsettling.

Our findings show that many of Google’s data practices deviate from consumer expectations. We find it even more significant that consumers’ expectations are at an all-time low even after 2018, a year in which awareness around consumer privacy reached peak heights.

The results of the study are consistent with our Facebook study: People don’t want surveillance advertising. A majority of consumers indicated they don’t expect to be tracked across Google’s services, let alone be tracked across the web in order to make ads more targeted.

There was only one question where a small majority of respondents felt that Google was acting according to their expectations. That was about Google merging data from search queries with other data it collects on its own services. Respondents also said, though only by a small majority, that they don’t expect Google to connect that data back to the user’s personal account. Google began doing both of these in 2016 after previously promising it wouldn’t.

‘Highly confidential’ documents reveal Facebook used VPN app to track competitors

Internal documents, made public as part of a cache of documents released by the UK Select Committee investigating Facebook, show just how close an eye the social network was keeping on competitors like WhatsApp and Snapchat, both of which became acquisition targets.

The documents, which are labeled “highly confidential,” show slides from an internal presentation in 2013 that compares Facebook’s reach to competing apps, including WhatsApp and Snapchat.

Facebook’s presentation relied on data from Onavo, the virtual private network (VPN) service which Facebook also acquired several months later. Facebook’s use of Onavo, which has been likened to “corporate spyware,” has itself been controversial.

The company was forced to remove Onavo from Apple’s App Store earlier this year after Apple changed its developer guidelines to prohibit apps from collecting data about which other services are installed on its users’ phones.

Survey: 25% Think AI Could Make Better Policy Decisions Than Politicians

A new survey on Europeans’ attitudes towards technology found that a quarter of people would prefer it if policy decisions were made by artificial intelligence instead of politicians. The Center for the Governance of Change at Spain’s IE University polled 2,500 adults in the UK, Spain, Germany, France, Ireland, Italy, and the Netherlands in January. The results reflect an intense anxiety about the changes brought about by advances in tech, with more than half of respondents worried that jobs would be replaced by robots, and 70% saying that unchecked technological innovation could do more harm than good to society. Respondents also expressed concerns about the impact of digital relationships replacing human contact as more people spend time online. Perhaps most interestingly, a quarter of the respondents said they would prefer AI to guide decisions about governance of their country over politicians.

Around the world, citizens have expressed a growing disillusionment with democracy, and an increased skepticism that their voice has an impact on political decisions. But algorithmic decisions aren’t a problem-free solution: they can be embedded with the prejudice and bias of their programmers or manipulated to achieve specific outcomes, making the results as potentially problematic as the ones made by humans. The study also found that respondents expected governments to reduce the disruption that technology might have on their lives with regulation, limits on automation, and support for people affected by job losses. This “highlights the paradox in which we live,” the authors wrote. “People are disillusioned with governments, yet at the same time ask them to tackle the societal and economic negative effects that emerging technologies might have.”

You Will Soon Be Able To Pay Your Subway Fare With Your Face in China

China has led the world in adoption of smartphone-based mobile payments to the point where the central bank had to remind merchants not to discriminate against cash. The next phase of development may be to pay with your face.

In Shenzhen, the local subway operator is testing various advanced technologies backed by the ultra-fast 5G network, including facial-recognition ticketing.

At the Futian station, instead of presenting a ticket or scanning a QR code on their smartphones, commuters can scan their faces on a tablet-sized screen mounted on the entrance gate and have the fare automatically deducted from their linked accounts.

Currently in trial mode, the facial-recognition ticketing service could in future help improve the efficiency of handling the up to 5 million rides per day on the city’s subway network. Shenzhen Metro has not said when it will roll out the facial payment service.

The introduction of facial recognition-and-payment services to the public transit system marks another step by China toward integrating facial recognition and other artificial intelligence-based technology into everyday life in the world’s most populous nation.

Consumers can already pay for fried chicken at KFC in China with its “Smile to Pay” facial recognition system, first introduced at an outlet in Hangzhou in January 2017.

“To use facial ticketing in the future, passengers will also need preregistration of their facial information and link their payment methods to their accounts, just like them making payments at the KFC restaurant,” said a staff member at the Futian station’s demonstration area in Shenzhen.

Chinese cities are among the most digitally savvy and cashless in the world, with about 583 million people using their smartphones to make payments in China last year, according to the China Internet Network Information Center. Nearly 68 per cent of China’s internet users used a mobile wallet for their offline payments.

Paris street to ‘shut out Instagrammers’

Instagrammers love the colorful homes in Paris’s Rue Cremieux. But the street’s residents have now had enough and are calling on the city council to restrict access at certain times.

Residents have asked the city council to provide a gate that can be closed at peak times — evenings, weekends and at sunrise and sunset, when good light attracts people searching for a perfect Instagram picture. One resident told radio station France Info: “We sit down to eat and just outside we have people taking photos, rappers who take two hours to film a video right beneath the window, or bachelorette parties who scream for an hour. Frankly, it’s exhausting.”

Hard Disks Can Be Turned Into Listening Devices

Researchers from the University of Michigan and Zhejiang University in China have found that hard disk drives can be turned into listening devices, using malicious firmware and signal processing calculations.

For a study titled “Hard Drive of Hearing: Disks that Eavesdrop with a Synthesized Microphone,” computer scientists Andrew Kwong, Wenyuan Xu, and Kevin Fu describe an acoustic side-channel that can be accessed by measuring how sound waves make hard disk parts vibrate. “Our research demonstrates that the mechanical components in magnetic hard disk drives behave as microphones with sufficient precision to extract and parse human speech,” their paper, obtained by The Register ahead of its formal publication, stated. “These unintentional microphones sense speech with high enough fidelity for the Shazam service to recognize a song recorded through the hard drive.”

The team’s research work, scheduled to be presented in May at the 2019 IEEE Symposium on Security and Privacy, explores how it’s possible to alter HDD firmware to measure the offset of a disk drive’s read/write head from the center of the track it’s seeking. The offset is referred to as the Positional Error Signal (PES) and hard drives monitor this signal to keep the read/write head in the optimal position for reading and writing data. PES measurements must be very fine because drive heads can only be off by a few nanometers before data errors arise. The sensitivity of the gear, however, means human speech is sufficient to move the needle, so to speak. Vibrations from HDD parts don’t yield particularly good sound, but with digital filtering techniques, human speech can be discerned, given the right conditions.
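The filtering step described above can be illustrated with a toy sketch: the PES stream is dominated by slow servo drift, while speech shows up as faint higher-frequency vibration riding on top of it, so a high-pass filter separates the two. The code below is a minimal illustration of that idea, not the researchers’ actual pipeline; the sample rate, frequencies, and amplitudes are invented for demonstration.

```python
import math

def highpass_fir(cutoff_hz, fs, num_taps=101):
    """Windowed-sinc low-pass taps, then spectral inversion -> high-pass."""
    fc = cutoff_hz / fs
    m = num_taps - 1
    taps = []
    for n in range(num_taps):
        k = n - m / 2
        # sinc kernel (low-pass), with the k == 0 singularity handled directly
        h = 2 * fc if k == 0 else math.sin(2 * math.pi * fc * k) / (math.pi * k)
        # Hamming window to tame ripple
        h *= 0.54 - 0.46 * math.cos(2 * math.pi * n / m)
        taps.append(h)
    # spectral inversion turns the low-pass into a high-pass
    taps = [-t for t in taps]
    taps[m // 2] += 1.0
    return taps

def apply_fir(signal, taps):
    """Direct-form convolution of the signal with the filter taps."""
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, t in enumerate(taps):
            if 0 <= i - j < len(signal):
                acc += t * signal[i - j]
        out.append(acc)
    return out

# Synthetic stand-in for a PES stream (all numbers are made up):
# a large, slow head-drift component plus a tiny "speech" vibration.
fs = 8000                                                 # samples/sec (assumed)
drift = [0.5 * math.sin(2 * math.pi * 5 * i / fs) for i in range(fs)]
speech = [0.01 * math.sin(2 * math.pi * 800 * i / fs) for i in range(fs)]
pes = [d + s for d, s in zip(drift, speech)]

recovered = apply_fir(pes, highpass_fir(100, fs))
# In the steady-state middle of the output, the 5 Hz drift is strongly
# attenuated while the 800 Hz "speech" component passes through.
peak = max(abs(x) for x in recovered[1000:7000])
```

The same separation is what makes real speech recoverable: the servo loop’s legitimate signal lives at low frequencies, so whatever survives the high-pass filter is, to a first approximation, acoustic vibration.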

“Flashing HDD firmware is a prerequisite for the snooping […] because the ATA protocol does not expose the PES,” The Register reports. “To exfiltrate captured data, the three boffins suggest transmitting it over the internet by modifying Linux operating system files to create a reverse shell with root privileges or storing it to disk for physical recovery at a later date.”

The researchers note that this technique does require a fairly loud conversation to take place near the eavesdropping hard drive. “To record comprehensible speech, the conversation had to reach 85 dBA, with 75 dBA being the low threshold for capturing muffled sound,” the report says. “To get Shazam to identify recordings captured through a hard drive, the source file had to be played at 90 dBA. Which is pretty loud. Like lawn mower or food blender loud.”