Archives 17 April 2019

Tenants Outraged Over New York Landlord’s Plan To Install Facial Recognition Technology

A Brooklyn landlord plans to install facial recognition technology at the entrance of a 700-unit building, according to Gothamist, “raising alarm among tenants and housing rights attorneys about what they say is a far-reaching and egregious form of digital surveillance.”

[Last] Sunday, several tenants told Gothamist that, unbeknownst to them, their landlord, Nelson Management, had sought state approval in July 2018 to install a facial recognition system known as StoneLock. Under state rules, landlords of rent-regulated apartments built before 1974 must seek permission from the state’s Homes and Community Renewal (HCR) for any “modification in service.” Tenants at the two buildings, located at 249 Thomas S. Boyland Street and 216 Rockaway Avenue, said they began receiving notices about the system in the fall. According to its website, Kansas-based company StoneLock offers a “frictionless” entry system that collects biometric data based on facial features. “We don’t want to be tracked,” said Icemae Downes, a longtime tenant. “We are not animals. This is like tagging us through our faces because they can’t implant us with a chip.”

It is not clear how many New York City apartments are using facial scanning software or how such technology is being regulated. But in a sign of the times, the city’s Department of Housing Preservation and Development last June began marketing 107 affordable units at a new apartment complex in the South Bronx. Among the amenities listed was “State of the Art Facial Recognition Building Access….” Across the real estate industry, New York City landlords have increasingly been moving to keyless entry systems, citing convenience as well as a desire to offer enhanced security. Over the years, in response to appeals filed by tenants, HCR has ruled in favor of key fob and card entry systems, saying that such substitutions did not violate rent-stabilization and rent-control laws. But the latest technology has triggered even more concerns about the ethics of data collection….

Last month, the management company reached out to a group of tenants to assuage their concerns about StoneLock. But tenants said the presentation, if anything, only deepened their fears that they were being asked to submit to a technology that had very little research behind it.

“This was not something we asked for at any given time,” one tenant complained, while one of the attorneys representing the tenants said that, among other things, their landlord had “made no assurances to protect the data from being accessed by NYPD, ICE, or any other city, state, or federal agency.”

“Citing concerns over the potential for privacy and civil liberties violations, tenants at Brownsville’s Atlantic Plaza Towers filed an objection to the plan in January…”

We’re All Being Judged By a Secret ‘Trustworthiness’ Score

Nearly everything we buy, how we buy it, and where we buy it from is secretly fed into AI-powered verification services that help companies guard against credit-card and other forms of fraud, according to the Wall Street Journal.

More than 16,000 signals are analyzed by a service called Sift, which generates a “Sift score” ranging from 1 to 100. The score is used to flag devices, credit cards, and accounts that a vendor may want to block based on a person or entity’s overall “trustworthiness,” according to a company spokeswoman.

From the Sift website: “Each time we get an event, be it a page view or an API event, we extract features related to those events and compute the Sift Score. These features are then weighed based on fraud we’ve seen both on your site and within our global network, and determine a user’s Score. There are features that can negatively impact a Score as well as ones which have a positive impact.”

The system is similar to a credit score except there’s no way to find out your own Sift score.
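Sift’s actual model is proprietary, but the description above (per-event features, fraud-informed weights, positive and negative contributions) maps onto a familiar pattern. The sketch below is purely illustrative; the feature names, weights, and logistic squash are assumptions for demonstration, not Sift’s real signals or formula.

```python
# Illustrative only: a toy weighted-feature risk score in the spirit of the
# Sift description above. Feature names, weights, and the scoring formula are
# assumptions; Sift's real model analyzes more than 16,000 signals.
import math

# Hypothetical per-event features with hand-picked weights: positive weights
# push the score toward "risky", negative weights toward "trustworthy".
FEATURE_WEIGHTS = {
    "mismatched_billing_country": 2.5,
    "disposable_email_domain": 1.8,
    "failed_payment_attempts": 0.9,     # applied per attempt
    "account_age_days": -0.01,          # older accounts look less risky
    "verified_phone_number": -1.5,
}

def sift_like_score(events: dict) -> int:
    """Map weighted feature values through a logistic squash to a 1-100 score."""
    raw = sum(FEATURE_WEIGHTS[name] * value for name, value in events.items())
    probability = 1.0 / (1.0 + math.exp(-raw))   # squash to (0, 1)
    return max(1, min(100, round(probability * 100)))

# Example: a new account with a throwaway email and two failed charges.
print(sift_like_score({
    "mismatched_billing_country": 1,
    "disposable_email_domain": 1,
    "failed_payment_attempts": 2,
    "account_age_days": 3,
    "verified_phone_number": 0,
}))  # prints a high score, i.e. a user a vendor might block
```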

Facebook is Demanding Some Users Share the Password For Their Outside Email Account

Just two weeks after admitting it stored hundreds of millions of its users’ own passwords insecurely, Facebook is demanding some users fork over the password for their outside email account as the price of admission to the social network. Facebook users are being interrupted by an interstitial demanding they provide the password for the email account they gave to Facebook when signing up. “To continue using Facebook, you’ll need to confirm your email,” the message demands. “Since you signed up with [email address], you can do that automatically …” A form below the message asks for the user’s “email password.”

“That’s beyond sketchy,” security consultant Jake Williams told the Daily Beast. “They should not be taking your password or handling your password in the background. If that’s what’s required to sign up with Facebook, you’re better off not being on Facebook.” In a statement emailed to the Daily Beast after this story published, Facebook reiterated its claim it doesn’t store the email passwords. But the company also announced it will end the practice altogether. “We understand the password verification option isn’t the best way to go about this, so we are going to stop offering it,” Facebook wrote. It’s not clear how widely the new measure was deployed, but in its statement Facebook said users retain the option of bypassing the password demand and activating their account through more conventional means, such as “a code sent to their phone or a link sent to their email.” Those options are presented to users who click on the words “Need help?” in one corner of the page.

Fake Cancerous Nodes in CT Scans, Created By Malware, Trick Radiologists

Researchers in Israel say they have developed malware to draw attention to serious security weaknesses in critical medical imaging equipment used for diagnosing conditions and the networks that transmit those images — vulnerabilities that could have potentially life-altering consequences if unaddressed. The malware they created would let attackers automatically add realistic, malignant-seeming growths to CT or MRI scans before radiologists and doctors examine them. Or it could remove real cancerous nodules and lesions without detection, leading to misdiagnosis and possibly a failure to treat patients who need critical and timely care.

Yisroel Mirsky, Yuval Elovici and two others at the Ben-Gurion University Cyber Security Research Center in Israel who created the malware say that attackers could target a presidential candidate or other politicians to trick them into believing they have a serious illness and cause them to withdraw from a race to seek treatment. The research isn’t theoretical. In a blind study the researchers conducted involving real CT lung scans, 70 of which were altered by their malware, they were able to trick three skilled radiologists into misdiagnosing conditions nearly every time. In the case of scans with fabricated cancerous nodules, the radiologists diagnosed cancer 99 percent of the time. In cases where the malware removed real cancerous nodules from scans, the radiologists said those patients were healthy 94 percent of the time.

FamilyTreeDNA Deputizes Itself, Starts Pitching DNA Matching Services To Law Enforcement

One DNA-matching company has decided it’s going to corner an under-served market: US law enforcement. FamilyTreeDNA — last seen here opening up its database to the FBI without informing its users first — is actively pitching its services to law enforcement.

FamilyTreeDNA sounds like it’s finally going to seek consent from its customers, but only after having abused their trust once and under the assumption they’re all going to play ball. While some DNA companies like 23andMe are insisting on at least a subpoena before handing over access to DNA database search results, other companies are staying quiet about law enforcement access or specifically targeting law enforcement agencies with ads promising to help them work through their cold case files.

Consent is great, but it’s never going to be complete consent, no matter how FamilyTreeDNA shapes the argument. As Elizabeth Joh points out at Slate, there’s a whole lot of people involved who will never be asked for their consent once a customer agrees to allow DNA-matching sites to hand over their samples to law enforcement.

[W]hen you volunteer your DNA sample, you’re volunteering your genetic family tree, without having asked your parents, siblings, cousins, and distant cousins if they agree. That upends the usual way we think about providing information to law enforcement. You can’t give the police lawful consent to search your third cousin’s house, even if your third cousin (who you may never have met) is suspected of having been involved in a serious crime. Why are we allowing a distant relative to grant police permission to your DNA?

There’s no informed consent happening here. Customers are being treated as data points law enforcement can peruse at its leisure. A customer who agrees to be a good citizen (by clicking OK on a submission box on a private company’s website) may learn later their sample was used to lock up a close relative. Some people will be fine with this outcome. Others may regret being the critical piece of evidence used to incarcerate one of their relatives.

Whatever the case is, very few companies are being upfront about the effects of opening up database access to law enforcement. FamilyTreeDNA is using a crime victim’s parent and the founder’s Team Blue sympathies to hustle its customers towards compliance. Users who don’t like this turn of events will likely find it far more difficult to remove their DNA from FamilyTreeDNA’s database than to simply hold their nose and become a willing part of this partnership.

Airbnb Has a Hidden-Camera Problem

Airbnb’s rules allow cameras outdoors and in living rooms and common areas, but never in bathrooms or anywhere guests plan to sleep, including rooms with foldout beds. Starting in early 2018, Airbnb added another layer of disclosure: If hosts indicate they have cameras anywhere on their property, guests receive a pop-up informing them where the cameras are located and where they are aimed. To book the property, the guests must click “agree,” indicating that they’re aware of the cameras and consent to being filmed.

Of course, hosts have plenty of reason to train cameras on the homes they rent out to strangers. They can catch guests who attempt to steal, or who trash the place, or who initially say they’re traveling alone, then show up to a property with five people. A representative for Airbnb’s Trust & Safety communications department told me the company tries to filter out hosts who may attempt to surveil guests by matching them against sex-offender and felony databases. The company also uses risk scores to flag suspicious behavior, in addition to reviewing and booting hosts with consistently poor scores.

If a guest contacts Airbnb’s Trust & Safety team with a complaint about a camera, employees offer new accommodations if necessary and open an investigation into the host. […] But four guests who found cameras in their rentals told The Atlantic the company has inconsistently applied its own rules when investigating their claims, providing them with incorrect information and making recommendations that they say risked putting them in harm’s way. “There have been super terrible examples of privacy violations by AirBnB hosts, e.g., people have found cameras hidden in alarm clocks in their bedrooms,” wrote Jeff Bigham, a computer-science professor at Carnegie Mellon whose claim was initially denied after he reported cameras in his rental. “I feel like our experience is in some ways more insidious. If you find a truly hidden camera in your bedroom or bathroom, Airbnb will support you. If you find an undisclosed camera in the private living room, Airbnb will not support you.”

Facebook Ad Platform Could Be Inherently Discriminatory, Researchers Say

Researchers from Northeastern University, the University of Southern California, and tech accountability non-profit Upturn have released a paper that says Facebook’s ad delivery system itself can steer ads intended to be inclusive toward discrimination without explicit intent. “In a paper titled, ‘Discrimination through optimization: How Facebook’s ad delivery can lead to skewed outcomes,’ co-authors Muhammad Ali, Piotr Sapiezynski, Miranda Bogen, Aleksandra Korolova, Alan Mislove, and Aaron Rieke find that advertiser budgets and ad content affect ad delivery, skewing it along gender and racial lines even when neutral ad targeting settings are used,” reports The Register. From the report:
The researchers found that Facebook ads tend to be shown to men because women tend to click on ads more often, making them more expensive to reach through Facebook’s system. That divide becomes apparent when ad budgets are compared, because the ad budget affects ad distribution. As the paper explains, “the higher the daily budget, the smaller the fraction of men in the audience.” Such segregation may be appropriate and desirable for certain types of marketing pitches, but when applied to credit, employment and housing ads, the consequences can be problematic.
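The budget effect described above follows from simple delivery economics: if one audience costs more to reach, a small budget buys only the cheap impressions, while a larger budget spills over into the pricier ones. The toy simulation below is not the paper’s methodology; the inventory sizes and per-impression prices are invented purely to illustrate why a higher budget can yield a smaller fraction of men.

```python
# A toy illustration (not the study's actual mechanism or data) of how a bigger
# budget can shift delivery toward the more expensive audience. Assumed numbers:
# 1,000 impressions on men at $0.05 each and 1,000 on women at $0.10 each, with
# delivery greedily buying the cheapest impressions first.
def delivered_male_share(budget: float) -> float:
    """Return the fraction of delivered impressions shown to men."""
    inventory = [("man", 0.05)] * 1000 + [("woman", 0.10)] * 1000
    inventory.sort(key=lambda item: item[1])        # cheapest impressions first
    shown_to_men = shown_total = 0
    for gender, cost in inventory:
        if budget < cost:
            break
        budget -= cost
        shown_total += 1
        shown_to_men += gender == "man"
    return shown_to_men / shown_total if shown_total else 0.0

for budget in (10, 50, 100, 150):
    print(f"${budget:>3} budget -> {delivered_male_share(budget):.0%} men")
# Small budgets buy only the cheap impressions (all men); larger budgets exhaust
# that inventory and spill into the pricier audience, so the male share falls.
```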

Ad content — text and images — also has a strong effect on whether ads get shown to men or women, even when the bidding strategy is the same and gender-agnostic targeting is used. In particular, the researchers found images had a surprisingly large effect on ad delivery. Ad URL destination has some effect — an ad pointing to a bodybuilding site and an ad pointing to a cosmetics site had a baseline delivery distribution of 48 percent men and 40 percent men respectively. The addition of a title and headline doesn’t change that much. But once the researchers added an image to the ad, the distribution pattern changed, with the bodybuilding site ad reaching an audience that was 75 percent male and the cosmetics ad reaching an audience that was 90 percent female. According to the researchers, their tests suggest, “Facebook has an automated image classification mechanism in place that is used to steer different ads towards different subsets of the user population.”

“In terms of credit, employment and housing ads, the problem with this system is that it discriminates where it shouldn’t: Five ads for lumber industry jobs were delivered to an audience that was more than 90 percent men and more than 70 percent white; five ads for janitorial work were delivered to an audience that was more than 65 percent women and 75 percent black,” the report adds. “Housing ads also showed a racial skew.”

The latest findings come after years of criticism of Facebook’s ad system. Last month, Facebook announced changes to the platform intended to prevent advertisers from deploying unfair credit, employment and housing ads. One week later, the U.S. Department of Housing and Urban Development sued Facebook for violating the Fair Housing Act.

Mark Zuckerberg Leveraged Facebook User Data To Fight Rivals and Help Friends, Leaked Documents Show

Facebook CEO Mark Zuckerberg once considered making deals with third-party developers just to help him find out how much users’ data is worth, NBC News reported on Tuesday. The report, which cites 4,000 leaked pages of internal documents, shines a light on the way senior company executives viewed attaching a dollar sign to sensitive user data, despite Facebook’s public commitment to protect such information. From the report:

In the same week, Zuckerberg floated the idea of pursuing 100 deals with developers “as a path to figuring out the real market value” of Facebook user data and then “setting a public rate” for developers. “The goal here wouldn’t be the deals themselves, but that through the process of negotiating with them we’d learn what developers would actually pay (which might be different from what they’d say if we just asked them about the value), and then we’d be better informed on our path to set a public rate,” Zuckerberg wrote in a chat. Facebook told NBC News that it was exploring ways to build a sustainable business, but ultimately decided not to go forward with these plans.

Zuckerberg was unfazed by the potential privacy risks associated with Facebook’s data-sharing arrangements. “I’m generally skeptical that there is as much data leak strategic risk as you think,” he wrote in an email to Facebook executive Sam Lessin. “I think we leak info to developers but I just can’t think of any instances where that data has leaked from developer to developer and caused a real issue for us.”

The report also outlines how Facebook’s PR people tried to spin things. An excerpt:

In a March 2014 email discussing Zuckerberg’s keynote speech at Facebook’s F8 developer conference, where he was due to announce the removal of developers’ access to friends’ data, Jonny Thaw, a director of communications, wrote that it “may be a tough message for some developers as it may inhibit their growth.” “So one idea that came up today was potentially talking in the keynote about some of the trust changes we’re making on Facebook itself. So the message would be: ‘trust is really important to us — on Facebook, we’re doing A, B and C to help people control and understand what they’re sharing — and with platform apps we’re doing D, E and F.'” If that doesn’t work, he added, “we could announce some of Facebook’s trust initiatives in the run up to F8” to make the changes for developers “seem more natural.”