Archives 2019

How Amazon’s Facial-Recognition Technology Is Supercharging Local Police

Deputies in this corner of western Oregon outside ultraliberal Portland used to track down criminals the old-fashioned way, faxing caught-on-camera images of a suspect around the office in hope that someone might recognize the face. Then, in late 2017, the Washington County Sheriff’s Office became the first law enforcement agency in the country known to use Amazon’s artificial-intelligence tool Rekognition, transforming this thicket of forests and suburbs into a public testing ground for a new wave of experimental police surveillance techniques. Almost overnight, deputies saw their investigative powers supercharged, allowing them to scan for matches of a suspect’s face across more than 300,000 mug shots taken at the county jail since 2001. A grainy picture of someone’s face — captured by a security camera, a social media account or a deputy’s smartphone — can quickly become a link to their identity, including their name, family and address.

More than 1,000 facial-recognition searches were logged last year, said deputies, who sometimes used the results to find a suspect’s Facebook page or visit their home… “Just like any of our investigative techniques, we don’t tell people how we catch them,” said Robert Rookhuyzen, a detective on the agency’s major crimes team who said he has run “several dozen” searches and found it helpful about 75% of the time. “We want them to keep guessing…”

But lawyers in Oregon caution against seeing the technology, as many do, as an inevitable step forward for policing; they frame the system not as a technical milestone but as a moral one: Is it OK to nab more bad guys if more good guys might get arrested, too? “People love to always say, ‘Hey, if it’s catching bad people, great, who cares,’ ” said Joshua Crowther, a chief deputy defender in Oregon, “until they’re on the other end.”

How Much Do Amazon Deliveries Contribute To Global Warming?

It’s no coincidence that the number of trucks on the road has doubled since Amazon launched in 1994. That’s a huge deal for the climate, as Vox reported last year in an article on the environmental impact of online shopping: “In 2016, transportation overtook power plants as the top producer of carbon dioxide emissions in the US for the first time since 1979. Nearly a quarter of the transportation footprint comes from medium- and heavy-duty trucks. And increasingly the impact is coming in what people in the world of supply-chain logistics call ‘the last mile,’ meaning the final stretch from a distribution center to a package’s destination. (The ‘last mile’ can in truth be a dozen miles or more.)”

The good news is that e-commerce has the potential to be less carbon-intensive than brick-and-mortar retail. As Anne Goodchild, director of the University of Washington’s Supply Chain Transportation and Logistics Center, told BuzzFeed News, delivery trucks emit “between 20% and 75% less carbon dioxide per customer on average than passenger vehicles driving to [stores].” But that’s only if online stores choose the delivery times themselves. That way, they can pack trucks full of goods and optimize their routes. “When customers choose,” Goodchild noted, “the carbon savings are significantly smaller.”

Thus, Amazon could significantly cut its carbon footprint by prioritizing shipping optimization over consumer convenience…. Americans will have to begin thinking of Amazon.com and other e-commerce sites not as on-demand delivery services for every little thing, but as stores that require just as much forethought as a trip to the mall did twenty years ago. And that might be too much to ask of the average consumer in the digital age. In which case, the government might have to step in.

Amazon’s biggest carbon impact comes from its AWS cloud servers, though by the end of 2018 the company had already converted 50% of that to renewable energy, according to the article. And more green efforts may be coming. “For the past eight years, Bezos has ignored requests from the nonprofit Carbon Disclosure Project, which collects the carbon footprint data of large corporations. But last month, he agreed to eventually make the company’s emissions data public. It’s expected later this year.”

The article also raises the possibility of a future with delivery drones powered by renewable energy. But it adds that until that day comes, expecting deliveries within 48 hours “is incompatible with solving global warming.”

Uber Admits It Wants To Take Riders Away From Public Transit

“Uber took down the taxi industry and now it wants a piece of public transit,” reports CNN, in an article shared by dryriver:
For years, as it aggressively entered new markets, Uber has maintained that it is a complement and ally of public transit. But that messaging changed earlier this month, when Uber released its S-1 ahead of its upcoming initial public offering. In the regulatory filing, Uber said its growth depends on better competing with public transportation, which it identifies as a $1 trillion market it can grab a share of over the long term. Uber, which lost $1.8 billion in 2018, said it offers incentives to drivers to scale up its network to attract riders away from personal vehicles and public transportation.

Transportation experts say that if Uber grabs a big chunk of its target market — 4.4 trillion passenger miles on public transportation in the 63 countries in which it operates — cities would grind to a halt, as there would literally be no space to move on streets….

Uber’s rival Lyft didn’t describe public transportation as a competitor in its S-1. But while the corporate messaging may differ, in practice the two companies behave much the same, experts say.

“Try to imagine the island of Manhattan, and everyone taking the subway being in a rideshare. It just doesn’t function….” said Christof Spieler, who teaches transportation at Rice University and wrote the book Trains, Buses, People. “It’s a world in which large cities essentially break down.”

And transportation consultant Jarrett Walker tells CNN that while it may make business sense for Uber and Lyft to pursue this strategy, “it may also be a strategy that’s destroying the world.”

‘They’re Basically Lying’ – Mental Health Apps Caught Secretly Sharing Data

“Free apps marketed to people with depression or who want to quit smoking are hemorrhaging user data to third parties like Facebook and Google — but often don’t admit it in their privacy policies, a new study reports…” writes The Verge.

“You don’t have to be a user of Facebook’s or Google’s services for them to have enough breadcrumbs to ID you,” warns Slashdot reader schwit1. From the article:
By intercepting the data transmissions, they discovered that 92 percent of the 36 apps shared the data with at least one third party — mostly Facebook- and Google-run services that help with marketing, advertising, or data analytics. (Facebook and Google did not immediately respond to requests for comment.) But about half of those apps didn’t disclose that third-party data sharing, for a few different reasons: nine apps didn’t have a privacy policy at all; five apps did but didn’t say the data would be shared this way; and three apps actively said that this kind of data sharing wouldn’t happen. Those last three are the ones that stood out to Steven Chan, a physician at Veterans Affairs Palo Alto Health Care System, who has collaborated with study author John Torous in the past but wasn’t involved in the new study. “They’re basically lying,” he says of the apps.

Part of the problem is the business model for free apps, the study authors write: since insurance might not pay for an app that helps users quit smoking, for example, the only ways for a free app developer to stay afloat are to sell subscriptions or to sell data. And if that app is branded as a wellness tool, the developers can skirt laws intended to keep medical information private.

A few apps even shared what The Verge calls “very sensitive information,” like self-reports about substance use and user names.

10% of Twitter Users Create 80% of the Tweets

In the United States, 10% of Twitter’s users create 80% of its tweets, according to a new study by the Pew Research Center:
The median user in the top 10% by tweet volume creates 138 tweets per month, “favorites” 70 posts per month, follows 456 accounts, and has 387 followers. By comparison, the median user in the bottom 90% of tweeters creates just two tweets per month, “favorites” one post per month, follows 74 accounts, and has 19 followers. And when asked to report how often they use the platform, fully 81% of these highly active tweeters say they do so every day; 47% of other Twitter users visit the platform with this regularity…

Twitter users also tend to have higher levels of household income and educational attainment relative to the general adult population. Some 42% of adult Twitter users have at least a bachelor’s degree — 11 percentage points higher than the overall share of the public with this level of education (31%). Similarly, the share of adult Twitter users reporting a household income above $75,000 is 9 percentage points greater than the same figure in the general population: 41% vs. 32%.

Hacker Can Monitor Cars And Kill Their Engines After Breaking Into GPS Tracking Apps

A hacker broke into thousands of accounts belonging to users of two GPS tracker apps, giving him the ability to monitor the locations of tens of thousands of vehicles and even turn off the engines for some of them while they were in motion. The hacker, who goes by the name L&M, told Motherboard he hacked into more than 7,000 iTrack accounts and more than 20,000 ProTrack accounts, two apps that companies use to monitor and manage fleets of vehicles through GPS tracking devices. The hacker was able to track vehicles in a handful of countries around the world, including South Africa, Morocco, India, and the Philippines. On some cars, the software has the capability of remotely turning off the engines of vehicles that are stopped or are traveling 12 miles per hour or slower, according to the manufacturer of certain GPS tracking devices.

By reverse engineering ProTrack and iTrack’s Android apps, L&M said he realized that all customers are given a default password of 123456 when they sign up. At that point, the hacker said he brute-forced ‘millions of usernames’ via the apps’ API. Then, he said he wrote a script to attempt to login using those usernames and the default password. This allowed him to automatically break into thousands of accounts that were using the default password and extract data from them.
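To make concrete why a universal default password is so dangerous, here is a minimal Python sketch of the kind of check a fleet operator could run against its own accounts. The endpoint URL, JSON fields, and account names are hypothetical placeholders, not ProTrack’s or iTrack’s actual API.

```python
# Hypothetical sketch: audit your own fleet accounts for the unchanged
# vendor default. The URL and request format are invented placeholders,
# NOT ProTrack's or iTrack's real API.
import requests

DEFAULT_PASSWORD = "123456"  # the default reportedly issued to every new account
LOGIN_URL = "https://tracker.example.com/api/login"  # placeholder endpoint

def still_uses_default(username: str) -> bool:
    """Return True if the account still accepts the vendor default password."""
    resp = requests.post(
        LOGIN_URL,
        json={"username": username, "password": DEFAULT_PASSWORD},
        timeout=10,
    )
    return resp.ok  # a 200 response would mean the default still works

for user in ["fleet_ops_01", "fleet_ops_02"]:  # your own account names
    if still_uses_default(user):
        print(f"{user}: still on the default password; change it immediately")
```

L&M’s attack was essentially this loop run over millions of enumerated usernames; the obvious mitigation is forcing a password change at first login.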

Applying For Your Next Job May Be an Automated Nightmare

If you think looking for a job is already daunting, anxiety-riddled, and unpleasant, just wait until the algorithms take over the hiring process. When they do, a newfangled ‘digital recruiter’ like VCV, which just received $1.7 million in early investment, hopes it will look something like this: First, a search bot will be used to scan CVs by the thousands, yours presumably among them. If it’s picked out of the haystack, you will be contacted by a chatbot. Over SMS, the bot will set an appointment for a phone interview, which will be conducted by an automated system enabled by voice recognition AI. Next, the system will ask you, the applicant, to record video responses to a set of predetermined interview questions. Finally, the program can use facial recognition and predictive analytics to complete the screening, algorithmically determining whether the nervousness, mood, and behavior patterns you exhibit make you a fit for the company. If you pass all that, then you will be recommended for an in-person job interview.
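The screening described above is a strict sequence of automated gates: fail one, and the later stages never run. As a rough illustration (the stage names, scores, and thresholds below are invented for this sketch, not VCV’s actual system):

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    cv_score: float            # assigned by the CV-scanning search bot
    phone_score: float = 0.0   # from the voice-recognition phone interview
    video_score: float = 0.0   # from facial-recognition analysis of video answers

# Hypothetical pass thresholds; a real system would tune these opaquely.
STAGES = [("cv_score", 0.6), ("phone_score", 0.5), ("video_score", 0.7)]

def screen(candidate: Candidate) -> bool:
    """Run the candidate through each automated gate in order."""
    for attr, threshold in STAGES:
        if getattr(candidate, attr) < threshold:
            return False   # rejected here; no later stage, no human review
    return True            # recommended for an in-person interview

print(screen(Candidate("A. Applicant", cv_score=0.8, phone_score=0.9, video_score=0.75)))
```

The structure itself shows what critics worry about: any single opaque stage can end a candidacy, with no record of which signal tripped it.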

[…] VCV, which did not respond to a request for comment, is far from alone here. A growing suite of startups is pitching AI-driven recruitment services, promising to save corporations millions of dollars throughout the hiring process by reducing overhead, to pluck more ideal candidates out of obscurity, and to reduce bias in the hiring process. Most offer little to no evidence of how they actually do so. VCV’s much-larger competitor, HireVue, which has raked in a staggering $93 million in funding and is backed by top-tier Silicon Valley venture capital firms like Sequoia, is hawking many of the same services. It counts 700 companies as its clients, including, it says, Urban Outfitters, Intel, Honeywell, and Unilever. AllyO, founded in 2015, which “utilizes deep workflow conversational AI to fully automate end to end recruiting workflow,” has $19 million in backing.

The Feds Are Dropping Child Porn Cases Instead of Revealing Their Surveillance Systems

The Department of Justice has been dismissing child pornography cases rather than reveal information about the software programs used as the basis for the charges. An array of cases suggests serious problems with the tech tools used by federal authorities. But the private entities who developed these tools won’t submit them for independent inspection or hand over information about how they work, their error rates, or other critical details. As a result, potentially innocent people are being smeared as pedophiles and prosecuted as child porn collectors, while potentially guilty people are going free so these companies can protect “trade secrets.” The situation illustrates some of the many problems that can arise from public-private partnerships in catching criminals and the secretive digital surveillance software they entail (software that’s being employed for far more than catching child predators).

With the child pornography cases, “the defendants are hardly the most sympathetic,” notes Tim Cushing at Techdirt. Yet that’s all the more reason why the government’s antics here are disturbing. Either the feds initially brought bad cases against people whom they just didn’t think would fight back, or they’re willing to let bad behavior go rather than face some public scrutiny. An extensive investigation by ProPublica “found more than a dozen cases since 2011 that were dismissed either because of challenges to the software’s findings, or the refusal by the government or the maker to share the computer programs with defense attorneys, or both,” writes Jack Gillum. Many more cases raised issues with the software as a defense. “Defense attorneys have long complained that the government’s secrecy claims may hamstring suspects seeking to prove that the software wrongly identified them,” notes Gillum. “But the growing success of their counterattack is also raising concerns that, by questioning the software used by investigators, some who trade in child pornography can avoid punishment.”

Scientists Have Developed a Brain Implant That Can Read People’s Minds

The team at the University of California, San Francisco says the technology is “exhilarating.” They add that their findings, published in the journal Nature, could help people when disease robs them of their ability to talk. The mind-reading technology works in two stages. First an electrode is implanted in the brain to pick up the electrical signals that maneuver the lips, tongue, voice box and jaw. Then powerful computing is used to simulate how the movements in the mouth and throat would form different sounds. This results in synthesized speech coming out of a “virtual vocal tract.”

Why Energy Is A Big And Rapidly Growing Problem For Data Centers

U.S. data centers use more than 90 billion kilowatt-hours of electricity a year, requiring roughly 34 giant (500-megawatt) coal-powered plants. Global data centers used roughly 416 terawatt-hours of electricity (4.16 × 10¹⁴ watt-hours), or about 3% of total global electricity, last year; that is nearly 40% more than the entire United Kingdom consumes. And this consumption will double every four years.
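The “34 giant coal plants” figure is rough arithmetic, and it only works if you assume the plants run at a realistic average utilization rather than full nameplate capacity. A back-of-the-envelope check (the ~60% capacity factor is my assumption; the article doesn’t state one):

```python
us_data_center_kwh = 90e9       # kWh per year, per the article
plant_capacity_kw = 500_000     # one "giant" 500-megawatt plant
hours_per_year = 8760
capacity_factor = 0.6           # assumed average utilization of a coal plant

kwh_per_plant_per_year = plant_capacity_kw * hours_per_year * capacity_factor
print(us_data_center_kwh / kwh_per_plant_per_year)  # ~34.2 plants
```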

Streaming video has already changed the game, but the explosion of artificial intelligence and internet-connected devices will change the entire landscape. AI is the future, and AI is hungry for processing power. IoT is projected to exceed 20 billion devices by 2020 (some analysts believe we will reach that number this year alone). Given there are currently 10 billion internet-connected devices, doubling that to 20 billion will require a massive build-out of data center infrastructure, and with it a massive increase in electricity consumption.

How on earth can we possibly build all the power plants required to supply electricity to twice as many data centers in the next four years? The simple answer is that we can’t.

Facial Recognition To Board a Plane

A boarding technology for travelers using JetBlue is causing controversy due to a social media thread on the airline’s use of facial recognition. Last week, traveler MacKenzie Fegan described her experience with the biometric technology in a social media post that got the attention of JetBlue’s official account. She began: “I just boarded an international @JetBlue flight. Instead of scanning my boarding pass or handing over my passport, I looked into a camera before being allowed down the jet bridge. Did facial recognition replace boarding passes, unbeknownst to me? Did I consent to this?” JetBlue was ready to offer Twitterized sympathy: “You’re able to opt out of this procedure, MacKenzie. Sorry if this made you feel uncomfortable.”

But once you start thinking about these things, your thoughts become darker. Fegan wanted to know how JetBlue knew what she looked like. JetBlue explained: “The information is provided by the United States Department of Homeland Security from existing holdings.” Fegan wondered by what right a private company suddenly had her biometric data. JetBlue insisted it doesn’t have access to the data. It’s “securely transmitted to the Customs and Border Protection database.” Fegan wanted to know how this could have possibly happened so quickly. Could it be that in just a few seconds her biometric data was whipped “securely” around government departments so that she would be allowed on the plane? JetBlue referred her to an article on the subject, which was a touch on the happy-PR side. Fegan was moved, but not positively, by the phrase “there is no pre-registration required.”

Google Home Calls the Police, Always Listening

According to ABC News, officers were called to a home outside Albuquerque, New Mexico this week when a Google Home called 911 and the operator heard a confrontation in the background. Police say that Eduardo Barros was house-sitting at the residence with his girlfriend and their daughter. Barros allegedly pulled a gun on his girlfriend when they got into an argument and asked her: “Did you call the sheriffs?” Google Home apparently heard “call the sheriffs,” and proceeded to call the sheriffs. A SWAT team arrived at the home and after negotiating for hours, they were able to take Barros into custody… “The unexpected use of this new technology to contact emergency services has possibly helped save a life,” Bernalillo County Sheriff Manuel Gonzales III said in a statement.

“It’s easy to imagine police getting tired of being called to citizens’ homes every time they watch the latest episode of Law and Order,” quips Gizmodo. But they also call the incident “a clear reminder that smart home devices are always listening.”


Our phones make us feel like social-media activists, but they’re actually turning us into bystanders

On April 9, 2017, a video of a man being dragged off a United Airlines flight was posted on the internet and went viral. But I don’t need to tell you that. Each of your most outspoken Facebook friends probably posted about the event, highlighting the aspects of it that best reinforced their worldview. The incident was covered all over American media and even sparked outrage in China.

The collective focus may have now moved on to its next source of outrage, but there was something that only a few people noticed in the moment: a plane full of quiet passengers. Other than one woman screaming, hardly anyone else on the plane seemed bothered enough by what was happening to raise a ruckus. This calm scene is a rather unlikely precursor to the uproar that unfolded hours later on Facebook and Twitter.

Instead of intervening in the assault, the passengers stoically took out their cameraphones and pointed them toward David Dao, whose body was dragged along the aisle of the airplane, glasses askew, face bloody, and belly exposed. Their immediate response was not to speak out against the outrageousness of what was going on, but to create an instant digital record of the incident.

The act of recording a violent event but staying silent is a modern manifestation of the bystander effect. The bystander effect occurs when people refrain from intervening in an emergency situation because there are other people around. Psychologists Bibb Latané and John Darley, who first demonstrated the bystander effect, attributed this phenomenon to two factors: a perceived diffusion of responsibility (thinking that someone else in the group will help) and social influence (where observers see the inaction of the group as evidence that there is no reason to intervene).

Our cameraphones may make us feel like social-media activists, but when we’re recording an event instead of intervening, we’re actually just real-world bystanders. There is a gulf of dissonance between what we publicly declare as our values—online or otherwise—and how we act.

In the past few years, there have been scores of videos depicting abuse that have been recorded and then disseminated online. In New Jersey in 2014, people watched and recorded as a woman was punched and kicked by a co-worker. (The only one who said anything was her 2-year-old child, who knew, naturally, to help.) In Philadelphia in 2016, a man was beating and punching a woman in the streets while an observer videotaped the event. Even without violence, the temptation to be a recording bystander prevails. Take the case of a 2013 fire in Pincourt, Canada, where observers recorded the house burning to the ground from all angles — but nobody called the fire department.

To prevent a culture of disembodied bystanders, we must learn to better assess the appropriate actions when we’re in a situation that demands immediate attention. In doing so, we hopefully transcend the idea that recording an event is a replacement for action.

Sam Gregory is a program director at WITNESS, a global organization that incorporates video technology into human-rights advocacy. The goal of Gregory’s primary project, Mobil-Eyes-Us, is to find ways to translate “co-presence” into action. “In these types of events, people do freeze,” Gregory says. “The goal is to get over the freeze reaction.”

Filming events doesn’t relinquish our moral responsibility to intervene, but Gregory believes it’s “a step up from the Kitty Genovese incident,” the infamous 1964 stabbing in Queens, New York, that 38 neighbors reportedly observed over half an hour without calling the police or stepping in to intervene. If those 38 people lived in an age of smartphones, you can safely bet what a large portion of them would be doing.

Gregory says the idea of his project is to develop “witnessing literacy”: a repertoire of actions people can take in order to prevent unethical witnessing. To that end, the WITNESS website has abundant resources and guides, from teaching observers how to capture and preserve video as evidence to how to protect your identity on YouTube. The organization has also produced a mini-guide to capturing ethical footage and a video showing how to share the United Airlines video in a way that would protect the victim, David Dao.

This said, documenting an event is only a viable contribution to a dire situation if the footage is then used in an ethical manner; it’s not the recording that matters, it’s what you do with it. For example, a video of an assault on your phone helps no one if it’s not formally filed with the police or uploaded to the internet in an effective, ethical manner. And with all that effort, wouldn’t it have been better to try to pipe up in the moment? (If all else fails, you might also try to sing, which is what one brave woman did to fend off a man harassing a woman on public transport.)

Viral videos that incite outrage and prod at our sense of justice demonstrate both the difficulty and necessity of acting in accordance with our values. We argue so much online about the actions of people who we do not know and will never meet, and this takes time away from looking at our own actions and preparing ourselves to act better in similar situations. As we thank the one woman on the plane who dared to speak up on the United flight, we should consider what else productive protest looks like so that each of us has a repertoire of counter-violent actions to take.

For now, those of us who wish to believe in a world where people look out for each other will have to take it upon ourselves to lead by example. We should learn how to translate our digital frustrations into analog action.

Microsoft Turned Down Facial-Recognition Sales over “Human Rights Concerns”

Microsoft recently rejected a California law enforcement agency’s request to install facial recognition technology in officers’ cars and body cameras due to human rights concerns, company President Brad Smith said on Tuesday. Microsoft concluded it would lead to innocent women and minorities being disproportionately held for questioning because the artificial intelligence has been trained on mostly white and male pictures. AI has more cases of mistaken identity with women and minorities, multiple research projects have found.

Smith explained the decisions as part of a commitment to human rights that he said was increasingly critical as rapid technological advances empower governments to conduct blanket surveillance, deploy autonomous weapons and take other steps that might prove impossible to reverse. Smith also said at a Stanford University conference that Microsoft had declined a deal to install facial recognition on cameras blanketing the capital city of an unnamed country that the nonprofit Freedom House had deemed not free. Smith said it would have suppressed freedom of assembly there.

On the other hand, Microsoft did agree to provide the technology to an American prison, after the company concluded that the environment would be limited and that it would improve safety inside the unnamed institution.

Chinese Companies Using GPS-Tracking Smartwatches To Monitor and Alert Street Cleaners

Street cleaners in parts of China are reportedly being forced to wear GPS-tracking smartwatches so employers can monitor how hard they work, sparking public outrage and concern over increasing mass surveillance across the country.

If the smartwatch detects a worker standing still for over 20 minutes, it sounds an alarm. “Add oil, add oil [work harder, work harder!],” the wristbands’ alarm says, several cleaners from the eastern city of Nanjing told Jiangsu Television earlier this month.
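Functionally, what the cleaners describe is a dead-man’s timer on movement. A minimal sketch of that logic (the 20-minute threshold comes from the report; the GPS-jitter cutoff and everything else is an assumed implementation, not the vendor’s code):

```python
import math

IDLE_LIMIT_S = 20 * 60    # 20 minutes, per the Jiangsu Television report
MOVE_THRESHOLD_M = 30.0   # assumed: GPS jitter below this counts as standing still

def distance_m(lat1, lon1, lat2, lon2):
    """Rough equirectangular distance in meters; adequate at city scale."""
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1) * math.cos(math.radians(lat1))
    return 6_371_000 * math.hypot(dlat, dlon)

def monitor(gps_fixes):
    """gps_fixes yields (unix_time, lat, lon); alarm after 20 idle minutes."""
    anchor_t = anchor_lat = anchor_lon = None
    for t, lat, lon in gps_fixes:
        if anchor_t is None or distance_m(anchor_lat, anchor_lon, lat, lon) > MOVE_THRESHOLD_M:
            anchor_t, anchor_lat, anchor_lon = t, lat, lon   # worker moved; reset timer
        elif t - anchor_t > IDLE_LIMIT_S:
            print("ALARM: add oil, add oil!")                # the reported voice prompt
            anchor_t = t                                     # avoid re-alarming on every fix
```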

The smartwatch not only tracks the cleaners’ locations but also reports their activity back to the company’s control room, where a big screen displays their locations as a cluster of red dots on a map.

“It knows everything,” an anonymous cleaner told a reporter in the Jiangsu Television report. “Supervisors will come if we don’t move after hearing the alarm.”

Following backlash, the company said it removed the alarm function from the smartwatch, but reports maintain the employees are still being required to wear the device so their location can be tracked.

The Chinese Government is already in the process of building a Social Credit System aimed at monitoring the behaviour of its 1.4 billion citizens with the help of an extensive network of CCTV cameras and facial recognition technology.

Senior researcher for Human Rights Watch China Maya Wang said the use of surveillance technology by the Government was sending private companies a message that it was “okay to [monitor] people”.

Tenants Outraged Over New York Landlord’s Plan To Install Facial Recognition Technology

A Brooklyn landlord plans to install facial recognition technology at the entrance of a 700-unit building, according to Gothamist, “raising alarm among tenants and housing rights attorneys about what they say is a far-reaching and egregious form of digital surveillance.”

[Last] Sunday, several tenants told Gothamist that, unbeknownst to them, their landlord, Nelson Management, had sought state approval in July 2018 to install a facial recognition system known as StoneLock. Under state rules, landlords of rent-regulated apartments built before 1974 must seek permission from the state’s Homes and Community Renewal (HCR) for any “modification in service.” Tenants at the two buildings, located at 249 Thomas S. Boyland Street and 216 Rockaway Avenue, said they began receiving notices about the system in the fall. According to its website, Kansas-based company StoneLock offers a “frictionless” entry system that collects biometric data based on facial features. “We don’t want to be tracked,” said Icemae Downes, a longtime tenant. “We are not animals. This is like tagging us through our faces because they can’t implant us with a chip.”

It is not clear how many New York City apartments are using facial scanning software or how such technology is being regulated. But in a sign of the times, the city’s Department of Housing Preservation and Development last June began marketing 107 affordable units at a new apartment complex in the South Bronx. Among the amenities listed was “State of the Art Facial Recognition Building Access….” Across the real estate industry, New York City landlords have increasingly been moving to keyless entry systems, citing convenience as well as a desire to offer enhanced security. Over the years, in response to appeals filed by tenants, HCR has ruled in favor of key fob and card entry systems, saying that such substitutions did not violate rent-stabilization and rent-control laws. But the latest technology has triggered even more concerns about the ethics of data collection….

Last month, the management company reached out to a group of tenants to assuage their concerns about StoneLock. But tenants said the presentation, if anything, only deepened their fears that they were being asked to submit to a technology that had very little research behind it.

“This was not something we asked for at any given time,” one tenant complained, while one of the attorneys representing the tenants said that, among other things, their landlord had “made no assurances to protect the data from being accessed by NYPD, ICE, or any other city, state, or federal agency.”

“Citing concerns over the potential for privacy and civil liberties violations, tenants at Brownsville’s Atlantic Plaza Towers filed an objection to the plan in January…”

We’re All Being Judged By a Secret ‘Trustworthiness’ Score

Nearly everything we buy, how we buy, and where we’re buying from is secretly fed into AI-powered verification services that help companies guard against credit-card and other forms of fraud, according to the Wall Street Journal.

More than 16,000 signals are analyzed by a service called Sift, which generates a “Sift score” ranging from 1 to 100. The score is used to flag devices, credit cards and accounts that a vendor may want to block based on a person or entity’s overall “trustworthiness” score, according to a company spokeswoman.

From the Sift website: “Each time we get an event, be it a page view or an API event, we extract features related to those events and compute the Sift Score. These features are then weighed based on fraud we’ve seen both on your site and within our global network, and determine a user’s Score. There are features that can negatively impact a Score as well as ones which have a positive impact.”
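Stripped of the proprietary details, what Sift describes is a weighted combination of per-event features clamped to a 1-100 scale. A toy version (the feature names, weights, base score, and scale direction are all invented here; Sift’s real model and its 16,000-plus signals are not public):

```python
# Invented example weights; positive raises the "trustworthiness" score,
# negative lowers it. Sift's actual features and weights are proprietary.
FEATURE_WEIGHTS = {
    "mismatched_billing_country": -22.0,
    "disposable_email_domain": -15.0,
    "account_age_over_1yr": +10.0,
    "previous_chargebacks": -30.0,
}

def sift_like_score(observed_features: set, base: float = 70.0) -> float:
    """Combine observed features into a clamped 1-100 score."""
    raw = base + sum(FEATURE_WEIGHTS.get(f, 0.0) for f in observed_features)
    return max(1.0, min(100.0, raw))

print(sift_like_score({"disposable_email_domain", "account_age_over_1yr"}))  # 65.0
```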

The system is similar to a credit score except there’s no way to find out your own Sift score.

Facebook is Demanding Some Users Share the Password For Their Outside Email Account

Just two weeks after admitting it stored hundreds of millions of its users’ own passwords insecurely, Facebook is demanding some users fork over the password for their outside email account as the price of admission to the social network. Facebook users are being interrupted by an interstitial demanding they provide the password for the email account they gave to Facebook when signing up. “To continue using Facebook, you’ll need to confirm your email,” the message demands. “Since you signed up with [email address], you can do that automatically …” A form below the message asked for the users’ “email password.”

“That’s beyond sketchy,” security consultant Jake Williams told the Daily Beast. “They should not be taking your password or handling your password in the background. If that’s what’s required to sign up with Facebook, you’re better off not being on Facebook.” In a statement emailed to the Daily Beast after this story published, Facebook reiterated its claim it doesn’t store the email passwords. But the company also announced it will end the practice altogether. “We understand the password verification option isn’t the best way to go about this, so we are going to stop offering it,” Facebook wrote. It’s not clear how widely the new measure was deployed, but in its statement Facebook said users retain the option of bypassing the password demand and activating their account through more conventional means, such as “a code sent to their phone or a link sent to their email.” Those options are presented to users who click on the words “Need help?” in one corner of the page.

Fake Cancerous Nodes in CT Scans, Created By Malware, Trick Radiologists

Researchers in Israel say they have developed malware to draw attention to serious security weaknesses in critical medical imaging equipment used for diagnosing conditions and the networks that transmit those images — vulnerabilities that could have potentially life-altering consequences if unaddressed. The malware they created would let attackers automatically add realistic, malignant-seeming growths to CT or MRI scans before radiologists and doctors examine them. Or it could remove real cancerous nodules and lesions without detection, leading to misdiagnosis and possibly a failure to treat patients who need critical and timely care.

Yisroel Mirsky, Yuval Elovici and two others at the Ben-Gurion University Cyber Security Research Center in Israel who created the malware say that attackers could target a presidential candidate or other politicians to trick them into believing they have a serious illness and cause them to withdraw from a race to seek treatment. The research isn’t theoretical. In a blind study the researchers conducted involving real CT lung scans, 70 of which were altered by their malware, they were able to trick three skilled radiologists into misdiagnosing conditions nearly every time. In the case of scans with fabricated cancerous nodules, the radiologists diagnosed cancer 99 percent of the time. In cases where the malware removed real cancerous nodules from scans, the radiologists said those patients were healthy 94 percent of the time.

FamilyTreeDNA Deputizes Itself, Starts Pitching DNA Matching Services To Law Enforcement

One DNA-matching company has decided it’s going to corner an under-served market: US law enforcement. FamilyTreeDNA — last seen here opening up its database to the FBI without informing its users first — is actively pitching its services to law enforcement.

FamilyTreeDNA sounds like it’s finally going to seek consent from its customers, but only after having abused their trust once and under the assumption they’re all going to play ball. While some DNA companies like 23andMe are insisting on at least a subpoena before handing over access to DNA database search results, other companies are staying quiet about law enforcement access or specifically targeting law enforcement agencies with ads promising to help them work through their cold case files.

Consent is great, but it’s never going to be complete consent, no matter how FamilyTreeDNA shapes the argument. As Elizabeth Joh points out at Slate, there’s a whole lot of people involved who will never be asked for their consent once a customer agrees to allow DNA-matching sites to hand over their samples to law enforcement.

[W]hen you volunteer your DNA sample, you’re volunteering your genetic family tree, without having asked your parents, siblings, cousins, and distant cousins if they agree. That upends the usual way we think about providing information to law enforcement. You can’t give the police lawful consent to search your third cousin’s house, even if your third cousin (who you may never have met) is suspected of having been involved in a serious crime. Why are we allowing a distant relative to grant police permission to your DNA?

There’s no informed consent happening here. Customers are being treated as data points law enforcement can peruse at its leisure. A customer who agrees to be a good citizen (by clicking OK on a submission box on a private company’s website) may learn later their sample was used to lock up a close relative. Some people will be fine with this outcome. Others may regret being the critical piece of evidence used to incarcerate one of their relatives.

Whatever the case is, very few companies are being upfront about the effects of opening up database access to law enforcement. FamilyTreeDNA is using a crime victim’s parent and the founder’s Team Blue sympathies to hustle its customers towards compliance. Users who don’t like this turn of events will likely find it far more difficult to remove their DNA from FamilyTreeDNA’s database than to simply hold their nose and become an unwilling part of this partnership.