Resources

FTC Should Probe Payroll Data Deals By Brokers Like Equifax

For decades, companies such as Equifax have acquired employee work histories and compensation data from employers to help lenders, landlords, hiring managers and other customers perform background checks on individuals. But these big databases are vulnerable to theft and error, and workers are sometimes surprised to learn their records are included, according to privacy activists. Equifax said it follows all laws and welcomes additional voices in the industry. In the letter to the FTC, San Mateo, California-based startup Certree said that Equifax and Experian offer employers financial incentives, such as a share of their revenue, to gain exclusive access to payroll data. Equifax also has deals with payroll software vendors that help employers process paychecks. The letter describes the agreements as anticompetitive and potentially unlawful.


Private investigators call for people to contribute their DNA to public database

Last month DNA-based investigations also led to the arrest of the suspected murderer of two vacationers in 1987, and helped identify the remains in a 2001 suicide cold case. Emboldened by that breakthrough, a number of private investigators are spearheading a call for amateur genealogists to help solve other cold cases by contributing their own genetic information to the same public database. They say a larger array of genetic information would widen the pool to find criminals who have eluded capture. The idea is to get people to transfer profiles compiled by commercial genealogy sites such as Ancestry.com and 23andMe onto the smaller, public open-source database created in 2010, called GEDmatch. The commercial sites require authorities to obtain search warrants for the information; the public site does not.


Google and Facebook are watching our every move online

You may know that hidden trackers lurk on most websites you visit, soaking up your personal information. What you may not realize, though, is that 76 percent of websites now contain hidden Google trackers, and 24 percent have hidden Facebook trackers, according to the Princeton Web Transparency & Accountability Project. The next highest is Twitter, with 12 percent. It is likely that Google or Facebook is watching you on many of the sites you visit, in addition to tracking you when you use their products. As a result, these two companies have amassed huge data profiles on each person, which can include your interests, purchases, search, browsing and location history, and much more. They then make your sensitive data profile available for invasive targeted advertising that can follow you around the Internet.

So how do we move forward from here? Don’t be fooled by claims of self-regulation: any useful long-term reforms of Google and Facebook’s data privacy practices fundamentally oppose their core business models of hyper-targeted advertising based on ever more intrusive personal surveillance. Change must come from the outside. Unfortunately, we’ve seen relatively little from Washington. Congress and federal agencies need to take a fresh look at what can be done to curb these data monopolies. They first need to demand more algorithmic and privacy policy transparency, so people can truly understand the extent to which their personal information is being collected, processed and used by these companies. Only then can informed consent be possible. They also need to legislate that people own their own data, enabling real opt-outs. Finally, they need to restrict how data can be combined, including being more aggressive at blocking acquisitions that further consolidate data power, which will pave the way for more competition in digital advertising. Until we see such meaningful changes, consumers should vote with their feet.


Across the United States, police officers abuse confidential databases

“Police officers across the country misuse confidential law enforcement databases to get information on romantic partners, business associates, neighbors, journalists and others for reasons that have nothing to do with daily police work, an Associated Press investigation has found.
[…]In the most egregious cases, officers have used information to stalk or harass, or have tampered with or sold records they obtained.
[…]Unspecified discipline was imposed in more than 90 instances reviewed by AP. In many other cases, it wasn’t clear from the records if punishment was given at all. The number of violations was surely far higher since records provided were spotty at best, and many cases go unnoticed.

Among those punished: an Ohio officer who pleaded guilty to stalking an ex-girlfriend and who looked up information on her; a Michigan officer who looked up home addresses of women he found attractive; and two Miami-Dade officers who ran checks on a journalist after he aired unflattering stories about the department.

“It’s personal. It’s your address. It’s all your information, it’s your Social Security number, it’s everything about you,” said Alexis Dekany, the Ohio woman whose ex-boyfriend, a former Akron officer, pleaded guilty last year to stalking her. “And when they use it for ill purposes to commit crimes against you — to stalk you, to follow you, to harass you … it just becomes so dangerous.”

The misuse represents only a tiny fraction of the millions of daily database queries run legitimately during traffic stops, criminal investigations and routine police encounters. But the worst violations profoundly abuse systems that supply vital information on criminal suspects and law-abiding citizens alike. The unauthorized searches demonstrate how even old-fashioned policing tools are ripe for abuse, at a time when privacy concerns about law enforcement have focused mostly on more modern electronic technologies.”


FBI and NIST developing software to track and categorise people by their tattoos

“An Electronic Frontier Foundation (EFF) investigation just revealed an awfully Orwellian fact: the FBI is working with government researchers to develop advanced tattoo recognition technology. This would allow law enforcement to sort and identify people based on their tattoos to determine ‘affiliation to gangs, sub-cultures, religious or ritualistic beliefs, or political ideology.’”


Google AI has access to 1.6M people’s health records (UK)

“A document obtained by New Scientist reveals that the tech giant’s collaboration with the UK’s National Health Service goes far beyond what has been publicly announced. The document — a data-sharing agreement between Google-owned artificial intelligence company DeepMind and the Royal Free NHS Trust — gives the clearest picture yet of what the company is doing and what sensitive data it now has access to. The agreement gives DeepMind access to a wide range of healthcare data on the 1.6 million patients who pass through three London hospitals.

It includes logs of day-to-day hospital activity, such as records of the location and status of patients – as well as who visits them and when. The hospitals will also share the results of certain pathology and radiology tests.

As well as receiving this continuous stream of new data, DeepMind has access to the historical data that the Royal Free trust submits to the Secondary Uses Service (SUS) database – the NHS’s centralised record of all hospital treatments in the UK. This includes data from critical care and accident and emergency departments.

Google says it has no commercial plans for DeepMind’s work with Royal Free and that the current pilots are being done for free. But the data to which Royal Free is giving DeepMind access is hugely valuable. It may have to destroy its copy of the data when the agreement expires next year, but that gives ample time to mine it for health insights.”


How Big Data Creates False Confidence

“The general idea is to find datasets so enormous that they can reveal patterns invisible to conventional inquiry… But there’s a problem: It’s tempting to think that with such an incredible volume of data behind them, studies relying on big data couldn’t be wrong. But the bigness of the data can imbue the results with a false sense of certainty. Many of them are probably bogus — and the reasons why should give us pause about any research that blindly trusts big data.”

For example, Google’s database of scanned books represents 4% of all books ever published, but in this data set, “The Lord of the Rings gets no more influence than, say, Witchcraft Persecutions in Bavaria.” And the name Lanny appears to be one of the most common in early-20th century fiction — solely because Upton Sinclair published 11 different novels about a character named Lanny Budd.
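The Lanny Budd distortion is easy to reproduce in miniature. The sketch below uses invented toy data (not the actual Google Books corpus) to show how a naive per-novel count lets one prolific author dominate a name-frequency ranking, and how deduplicating by author changes the picture:

```python
from collections import Counter

# Toy corpus: (author, character names appearing in one novel).
# Illustrative data only -- Sinclair's 11 Lanny Budd novels vs. two
# hypothetical authors whose books each feature a "Maria".
corpus = [("Upton Sinclair", ["Lanny"])] * 11 + [
    ("Author A", ["Frodo"]),
    ("Author B", ["Maria"]),
    ("Author C", ["Maria"]),
]

# Naive count: every novel votes, so one prolific author dominates.
naive = Counter(name for _, names in corpus for name in names)

# Deduplicated count: each author contributes a given name at most once.
per_author = Counter(
    name for _, name in {(a, n) for a, names in corpus for n in names}
)

print(naive.most_common(1))      # [('Lanny', 11)] -- Sinclair alone
print(per_author.most_common(1)) # [('Maria', 2)]  -- after dedup
```

The same unweighted-sampling problem scales up: when the dataset is a convenience sample rather than a balanced one, sheer volume amplifies the skew instead of averaging it out.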

The problem seems to be skewed data and misinterpretation. (The article points to the failure of Google Flu Trends, which it turns out “was largely predicting winter”.) The article’s conclusion? “Rather than succumb to ‘big data hubris,’ the rest of us would do well to keep our sceptic hats on — even when someone points to billions of words.”
