Resources

Google is Giving Data To Police Based on Search Keywords, Court Docs Show

There are few things as revealing as a person’s search history, and police typically need a warrant on a known suspect to demand that sensitive information. But a recently unsealed court document shows that investigators can also request such data in reverse: asking Google to disclose everyone who searched a keyword, rather than seeking information on a known suspect.

In August, police arrested Michael Williams, an associate of singer and accused sex offender R. Kelly, for allegedly setting fire to a witness’ car in Florida. Investigators linked Williams to the arson, as well as witness tampering, after sending a search warrant to Google that requested information on “users who had searched the address of the residence close in time to the arson.”

The July court filing was unsealed on Tuesday, and Detroit News reporter Robert Snell tweeted about it. Court documents showed that Google provided the IP addresses of people who searched for the arson victim’s address, which investigators tied to a phone number belonging to Williams. Police then used the phone number records to pinpoint the location of Williams’ device near the arson, according to court documents. The original warrant sent to Google is still sealed, but the report provides another example of a growing trend of data requests to the search engine giant in which investigators demand data on a large group of users rather than on a single known suspect. “This ‘keyword warrant’ evades the Fourth Amendment checks on police surveillance,” said Albert Fox Cahn, the executive director of the Surveillance Technology Oversight Project. “When a court authorizes a data dump of every person who searched for a specific term or address, it’s likely unconstitutional.”
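
To make the inversion concrete, here is a minimal sketch contrasting the two request shapes. The log schema, field names, and data are invented for illustration; nothing here reflects Google’s actual systems.

```python
# Hypothetical sketch of a conventional per-suspect lookup versus a
# "keyword warrant." Schema, names, and data are invented; this is not
# Google's actual system.
from datetime import datetime

# Each record: (timestamp, source_ip, query_text)
SEARCH_LOG = [
    (datetime(2020, 7, 10, 21, 5),  "203.0.113.7",  "123 main st directions"),
    (datetime(2020, 7, 10, 23, 40), "198.51.100.2", "weather tomorrow"),
    (datetime(2020, 7, 11, 0, 15),  "203.0.113.7",  "123 main st"),
]

def conventional_request(log, suspect_ip):
    """Forward request: start from a known suspect, return their queries."""
    return [query for _, ip, query in log if ip == suspect_ip]

def keyword_warrant(log, keyword, start, end):
    """Reverse request: start from a search term and a time window, and
    return every IP that searched it -- known suspect or not."""
    return {ip for ts, ip, query in log
            if keyword in query and start <= ts <= end}

print(keyword_warrant(SEARCH_LOG, "123 main st",
                      datetime(2020, 7, 10), datetime(2020, 7, 12)))
# -> {'203.0.113.7'}: everyone who searched the address, swept in at once
```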

Police Are Using Facial Recognition For Minor Crimes, ‘Because They Can’

In a recent court filing, the New York police department noted that it’s turned to facial recognition in more than 22,000 cases in the last three years. “Even though the NYPD claims facial recognition is only used for serious crimes, the numbers tell a different story,” said Albert Fox Cahn, the executive director of the Surveillance Technology Oversight Project. “As facial recognition continues to grow, it’s being routinely deployed for everything from shoplifting to graffiti.”

Asked for comment, an NYPD spokeswoman pointed to a 2019 opinion article by police commissioner James O’Neill titled “How Facial Recognition Makes You Safer.” In the piece, O’Neill talked about how facial recognition had been used to make arrests in murder, robbery and rape cases, but he didn’t disclose how often it was used for low-level crimes. The department’s facial recognition policy, established in March, allows the technology to be used for any crime, no matter the severity. Without any limits, police have more frequently used the technology for petty thefts than for dangerous crimes, privacy advocates say. Before Amazon put a moratorium on police use of its Rekognition face-identifying software, the program was used in a $12 shoplifting case in Oregon in 2018…

Without any limits, police can use facial recognition however they please, and in many cases, arrested suspects don’t even know that the flawed technology was used… Attorneys representing protesters in Miami didn’t know that police used facial recognition in their arrests, according to an NBC Miami report. Police used facial recognition software in a $50 drug dealing case in Florida in 2016 but made no mention of it in the arrest report.

The article also notes that as recently as this Tuesday, Hoan Ton-That, the CEO of facial recognition startup Clearview AI, “said it isn’t the company’s responsibility to make sure its technology is being properly used by its thousands of police partners.

“Though the company has its own guidelines, Ton-That said Clearview AI wouldn’t be enforcing them, saying that ‘it’s not our job to set the policy as a tech company…'”

Police Complain About Surveillance “Going Dark,” But They Are Breaking Into Encrypted Phones Far More Than Previously Known

In a new Apple ad, a man on a city bus announces he has just shopped for divorce lawyers. Then a woman recites her credit card number through a megaphone in a park. “Some things shouldn’t be shared,” the ad says, “iPhone helps keep it that way.” Apple has built complex encryption into iPhones and made the devices’ security central to its marketing pitch. That, in turn, has angered law enforcement. Officials from the F.B.I. director to rural sheriffs have argued that encrypted phones stifle their work to catch and convict dangerous criminals. They have tried to force Apple and Google to unlock suspects’ phones, but the companies say they can’t. In response, the authorities have put their own marketing spin on the problem. Law enforcement, they say, is “going dark.” Yet new data reveals a twist to the encryption debate that undercuts both sides: Law enforcement officials across the nation regularly break into encrypted smartphones.

That is because at least 2,000 law enforcement agencies in all 50 states now have tools to get into locked, encrypted phones and extract their data, according to years of public records collected in a report by Upturn, a Washington nonprofit that investigates how the police use technology. At least 49 of the 50 largest U.S. police departments have the tools, according to the records, as do the police and sheriffs in small towns and counties across the country, including Buckeye, Ariz.; Shaker Heights, Ohio; and Walla Walla, Wash. And local law enforcement agencies that don’t have such tools can often send a locked phone to a state or federal crime lab that does.

With more tools in their arsenal, the authorities have used them in an increasing range of cases, from homicides and rapes to drugs and shoplifting, according to the records, which were reviewed by The New York Times. Upturn researchers said the records suggested that U.S. authorities had searched hundreds of thousands of phones over the past five years. While the existence of such tools has been known for some time, the records show that the authorities break into phones far more than previously understood — and that smartphones, with their vast troves of personal data, are not as impenetrable as Apple and Google have advertised. While many in law enforcement have argued that smartphones are often a roadblock to investigations, the findings indicate that they are instead one of the most important tools for prosecutions.

European Police Malware Could Harvest GPS, Messages, Passwords, More

The malware that French law enforcement deployed en masse onto devices on Encrochat, a large encrypted phone network built on Android phones, had the capability to harvest “all data stored within the device,” which was expected to include chat messages, geolocation data, usernames, passwords, and more, according to a document obtained by Motherboard. From the report:
The document adds more specifics around the law enforcement hack and subsequent takedown of Encrochat earlier this year. Organized crime groups across Europe and the rest of the world heavily used the network before its seizure, in many cases to facilitate large scale drug trafficking. The operation is one of, if not the, largest law enforcement mass hacking operation to date, with investigators obtaining more than a hundred million encrypted messages. “The NCA has been collaborating with the Gendarmerie on Encrochat for over 18 months, as the servers are hosted in France. The ultimate objective of this collaboration has been to identify and exploit any vulnerability in the service to obtain content,” the document reads, referring to both the UK’s National Crime Agency and one of the national police forces of France. As well as the geolocation, chat messages, and passwords, the law enforcement malware also told infected Encrochat devices to provide a list of WiFi access points near the device, the document reads.

Police Across Canada Are Using Predictive Policing Algorithms, Report Finds

Police across Canada are increasingly using controversial algorithms to predict where crimes could occur, who might go missing, and to help them determine where they should patrol, despite fundamental human rights concerns, a new report has found.

To Surveil and Predict: A Human Rights Analysis of Algorithmic Policing in Canada is the result of a joint investigation by the University of Toronto’s International Human Rights Program (IHRP) and Citizen Lab. It details how, in the words of the report’s authors, “law enforcement agencies across Canada have started to use, procure, develop, or test a variety of algorithmic policing methods,” with potentially dire consequences for civil liberties, privacy, and other Charter rights.

The report breaks down how police are using or considering the use of algorithms for several purposes including predictive policing, which uses historical police data to predict where crime will occur in the future. Right now in Canada, police are using algorithms to analyze data about individuals to predict who might go missing, with the goal of one day using the technology in other areas of the criminal justice system. Some police services are using algorithms to automate the mass collection and analysis of public data, including social media posts, and to apply facial recognition to existing mugshot databases for investigative purposes. “Algorithmic policing technologies are present or under consideration throughout Canada in the forms of both predictive policing and algorithmic surveillance tools,” the report reads.
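
To make the place-based variety concrete: at its simplest, predictive policing is a recency-weighted tally of past incidents per map cell. The sketch below is a deliberately minimal stand-in; the weighting scheme and data are assumptions for illustration, not any vendor’s model or anything drawn from the report.

```python
# Minimal sketch of place-based predictive policing: score grid cells by
# recency-weighted historical incident counts and patrol the top cells.
# The decay weighting is an illustrative assumption, not a vendor's model.
from collections import defaultdict

def hotspot_scores(incidents, decay=0.9):
    """incidents: list of (cell_id, days_ago). Recent events count more."""
    scores = defaultdict(float)
    for cell, days_ago in incidents:
        scores[cell] += decay ** days_ago
    return scores

incidents = [("A3", 1), ("A3", 2), ("B1", 30), ("A3", 40), ("C2", 3)]
ranked = sorted(hotspot_scores(incidents).items(), key=lambda kv: -kv[1])
print(ranked[:2])  # the cells such a model would send patrols to

# The feedback-loop concern raised by critics: patrols sent to top-scoring
# cells record more incidents there, which raises those cells' future
# scores regardless of the true underlying crime rate.
```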

Clearview AI CEO Says ‘Over 2,400 Police Agencies’ Are Using Its Facial Recognition Software

More than 2,400 police agencies have entered contracts with Clearview AI, a controversial facial recognition firm, according to comments made by Clearview AI CEO Hoan Ton-That in an interview with Jason Calacanis on YouTube.

The hour-long interview references an investigation by The New York Times published in January, which detailed how Clearview AI scraped data from sites including Facebook, YouTube, and Venmo to build its database. The scale of that database and the methods used to construct it were already controversial before the summer of protests against police violence. “It’s an honor to be at the center of the debate now and talk about privacy,” Ton-That says in the interview, going on to call the Times investigation “actually extremely fair.” “Since then, there’s been a lot of controversy, but fundamentally, this is such a great tool for society,” Ton-That says.

Ton-That also gave a few more details on how the business runs. Clearview is paid depending on how many licenses a client adds, among other factors, but Ton-That describes the licenses as “pretty inexpensive, compared to what’s come previously.” He ballparks Clearview’s fees at $2,000 a year for each officer with access. According to Ton-That, Clearview AI is primarily used by detectives.

Clearview AI was used at least once to identify protesters in Miami.

Facial recognition was also used by the New York Police Department to arrest an activist during the Black Lives Matter uprising this summer. According to a BuzzFeed News report in February, NYPD was at the time the largest user of Clearview AI, with more than 30 officers holding Clearview accounts.

Police in Several US Cities Used Facial Recognition To Hunt Down and Arrest Protesters

Law enforcement agencies in several cities, including New York and Miami, have reportedly been using controversial facial recognition software to track down and arrest individuals who allegedly participated in criminal activity during Black Lives Matter protests, months after the fact. Miami police used Clearview AI to identify and arrest a woman for allegedly throwing a rock at a police officer during a May protest, local NBC affiliate WTVJ reported this week…

Similar reports have surfaced from around the country in recent weeks. Police in Columbia, South Carolina, and the surrounding county likewise used facial recognition, though from a different vendor, to arrest several protesters after the fact, according to local paper The State. Investigators in Philadelphia also used facial recognition software, from a third vendor, to identify protesters from photos posted to Instagram, The Philadelphia Inquirer reported.

Surveillance Company Banjo Used a Secret Company and Fake Apps To Scrape Social Media

Banjo, an artificial intelligence firm that works with police, used a shadow company to create an array of Android and iOS apps that looked innocuous but were specifically designed to secretly scrape social media. The news signifies an abuse of data by a government contractor, with Banjo going far beyond what companies that scrape social networks usually do. Banjo created a secret company named Pink Unicorn Labs, according to three former Banjo employees, with two of them adding that the company developed the apps. This was done to avoid detection by social networks, two of the former employees said.

Three of the apps created by Pink Unicorn Labs were called “One Direction Fan App,” “EDM Fan App,” and “Formula Racing App.” Motherboard found these three apps on archive sites and downloaded and analyzed them, as did an independent expert. The apps — which appear to have been originally compiled in 2015 and were on the Play Store until 2016 according to Google — outwardly had no connection to Banjo, but an analysis of their code indicates connections to the company. This aspect of Banjo’s operation has some similarities with the Cambridge Analytica scandal, with multiple sources comparing the two incidents. […] The company has not publicly explained how it specifically scrapes social media apps. Motherboard found the apps developed by Pink Unicorn Labs included code mentioning signing into Facebook, Twitter, Instagram, Russian social media app VK, FourSquare, Google Plus, and Chinese social network Sina Weibo.
The apps could have scraped social media “by sending the saved login token to a server for Banjo to use later, or by using the app itself to scrape information,” reports Motherboard, noting that it’s not entirely clear which method Banjo used. “Motherboard found that the apps when opened made web requests to the domain ‘pulapi.com,’ likely referring to Pink Unicorn Labs, but the site that would provide a response to the app is currently down.”

Last weekend, Motherboard reported that Banjo signed a $20.7 million contract with Utah in 2019 that granted the company access to the state’s traffic, CCTV, and public safety cameras. “Banjo promises to combine that input with a range of other data such as satellites and social media posts to create a system that it claims alerts law enforcement of crimes or events in real-time.”

Cops Around the World Are Using An Outlandish Mind-Reading Tool

ProPublica reports that dozens of state and local agencies have purchased “SCAN” training from a company called LSI for reviewing a suspect’s written statements — even though there’s no scientific evidence that it works.

Local, state and federal agencies from the Louisville Metro Police Department to the Michigan State Police to the U.S. State Department have paid for SCAN training. The LSI website lists 417 agencies nationwide, from small-town police departments to the military, that have been trained in SCAN — and that list isn’t comprehensive, because additional ones show up in procurement databases and in public records obtained by ProPublica. Other training recipients include law enforcement agencies in Australia, Belgium, Canada, Israel, Mexico, the Netherlands, Singapore, South Africa and the United Kingdom, among others…

For Avinoam Sapir, the creator of SCAN, sifting truth from deception is as simple as one, two, three.

1. Give the subject a pen and paper.
2. Ask the subject to write down his/her version of what happened.
3. Analyze the statement and solve the case.

Those steps appear on the website for Sapir’s company, based in Phoenix. “SCAN Unlocks the Mystery!” the homepage says, alongside a logo of a question mark stamped on someone’s brain. The site includes dozens of testimonials with no names attached. “Since January when I first attended your course, everybody I meet just walks up to me and confesses!” one says. Another testimonial says “The Army finally got its money’s worth…” SCAN saves time, the site says. It saves money. Police can fax a questionnaire to a hundred people at once, the site says. Those hundred people can fax it back “and then, in less than an hour, the investigator will be able to review the questionnaires and solve the case.”

In 2009 the U.S. government created a special interagency task force, drawing on the FBI, CIA and the U.S. Department of Defense, to review scientific studies and independently investigate which interrogation techniques worked. “When all 12 SCAN criteria were used in a laboratory study, SCAN did not distinguish truth-tellers from liars above the level of chance,” the review said, also challenging two of the method’s 12 criteria. “Both gaps in memory and spontaneous corrections have been shown to be indicators of truth, contrary to what is claimed by SCAN.”
In a footnote, the review identified three specific agencies that use SCAN: the FBI, CIA and U.S. Army military intelligence, which falls under the Department of Defense…

In 2016, the same year the federal task force released its review of interrogation techniques, four scholars published a study on SCAN in the journal Frontiers in Psychology. The authors — three from the Netherlands, one from England — noted that there had been only four prior studies in peer-reviewed journals on SCAN’s effectiveness. Each of those studies (in 1996, 2012, 2014 and 2015) concluded that SCAN failed to help discriminate between truthful and fabricated statements. The 2016 study found the same. Raters trained in SCAN evaluated 234 statements — 117 true, 117 false. Their results in trying to separate fact from fiction were about the same as chance….
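
For readers wondering what “about the same as chance” means for that setup (234 statements, half true and half false), the sketch below simulates a rater who guesses at random. It is a baseline illustration under that assumption only; the study’s actual statistical analysis may differ.

```python
# What "about the same as chance" means for 234 statements (117 true,
# 117 false): a random guesser is right about half the time, and the
# binomial spread shows how far from 50% guessing alone can land.
import random

random.seed(1)
N = 234  # statements per rater, per the 2016 study's setup

def random_rater_accuracies(trials=10_000):
    accs = []
    for _ in range(trials):
        correct = sum(random.random() < 0.5 for _ in range(N))
        accs.append(correct / N)
    return sorted(accs)

accs = random_rater_accuracies()
mean = sum(accs) / len(accs)
lo, hi = accs[len(accs) // 40], accs[-(len(accs) // 40)]
print(f"mean {mean:.3f}, middle 95% roughly [{lo:.2f}, {hi:.2f}]")
# ~0.50 mean, roughly 0.44-0.56: trained raters scoring in this band
# are statistically indistinguishable from guessing.
```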

Steven Drizin, a Northwestern University law professor who specializes in wrongful convictions, said SCAN and assorted other lie-detection tools suffer from “over-claim syndrome” — big claims made without scientific grounding. Asked why police would trust such tools, Drizin said: “A lot has to do with hubris — a belief on the part of police officers that they can tell when someone is lying to them with a high degree of accuracy. These tools play in to that belief and confirm that belief.”

SCAN’s creator “declined to be interviewed for this story,” but ProPublica spoke to some users of the technique. Travis Marsh, the head of an Indiana sheriff’s department, has been using the tool for nearly two decades, while acknowledging that he can’t explain how it works. “It really is, for lack of a better term, a faith-based system because you can’t see behind the curtain.”

ProPublica also reports that “Years ago his wife left a note saying she and the kids were off doing one thing, whereas Marsh, analyzing her writing, could tell they had actually gone shopping. His wife has not left him another note in at least 15 years…”

US Police Already Using ‘Spot’ Robot From Boston Dynamics In the Real World

Massachusetts State Police (MSP) has been quietly testing ways to use the four-legged Boston Dynamics robot known as Spot, according to new documents obtained by the American Civil Liberties Union of Massachusetts. And while Spot isn’t equipped with a weapon just yet, the documents provide a terrifying peek at our RoboCop future.

The Spot robot, which was officially made available for lease to businesses last month, has been in use by MSP since at least April 2019 and has engaged in at least two police “incidents,” though it’s not clear what those incidents may have been. It’s also not clear whether the robots were being operated by a human controller or how much autonomous action the robots are allowed. MSP did not respond to Gizmodo’s emails on Monday morning.

The newly obtained documents, first reported by Ally Jarmanning at WBUR in Boston, include emails and contracts that shed some light on how police departments of the future may use robots to engage suspects without putting human police in harm’s way. In one document written by Lt. Robert G. Schumaker robots are described as an “invaluable component of tactical operations” that are vital to support the state’s “Homeland Security Strategy.” […] The question that remains is whether the American public will simply accept robocops as our reality now. Unfortunately, it seems like we may not have any choice in the matter — especially when the only way that we can learn about this new robot-police partnership is through records requests by the ACLU. And even then, we’re still largely in the dark about how these things will be used.

Police Can Keep Ring Camera Video Forever, and Share With Whomever They’d Like

Police officers who download videos captured by homeowners’ Ring doorbell cameras can keep them forever and share them with whomever they’d like without providing evidence of a crime, the Amazon-owned firm told a lawmaker this month… Police in communities that partner with Ring can use its software to request up to 12 hours of video from anyone within half a square mile of a suspected crime scene, covering a 45-day time span, wrote Brian Huseman, Amazon’s vice president of public policy. Police are required to include a case number for the crime they are investigating, but not any other details or evidence related to the crime or their request.
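
The scope of such a request is easy to underestimate, so here is a hedged sketch of the geometry and time window described above. Reading “half a square mile” as a circular area is my assumption (the letter doesn’t specify the shape), and all camera data is invented.

```python
# Hedged sketch of a Ring-style geofenced footage request: which cameras
# fall inside a half-square-mile circle around a scene, within a 45-day
# lookback. The circular reading and the data are illustrative assumptions.
from math import radians, sin, cos, asin, sqrt, pi

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat/2)**2 + cos(radians(lat1))*cos(radians(lat2))*sin(dlon/2)**2
    return 2 * 6371 * asin(sqrt(a))

# A circle covering 0.5 sq mi has radius sqrt(0.5/pi) ~= 0.40 mi ~= 0.64 km.
RADIUS_KM = sqrt(0.5 / pi) * 1.60934

cameras = {  # camera_id -> (lat, lon, days_since_footage)
    "cam1": (40.7130, -74.0060, 3),
    "cam2": (40.7300, -74.0060, 10),  # too far from the scene
    "cam3": (40.7128, -74.0070, 60),  # outside the 45-day window
}

def eligible(cams, scene_lat, scene_lon, lookback_days=45):
    return [cid for cid, (la, lo, age) in cams.items()
            if haversine_km(scene_lat, scene_lon, la, lo) <= RADIUS_KM
            and age <= lookback_days]

print(eligible(cameras, 40.7128, -74.0060))  # ['cam1']
```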

Sen. Edward Markey, D-Mass., said in a statement that Ring’s policies showed that the company had failed to enact basic safeguards to protect Americans’ privacy. “Connected doorbells are well on their way to becoming a mainstay of American households, and the lack of privacy and civil rights protections for innocent residents is nothing short of chilling,” he said. “If you’re an adult walking your dog or a child playing on the sidewalk, you shouldn’t have to worry that Ring’s products are amassing footage of you and that law enforcement may hold that footage indefinitely or share that footage with any third parties.”

While Ring tells users not to film public roads or sidewalks, it isn’t enforcing that rule, according to the article; Amazon argues that compliance is ultimately the user’s responsibility.

And will the cameras start using facial recognition algorithms? Amazon answers that the feature is “contemplated but unreleased,” though it adds that “We do frequently innovate based on customer demand,” and points out that other competing security cameras already offer facial recognition.

NYPD Kept an Illegal Database of Juvenile Fingerprints For Years

For years, the New York Police Department illegally maintained a database containing the fingerprints of thousands of children charged as juvenile delinquents, in direct violation of state law mandating that police destroy these records after turning them over to the state’s Division of Criminal Justice Services. When lawyers representing some of those youths discovered the violation, the police department dragged its feet, at first denying but eventually admitting that it was retaining prints it was supposed to have destroyed. Since 2015, attorneys with the Legal Aid Society, which represents the majority of youths charged in New York City family courts, had been locked in a battle with the police department over retention of the fingerprint records of children under the age of 16. The NYPD did not answer questions from The Intercept about its handling of the records, but according to Legal Aid, the police department confirmed to the organization last week that the database had been destroyed. To date, the department has made no public admission of wrongdoing, nor has it notified the thousands of people it impacted, although it has changed its fingerprint retention practices following Legal Aid’s probing. “The NYPD can confirm that the department destroys juvenile delinquent fingerprints after the prints have been transmitted to DCJS,” a police spokesperson wrote in a statement to The Intercept.

Still, the way the department handled the process (resisting transparency and stalling even after being threatened with legal action) raises concerns about how police handle a growing number of databases of personal information, including DNA and data obtained through facial recognition technology. As The Intercept has reported extensively, the NYPD also maintains a secretive and controversial “gang database,” which labels thousands of unsuspecting New Yorkers, almost all black or Latino youth, as “gang members” based on a set of broad and arbitrary criteria. The fact that police were able to violate the law around juvenile fingerprints for years without consequence underscores the need for greater transparency and accountability, which critics say can only come from independent oversight of the department.

It’s unclear how long the NYPD was illegally retaining these fingerprints, but the report says the state has been using the Automated Fingerprint Identification System since 1989, “and laws protecting juvenile delinquent records have been in place since at least 1977.” Legal Aid lawyers estimate that tens of thousands of juveniles could have had their fingerprints illegally retained by police.

Uber Stopped Its Own Investigators From Reporting Crimes To the Police

The special investigations team inside Uber, which fields complaints from riders and drivers, is not allowed to escalate those issues to law enforcement or file official police reports “even when they get confessions of felonies,” according to The Washington Post. They are also not allowed to advise victims or potential victims of crimes to seek legal counsel, according to the report, which was based on interviews with “more than 20 current and former investigators” who worked at Uber’s investigations unit in Arizona.

The investigators are also allegedly instructed “first to protect Uber” and make sure it is “not held liable” for any crimes that are committed by people using the company’s ride-hailing platform. In that vein, the investigators told the paper that even the language they use when communicating with alleged victims is carefully worded to avoid the appearance that Uber is taking a side. The investigators also said they’re not supposed to specifically ask alleged perpetrators about claims against them.

Amazon’s ‘Ring’ Doorbells Creating A Massive Police Surveillance Network

“Police departments are piggybacking on Ring’s network to build out their surveillance networks…” reports CNET, adding that Ring “helps police avoid roadblocks for surveillance technology, whether a lack of funding or the public’s concerns about privacy.”

While residential neighborhoods aren’t usually lined with security cameras, the smart doorbell’s popularity has essentially created private surveillance networks powered by Amazon and promoted by police departments. Police departments across the country, from major cities like Houston to towns with fewer than 30,000 people, have offered free or discounted Ring doorbells to citizens, sometimes using taxpayer funds to pay for Amazon’s products.

While Ring owners are supposed to have a choice on providing police footage, in some giveaways, police require recipients to turn over footage when requested. Ring said Tuesday that it would start cracking down on those strings attached…

While more surveillance footage in neighborhoods could help police investigate crimes, the sheer number of cameras run by Amazon’s Ring business raises questions about privacy involving both law enforcement and tech giants… More than 50 local police departments across the US have partnered with Ring over the last two years, lauding how the Amazon-owned product allows them to access security footage in areas that typically don’t have cameras — on suburban doorsteps. But privacy advocates argue this partnership gives law enforcement an unprecedented amount of surveillance. “What we have here is a perfect marriage between law enforcement and one of the world’s biggest companies creating conditions for a society that few people would want to be a part of,” said Mohammad Tajsar, staff attorney at the ACLU of Southern California…

Despite its benefits, the relationship between police departments and Ring raises concerns about surveillance and privacy, as Amazon is working with law enforcement to blanket communities with cameras…. “Essentially, we’re creating a culture where everybody is the nosy neighbor looking out the window with their binoculars,” said Dave Maass, a senior investigative researcher at the Electronic Frontier Foundation. “It is creating this giant pool of data that allows the government to analyze our every move, whether or not a crime is being committed.” On a heat map of Bloomfield, there are hardly any spots in the New Jersey township out of sight of a Ring camera.

Tajsar says in some scenarios “they’re basically commandeering people’s homes as surveillance outposts for law enforcement,” and the article notes that when police departments partner with Ring, “they have access to a law enforcement dashboard, where they can geofence areas and request footage filmed at specific times.”

While law enforcement “can only get footage from the app if residents choose to send it,” if the residents refuse, police can still try to obtain the footage with a subpoena to Amazon’s Ring.

Police Using Google Images + Facial Recognition

“The New York Police Department used a photo of Woody Harrelson in its facial recognition program in an attempt to identify a beer thief who looked like the actor,” reports the Associated Press:

Georgetown University’s Center on Privacy and Technology highlighted the April 2017 episode in “Garbage In, Garbage Out,” a report on what it says are flawed practices in law enforcement’s use of facial recognition. The report says security footage of the thief was too pixelated and produced no matches while high-quality images of Harrelson, a three-time Oscar nominee, returned several possible matches and led to one arrest.

The NYPD also used a photo of a New York Knicks player to search its database for a man wanted for a Brooklyn assault, the report said.

“The stakes are too high in criminal investigations to rely on unreliable, or wrong, inputs,” Georgetown researcher Clare Garvie wrote…. The Georgetown report says facial recognition has helped the NYPD crack about 2,900 cases in more than five years of using the technology.
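
The “garbage in, garbage out” failure mode follows directly from how these systems match: they score resemblance to the probe image, nothing more. Below is a hedged sketch of the nearest-neighbor step with toy vectors standing in for learned face embeddings; the threshold and dimensions are assumptions, not any vendor’s parameters.

```python
# Minimal sketch of the matching step in a face-recognition system:
# compare a probe embedding against a gallery by cosine similarity and
# return candidates above a threshold. Toy vectors stand in for learned
# embeddings; the threshold is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)

def unit(v):
    return v / np.linalg.norm(v)

gallery = {f"person_{i}": unit(rng.normal(size=128)) for i in range(1000)}
gallery["actor"] = unit(rng.normal(size=128))

def candidates(probe, gallery, threshold=0.6):
    probe = unit(probe)
    sims = {name: float(probe @ vec) for name, vec in gallery.items()}
    return sorted(((n, s) for n, s in sims.items() if s >= threshold),
                  key=lambda kv: -kv[1])

# A clean photo of the actor confidently matches the actor's entry...
clean_probe = gallery["actor"] + 0.3 * rng.normal(size=128) / np.sqrt(128)
print(candidates(clean_probe, gallery))  # [('actor', ~0.95)]

# ...but any candidates it returns resemble the *actor*, not the suspect
# whose pixelated image the probe replaced. The system cannot know the
# probe was substituted; it only ranks similarity to whatever it is fed.
```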

And in Florida, Vice reports, law enforcement agencies “run roughly 8,000 of these searches per month.”

The Feds Are Dropping Child Porn Cases Instead of Revealing Their Surveillance Systems

The Department of Justice has been dismissing child pornography cases in order to not reveal information about the software programs used as the basis for the charges. An array of cases suggests serious problems with the tech tools used by federal authorities. But the private entities who developed these tools won’t submit them for independent inspection, and they hand over hardly any information about how the tools work, their error rates, or other critical details. As a result, potentially innocent people are being smeared as pedophiles and prosecuted as child porn collectors, while potentially guilty people are going free so these companies can protect “trade secrets.” The situation suggests some of the many problems that can arise around public-private partnerships in catching criminals and the secretive digital surveillance software that such partnerships entail (software that’s being employed for far more than catching child predators).

With the child pornography cases, “the defendants are hardly the most sympathetic,” notes Tim Cushing at Techdirt. Yet that’s all the more reason why the government’s antics here are disturbing. Either the feds initially brought bad cases against people they just didn’t think would fight back, or they’re willing to let bad behavior go rather than face some public scrutiny. An extensive investigation by ProPublica “found more than a dozen cases since 2011 that were dismissed either because of challenges to the software’s findings, or the refusal by the government or the maker to share the computer programs with defense attorneys, or both,” writes Jack Gillum. Many more cases raised issues with the software as a defense. “Defense attorneys have long complained that the government’s secrecy claims may hamstring suspects seeking to prove that the software wrongly identified them,” notes Gillum. “But the growing success of their counterattack is also raising concerns that, by questioning the software used by investigators, some who trade in child pornography can avoid punishment.”

Microsoft Turned Down Facial-Recognition Sales over “Human Rights Concerns”

Microsoft recently rejected a California law enforcement agency’s request to install facial recognition technology in officers’ cars and body cameras due to human rights concerns, company President Brad Smith said on Tuesday. Microsoft concluded it would lead to innocent women and minorities being disproportionately held for questioning because the artificial intelligence has been trained on mostly white and male pictures. AI has more cases of mistaken identity with women and minorities, multiple research projects have found.

Smith explained the decisions as part of a commitment to human rights that he said was increasingly critical as rapid technological advances empower governments to conduct blanket surveillance, deploy autonomous weapons and take other steps that might prove impossible to reverse. Smith also said at a Stanford University conference that Microsoft had declined a deal to install facial recognition on cameras blanketing the capital city of an unnamed country that the nonprofit Freedom House had deemed not free. Smith said it would have suppressed freedom of assembly there.

On the other hand, Microsoft did agree to provide the technology to an American prison, after the company concluded that the environment would be limited and that it would improve safety inside the unnamed institution.

FamilyTreeDNA Deputizes Itself, Starts Pitching DNA Matching Services To Law Enforcement

One DNA-matching company has decided it’s going to corner an under-served market: US law enforcement. FamilyTreeDNA — last seen here opening up its database to the FBI without informing its users first — is actively pitching its services to law enforcement.

FamilyTreeDNA sounds like it’s finally going to seek consent from its customers, but only after having abused their trust once and under the assumption they’re all going to play ball. While some DNA companies like 23andMe are insisting on at least a subpoena before handing over access to DNA database search results, other companies are staying quiet about law enforcement access or specifically targeting law enforcement agencies with ads promising to help them work through their cold case files.

Consent is great, but it’s never going to be complete consent, no matter how FamilyTreeDNA shapes the argument. As Elizabeth Joh points out at Slate, there’s a whole lot of people involved who will never be asked for their consent once a customer agrees to allow DNA-matching sites to hand over their samples to law enforcement.

[W]hen you volunteer your DNA sample, you’re volunteering your genetic family tree, without having asked your parents, siblings, cousins, and distant cousins if they agree. That upends the usual way we think about providing information to law enforcement. You can’t give the police lawful consent to search your third cousin’s house, even if your third cousin (who you may never have met) is suspected of having been involved in a serious crime. Why are we allowing a distant relative to grant police permission to your DNA?

There’s no informed consent happening here. Customers are being treated as data points law enforcement can peruse at its leisure. A customer who agrees to be a good citizen (by clicking OK on a submission box on a private company’s website) may learn later their sample was used to lock up a close relative. Some people will be fine with this outcome. Others may regret being the critical piece of evidence used to incarcerate one of their relatives.
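
A rough way to see how much of a family tree one sample exposes is the standard halving rule for autosomal DNA: expected sharing falls by half with each meiosis separating two relatives. The sketch below is back-of-envelope population genetics, not anything from FamilyTreeDNA’s methods, and real sharing varies around these averages.

```python
# Back-of-envelope arithmetic behind Joh's point: one customer's sample
# exposes an entire genetic family tree. Standard approximation only;
# actual shared DNA varies around these expected values.

def expected_shared(meioses, ancestors=2):
    """Expected fraction of autosomal DNA shared, summed over the number
    of most-recent common ancestors on the connecting path."""
    return ancestors * 0.5 ** meioses

relatives = {
    "parent/child":  expected_shared(1, ancestors=1),  # 50%
    "full sibling":  expected_shared(2),               # ~50%
    "first cousin":  expected_shared(4),               # ~12.5%
    "second cousin": expected_shared(6),               # ~3.1%
    "third cousin":  expected_shared(8),               # ~0.8%
}
for relation, fraction in relatives.items():
    print(f"{relation:>13}: {fraction:.1%}")

# Even at ~0.8% sharing, third cousins are routinely detectable by these
# services, and a typical customer has third cousins numbering in the
# hundreds -- none of whom ever clicked "OK" on a consent box.
```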

Whatever the case is, very few companies are being upfront about the effects of opening up database access to law enforcement. FamilyTreeDNA is using a crime victim’s parent and the founder’s Team Blue sympathies to hustle its customers towards compliance. Users who don’t like this turn of events will likely find it far more difficult to remove their DNA from FamilyTreeDNA’s database than to simply hold their nose and become an unwilling part of this partnership.

Facebook Should Notify Users Who Interact With Fake Police ‘Sock Puppet’ Accounts

Despite Facebook’s repeated warnings that law enforcement is required to use “authentic identities” on the social media platform, cops continue to create fake and impersonator accounts to secretly spy on users. By pretending to be someone else, cops are able to sneak past the privacy walls users put up and bypass legal requirements that might require a warrant to obtain that same information.

EFF is now calling on Facebook to escalate the matter with law enforcement in the United States. In addition to suspending the fake accounts, EFF argues, Facebook should take the following actions to address the proliferation of fake/impersonator accounts operated by law enforcement:

1. As part of its regular transparency reports, publish data on the number of fake/impersonator law enforcement accounts identified, what agencies they belonged to, and what action was taken.
2. When a fake/impersonator account is identified, alert the users and groups that interacted with the account, whether directly or indirectly.

The article also suggests updating Facebook’s Terms of Service to explicitly prohibit fake/impersonator profiles by law enforcement groups, and updating Facebook pages of law enforcement groups to inform visitors when those groups have a written policy allowing fake/impersonator law enforcement accounts. “These four changes are relatively light lifts that would enhance transparency and establish real consequences for agencies that deliberately violate the rules…”

“Facebook’s practice of taking down these individual accounts when they learn about them from the press (or from EFF) is insufficient to deter what we believe is a much larger iceberg beneath the surface.”
