Resources

Police Found Ways to Use Facial Recognition Tech After Their Cities Banned It

As cities and states push to restrict the use of facial recognition technologies, some police departments have quietly found a way to keep using the controversial tools: asking for help from other law enforcement agencies that still have access. Officers in Austin and San Francisco — two of the largest cities where police are banned from using the technology — have repeatedly asked police in neighboring towns to run photos of criminal suspects through their facial recognition programs, according to a Washington Post review of police documents…

Austin police officers received the results of at least 13 face searches from a neighboring police department since the city’s 2020 ban — and appeared to get hits on some of them, according to documents obtained by The Post through public records requests and sources who shared them on the condition of anonymity. “That’s him! Thank you very much,” one Austin police officer wrote in response to an array of photos sent to him by an officer in Leander, Tex., who ran a facial recognition search, documents show. The man displayed in the pictures, John Curry Jr., was later charged with aggravated assault for allegedly charging toward someone with a knife, and is currently in jail awaiting trial. Curry’s attorney declined to comment.

“Police officers’ efforts to skirt these bans have not been previously reported and highlight the challenge of reining in police use of facial recognition,” the article concludes.

It also points out that the technology “has played a role in the wrongful arrests of at least seven innocent Americans,” according to the lawsuits they filed after charges against them were dismissed.


Taser Company Axon Is Selling AI That Turns Body Cam Audio Into Police Reports

Axon on Tuesday announced a new tool called Draft One that uses artificial intelligence built on OpenAI’s GPT-4 Turbo model to transcribe audio from body cameras and automatically turn it into a police report. Axon CEO Rick Smith told Forbes that police officers will then be able to review the document to ensure accuracy. From the report:
Axon claims one early tester of the tool, the Fort Collins, Colorado, Police Department, has seen an 82% decrease in time spent writing reports. “If an officer spends half their day reporting, and we can cut that in half, we have an opportunity to potentially free up 25% of an officer’s time to be back out policing,” Smith said. These reports, though, are often used as evidence in criminal trials, and critics are concerned that relying on AI could put people at risk by depending on language models that are known to “hallucinate,” or make things up, as well as display racial bias, either blatantly or unconsciously.

“It’s kind of a nightmare,” said Dave Maass, surveillance technologies investigations director at the Electronic Frontier Foundation. “Police, who aren’t specialists in AI, and aren’t going to be specialists in recognizing the problems with AI, are going to use these systems to generate language that could affect millions of people in their involvement with the criminal justice system. What could go wrong?” Smith acknowledged there are dangers. “When people talk about bias in AI, it really is: Is this going to exacerbate racism by taking training data that’s going to treat people differently?” he told Forbes. “That was the main risk.”

Smith said Axon is recommending police don’t use the AI to write reports for incidents as serious as a police shooting, where vital information could be missed. “An officer-involved shooting is likely a scenario where it would not be used, and I’d probably advise people against it, just because there’s so much complexity, the stakes are so high.” He said some early customers are only using Draft One for misdemeanors, though others are writing up “more significant incidents,” including use-of-force cases. Axon, however, won’t have control over how individual police departments use the tools.
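The article doesn’t detail Draft One’s internals beyond the GPT-4 Turbo base model, but the pipeline it describes (transcribe body-camera audio, then draft a narrative for officer review) can be sketched against OpenAI’s public API. This is a minimal illustration, not Axon’s implementation; the model names, prompt, and review step are all assumptions.

```python
# Hypothetical sketch of a Draft One-style pipeline. Axon's actual
# prompts, models, and guardrails are not public; everything here is
# an assumption for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_report(audio_path: str) -> str:
    # Step 1: transcribe the body-camera audio.
    with open(audio_path, "rb") as f:
        transcript = client.audio.transcriptions.create(model="whisper-1", file=f)

    # Step 2: turn the transcript into a first-draft narrative report.
    completion = client.chat.completions.create(
        model="gpt-4-turbo",
        temperature=0,  # discourage embellishment; hallucination risk remains
        messages=[
            {"role": "system",
             "content": "Draft a factual incident-report narrative from this "
                        "body-camera transcript. Mark anything uncertain as "
                        "[UNVERIFIED] so the reviewing officer can check it."},
            {"role": "user", "content": transcript.text},
        ],
    )
    # Step 3 (human in the loop): the officer must still review and edit
    # the draft before filing -- the review step Smith describes above.
    return completion.choices[0].message.content
```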


Can I phone a friend? How cops circumvent face recognition bans

Police in multiple major US cities have figured out a trick to circumvent their cities’ bans on facial recognition technology. Just ask a friend in a city without any such restrictions to do it for you.

It’s not immediately clear how widespread such side-stepping of facial recognition restrictions in the US may be. According to the Washington Post’s review of police documents in Austin, Texas, and San Francisco, however, it’s definitely something that’s gone on in at least two major US metropolitan areas.

WaPo reported over the weekend that SFPD and APD have both sought help from neighboring cop shops on occasion, though with different levels of success. San Francisco cops have reportedly sought facial recognition searches from other agencies on five occasions since 2019, but never got a match back; Austin police, on the other hand, have sent at least 13 searches since 2020.

“Some” of the searches from APD returned hits, and multiple suspects in Austin have been arrested and charged as a result of the searches, WaPo said.


The FBI Is Using Push Notifications To Track Criminals

The Post did a little digging into court records and found evidence of at least 130 search warrants filed by the feds for push notification data in cases spanning 14 states. In those cases, FBI officials asked tech companies like Google, Apple, and Facebook to fork over data related to a suspect’s mobile notifications, then used the data to implicate the suspect in criminal behavior linked to a particular app, even though many of those apps were supposedly anonymous communication platforms, like Wickr.

How exactly is this possible? Push notifications, which are provided by a mobile operating system provider, include embedded metadata that can be examined to understand the use of the mobile apps on a particular phone. Apps come laced with a quiet identifier, a “push token,” which is stored on the corporate servers of a company like Apple or another phone manufacturer after a user signs up to use a particular app. Those tokens can later be used to identify the person using the app, based on the information associated with the device on which the app was downloaded. Even turning off push notifications on your device doesn’t necessarily disable this feature, experts contend. […]
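As a rough illustration of why those tokens are identifying, here is a toy model of the registration table an OS vendor might keep: the opaque token an app uses to address a device maps back, on the vendor’s side, to a device and an account. The schema, field names, and data below are assumptions for illustration; no vendor publishes this mapping.

```python
# Toy model of a push-token registry. Field names and records are
# illustrative assumptions, not any vendor's real schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TokenRecord:
    push_token: str  # opaque identifier an app uses to address the device
    app_id: str      # the app the token was issued for, e.g. a messenger
    device_id: str   # device registration identifier held by the OS vendor
    account: str     # account associated with the device

# Populated (in reality) whenever an app registers for push notifications.
registry = [
    TokenRecord("f0a1...", "com.example.messenger", "device-9f", "user@example.com"),
]

def resolve(token: str) -> Optional[TokenRecord]:
    """What a warrant for push notification data effectively asks the
    vendor to do: map a token recovered elsewhere back to a person."""
    return next((r for r in registry if r.push_token == token), None)

print(resolve("f0a1..."))  # token -> device -> account
```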

If finding new ways to catch pedophiles and terrorists doesn’t seem like the worst thing in the world, the Post article highlights the voices of critics who fear that this kind of mobile data could be used to track people who have not committed serious crimes — like political activists or women seeking abortions in states where the procedure has been restricted.


UK Government Seeks Expanded Use of AI-based Facial Recognition By Police

In a document released on Wednesday, the government outlined its ambitions to potentially deploy new biometric systems nationally over the next 12 to 18 months. The move comes after privacy campaigners and independent academics criticised the technology for being inaccurate and biased, particularly against darker-skinned people.

MPs have previously called for a moratorium on its use on the general population until clear laws are established by parliament. The government is calling for submissions from companies for technologies that “can resolve identity using facial features and landmarks,” including for live facial recognition which involves screening the general public for specific individuals on police watch lists.

In particular, the Home Office is highlighting its interest in novel artificial intelligence technologies that could process facial data efficiently to identify individuals, and software that could be integrated with existing technologies deployed by the department and with CCTV cameras. Facial recognition software has been used by South Wales Police and London’s Metropolitan Police over the past five years across multiple trials in public spaces including shopping centres, during events such as the Notting Hill Carnival and, more recently, during the coronation.


Cellebrite Asks Cops To Keep Its Phone Hacking Tech ‘Hush Hush’

For years, cops and other government authorities all over the world have been using phone hacking technology provided by Cellebrite to unlock phones and obtain the data within. And the company has been keen on keeping the use of its technology “hush hush.” As part of the deal with government agencies, Cellebrite asks users to keep its tech — and the fact that they used it — secret, TechCrunch has learned. This request concerns legal experts who argue that powerful technology like the one Cellebrite builds and sells, and how it gets used by law enforcement agencies, ought to be public and scrutinized.

In a leaked training video for law enforcement customers that was obtained by TechCrunch, a senior Cellebrite employee tells customers that “ultimately, you’ve extracted the data, it’s the data that solves the crime, how you got in, let’s try to keep that as hush hush as possible.” “We don’t really want any techniques to leak in court through disclosure practices, or you know, ultimately in testimony, when you are sitting in the stand, producing all this evidence and discussing how you got into the phone,” the employee, who we are not naming, says in the video.


Supreme Court Declines To Hear Challenge To Warrantless Pole Camera Surveillance

The U.S. Supreme Court [Monday] declined to hear Moore v. United States, leaving in place a patchwork of lower court decisions on an important and recurring question about privacy rights in the face of advancing surveillance technology. In this case, police secretly attached a small camera to a utility pole, using it to surveil a Massachusetts home 24/7 for eight months — all without a warrant. Law enforcement could watch the camera’s feed in real time, and remotely pan, tilt, and zoom close enough to read license plates and see faces. They could also review a searchable, digitized record of this footage at their convenience. The camera captured every coming and going of the home’s residents and their guests over eight months. As a result, the government targeted the home of a community pillar — a lawyer, respected judicial clerk, devoted church member, and a grandmother raising her grandkids — to cherry-pick images from months of unceasing surveillance in an effort to support unwarranted criminal charges against an innocent person.

Federal courts of appeals and state supreme courts have divided on the question of whether such sweeping surveillance is a Fourth Amendment search requiring a warrant. The highest courts of Massachusetts, Colorado, and South Dakota have held that long-term pole camera surveillance of someone’s home requires a warrant. In Moore v. United States, the members of the full en banc U.S. Court of Appeals for the First Circuit split evenly on the question, with three judges explaining that a warrant is required, and three judges expressing the belief that the Fourth Amendment imposes no limit on this invasive surveillance. This issue will continue to arise in the lower courts; the ACLU filed an amicus brief on the question in the U.S. Court of Appeals for the Tenth Circuit earlier this month.


NYPD Deploying Encryption To Block Public From Radio Scanner Broadcasts

While law enforcement has spent years fighting to make encryption less accessible for everyday people, police think they need a little more privacy. Critics worry a turn towards encryption by law enforcement could reduce transparency, hamstring the news media, and potentially jeopardize the safety of protesters looking to stay a step ahead.

According to amNY, the NYPD’s new plan would allow law enforcement officers discretion on whether or not to publicly disclose newsworthy incidents. That means the NYPD essentially would get to dictate the truth unchallenged in a number of potentially sensitive local stories. The report suggests police are floating the idea of letting members of the news media monitor certain radio transmissions through an NYPD-controlled mobile app. There’s a catch, though. According to the report, the app would send radio information with a delay. Users may also have to pay a subscription fee to use the service, the paper said.

The NYPD confirmed it’s planning a “systems upgrade” in the coming years in an email to Gizmodo. “The NYPD is undergoing a systems upgrade that is underway and that will be complete after 2024,” a spokesperson for the Deputy Commissioner of Public Information said. “This infrastructure upgrade allows the NYPD to transmit in either an encrypted or non-encrypted format,” the NYPD said. “Some parts of the city have had the necessary equipment installed and the Department will begin testing the technology in these areas later this year. We are currently evaluating encryption best practices and will communicate new policies and procedures as we roll out this upgraded technology.” The spokesperson claimed the department intends to listen to and consider the needs of the news media during the transition process.


Scope Creep: Woman Whose Rape Kit DNA Led To Her Arrest Sues San Francisco

A rape victim whose DNA from her sexual assault case was used by San Francisco police to arrest her in an unrelated property crime filed a lawsuit against the city on Monday. During a search of a San Francisco Police Department crime lab database, the woman’s DNA was tied to a burglary in late 2021. Her DNA had been collected and stored in the system as part of a 2016 domestic violence and sexual assault case, then-District Attorney Chesa Boudin said in February in a shocking revelation that raised privacy concerns. “This is government overreach of the highest order, using the most unique and personal thing we have — our genetic code — without our knowledge to try and connect us to crime,” the woman’s attorney, Adante Pointer, said in a statement.

The revelation prompted a national outcry from advocates, law enforcement, legal experts and lawmakers. Advocates said the practice could affect victims’ willingness to come forward to law enforcement authorities. Federal law already prohibits the inclusion of victims’ DNA in the national Combined DNA Index System. There is no corresponding law in California to prohibit local law enforcement databases from retaining victims’ profiles and searching them years later for entirely different purposes.

Boudin said the report was found among hundreds of pages of evidence against a woman who had been recently charged with a felony property crime. After learning the source of the DNA evidence, Boudin dropped the felony property crime charges against the woman. The police department’s crime lab stopped the practice shortly after receiving a complaint from the district attorney’s office and formally changed its operating procedure to prevent the misuse of DNA collected from sexual assault victims, Police Chief Bill Scott said. Scott said at a police commission meeting in March that he had discovered 17 crime victim profiles, 11 of them from rape kits, that were matched as potential suspects using a crime victims database during unrelated investigations. Scott said he believes the only person arrested was the woman who filed the lawsuit Monday.


Police Across US Bypass Warrants With Mass Location-Tracking Tool


Fog Reveal Tool Gives Law Enforcement Cheap Access to US Location-Tracking Data From Smartphones

The data broker Fog Data Science has been selling access to what it claims are billions of location data points from over 250 million smartphones to local, state, and federal law enforcement agencies around the US. The data comes from tech companies and cellphone towers and is collected into the Fog Reveal tool from thousands of iOS and Android apps. Crucially, access to the service is cheap, often costing local police departments less than $10,000 per year, and investigations by the Associated Press and Electronic Frontier Foundation found that law enforcement sometimes pulls location data without a warrant. The EFF conducted its investigation through more than 100 public records requests filed over several months. “Troublingly, those records show that Fog and some law enforcement did not believe Fog’s surveillance implicated people’s Fourth Amendment rights and required authorities to get a warrant,” the EFF wrote.


Google’s Nest Will Provide Data to Police Without a Warrant

Google “reserves the right” to make emergency disclosures to law enforcement even when there is no legal requirement to do so. “A provider like Google may disclose information to law enforcement without a subpoena or a warrant ‘if the provider, in good faith, believes that an emergency involving danger of death or serious physical injury to any person requires disclosure without delay of communications relating to the emergency,'” a Nest spokesperson tells CNET.

While Amazon and Google have both said they would hand over a user’s data to law enforcement without a warrant, Arlo, Apple, Wyze, and Anker, owner of Eufy, all confirmed to CNET that they won’t give authorities access to a user’s smart home camera’s footage unless they’re shown a warrant or court order. These companies would be legally bound to provide data to the authorities if they were shown a legal document. But, unlike Google and Amazon, they will not otherwise share camera footage with law enforcement, even in response to an emergency request for data. Apple’s default setting for video cameras connected via HomeKit is end-to-end encryption, which means the company is unable to share user video at all.


Amazon’s Ring and Google Can Share Footage With Police Without Warrants (or Your Consent)

U.S. law lets companies like Google and Amazon’s Ring doorbell/security camera system “share user footage with police during emergencies without consent and without warrants.” Ring “came under renewed criticism from privacy activists this month after disclosing it gave video footage to police in more than 10 cases without users’ consent thus far in 2022 in what it described as ‘emergency situations’.”

“That includes instances where the police didn’t have a warrant.”

“So far this year, Ring has provided videos to law enforcement in response to an emergency request only 11 times,” Amazon vice president of public policy Brian Huseman wrote. “In each instance, Ring made a good-faith determination that there was an imminent danger of death or serious physical injury to a person requiring disclosure of information without delay….” Of the 11 emergency requests Ring has complied with so far in 2022, the company said they include cases involving kidnapping, self-harm and attempted murder, but it won’t provide further details, including information about which agencies or countries the requests came from.

We also asked Ring if it notified customers after the company had granted law enforcement access to their footage without their consent.

“We have nothing to share,” the spokesperson responded.

It’s been barely a year since Ring made the decision to stop allowing police to email users to request footage. Facing criticism that requests like those were subverting the warrant process and contributing to police overreach, Ring directed police instead to post public requests for assistance in the Neighbors app, where community members are free to view and comment on them (or opt out of seeing them altogether)… That post made no mention of a workaround for the police during emergency circumstances.

When CNET asked why that workaround wasn’t mentioned, Amazon’s response was that law enforcement requests, “including emergency requests, are directed to Ring (the company), the same way a warrant or subpoena is directed to Ring (and not the customer), which is why we treat them entirely separately.”

CNET notes there’s also no mention of warrantless emergency requests without independent oversight in Ring’s own transparency reports about law enforcement requests from past years.

CNET adds that it’s not just Amazon. “Google, Ring and other companies that process user video footage have a legal basis for warrantless disclosure without consent during emergency situations, and it’s up to them to decide whether or not to do so when the police come calling….” (Although Google told CNET that while it reserves the right to comply with warrantless requests for user data during emergencies, to date it has never actually done so.) The article also points out that “Others, most notably Apple, use end-to-end encryption as the default setting for user video, which blocks the company from sharing that video at all… Ring enabled end-to-end encryption as an option for users in 2021, but it isn’t the default setting, and Ring notes that turning it on will break certain features, including the ability to view your video feed on a third-party device like a smart TV, or even Amazon devices like the Echo Show smart display.”

The bottom line?

[C]onsumers have a choice to make about what they’re comfortable with… That said, you can’t make informed choices when you aren’t well-informed to begin with, and the brands in question don’t always make it easy to understand their policies and practices. Ring published a blog post last year walking through its new, public-facing format for police footage requests, but there was no mention of emergency exceptions granted without user consent or independent oversight, the details of which only came to light after a Senate probe. Google describes its emergency sharing policies within its Terms of Service, but the language doesn’t make it clear that those cases include instances where footage may be shared without a warrant, subpoena or court order compelling Google to do so.


San Francisco Police Are Using Driverless Cars as Mobile Surveillance Cameras

For the last five years, driverless car companies have been testing their vehicles on public roads. These vehicles constantly roam neighborhoods laden with a variety of sensors, including video cameras, capturing everything going on around them so that they can operate safely and so engineers can analyze the instances where they don’t.

While the companies themselves, such as Alphabet’s Waymo and General Motors’ Cruise, tout the potential transportation benefits their services may one day offer, they don’t publicize another use case, one that is far less hypothetical: Mobile surveillance cameras for police departments.

The use of AVs as an investigative tool echoes how Ring, a doorbell and home security company owned by Amazon, became a key partner with law enforcement around the country by turning individual consumer products into a network of cameras with comprehensive coverage of American neighborhoods easily accessible to police. Police departments around the country use automatic license plate readers (ALPRs) to track the movements of vehicles. The EFF has sued the SFPD for accessing business improvement district live cameras to spy on protestors.


Virginia Police Routinely Use Secret GPS Pings To Track People’s Cell Phones

The nonprofit online news site Virginia Mercury investigated its state’s police departments’ “real-time location warrants,” which are “addressed to telephone companies, ordering them to regularly ping a customer’s phone for its GPS location and share the results with police.” Public records requests submitted to a sampling of 18 police departments around the state found officers used the technique to conduct more than 7,000 days’ worth of surveillance in 2020. Court records show the tracking efforts spanned cases ranging from high-profile murders to minor larcenies…. Seven departments responded that they did not have any relevant billing records, indicating they don’t use the technique. Only one of the departments surveyed, Alexandria, indicated it had an internal policy governing how its officers use cellphone tracking, but a copy of the document provided by the city was entirely redacted….

Drug investigations accounted for more than 60 percent of the search warrants taken out in the two jurisdictions. Larcenies were the second most frequent category. Major crimes like murders, rapes and abductions made up a fraction of the tracking requests, accounting for just under 25 of the nearly 400 warrants filed in the jurisdictions that year.

America’s Supreme Court “ruled that warrantless cellphone tracking is unconstitutional back in 2012,” the article points out — but in practice those warrants aren’t hard to get. “Officers simply have to attest in an affidavit that they have probable cause that the tracking data is ‘relevant to a crime that is being committed or has been committed’…. There’s been limited public discussion or awareness of the kinds of tracking warrants the judiciary is approving.”

“I don’t think people know that their cell phones can be converted to tracking devices by police with no notice,” said Steve Benjamin, a criminal defense lawyer in Richmond who said he’s recently noticed an uptick in cases in which officers employed the technique. “And the reality of modern life is everyone has their phone on them during the day and on their nightstand at night. … It’s as if the police tagged them with a chip under their skin, and people have no idea how easily this is accomplished.”

The case for these phone-tracking warrants?

  • The executive director of the Virginia Association of Chiefs of Police tells the site that physical surveillance often requires too many resources — and that cellphone tracking is safer. “It may be considered an intrusive way of gathering data on someone, but it’s certainly less dangerous than physical tracking.”
  • A spokesperson for the Chesterfield County police department [responsible for 64% of the state’s tracking] argued that “We exist to preserve human life and protect the vulnerable, and we will use all lawful tools at our disposal to do so.” And they added that such “continued robust enforcement efforts” were a part of the reason that the county’s still-rising number of fatal drug overdoses had not risen more.

The site also obtained bills from four major US cellphone carriers, and reported how much they were charging police for providing their cellphone-tracking services (tallied in the sketch after this list):

  • “T-Mobile charged $30 per day, which comes to $900 per month of tracking.”
  • “AT&T charged a monthly service fee of $100 and an additional $25 per day the service is utilized, which comes to $850 per 30 days of tracking…”
  • “Verizon calls the service ‘periodic location updates,’ charging $5 per day on top of a monthly service fee of $100, which comes to $200 per 30 days of tracking.”
  • “Sprint offered the cheapest prices to report locations back to law enforcement, charging a flat fee of $100 per month.”
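A quick tally of those quoted rates over a 30-day window, as a back-of-the-envelope sketch using only the figures above. Note the quoted Verizon total only works out if the $5 daily charge accrued on days the service was actually used, roughly 20 of the 30; that reading is an assumption, not something the article states.

```python
# Reproducing the quoted 30-day tracking costs from the carriers' rates.
def thirty_day_cost(monthly_fee: int, daily_rate: int, days_billed: int = 30) -> int:
    """Monthly service fee plus a per-day charge for each day billed."""
    return monthly_fee + daily_rate * days_billed

print("T-Mobile:", thirty_day_cost(0, 30))      # 900 -- matches the quoted $900
print("AT&T:", thirty_day_cost(100, 25))        # 850 -- matches the quoted $850
print("Sprint:", thirty_day_cost(100, 0))       # 100 -- flat monthly fee
# Verizon's quoted $200 implies the $5/day accrued on only ~20 of 30 days:
print("Verizon:", thirty_day_cost(100, 5, 20))  # 200
```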

Activist Raided By London Police After Downloading Docs Found On Google Search

The raid by four Metropolitan Police constables took place after Southwark campaigner Robert Hutchinson was reportedly accused of illegally entering a password-protected area of a website. “I was searching in Google and found links to board meeting minutes,” he told The Register. “Board reports, none of which were marked confidential. So I have no question that it was in the public domain.” The Southwark News reported that Hutchinson was arrested at 8.20am on 10 June this year at home following allegations made by Leathermarket Community Benefit Society (CBS). The society is a property development firm that wants to build flats over a children’s caged ball court in the south London borough, something Hutchinson “vocally opposes,” according to the local paper.

“There’s a directory, which you need to enter a password and a username to get into. But documents from that area were being published on Google,” explained Hutchinson. “I didn’t see a page saying ‘this is the directors’ area’ or anything like that, the documents were just available. They were just linked directly.” Police said in a statement that Hutchinson was arrested on suspicion of breaking section 1 of Britain’s Computer Misuse Act 1990 “between the 17th and 24th February 2021 and had published documents from the website on social media.” They added: “He was taken into custody and later released under investigation. Following a review of all available evidence, it was determined no offences had been committed and no further action was taken.”

Hutchinson said his identification by Leathermarket and subsequent arrest raised questions in his mind, saying police confirmed to him that the company had handed over an access log containing IP addresses: “Now, how that ended up with me being in the frame, I don’t know. There’s part of this that doesn’t add up…” While the property business did not respond to The Register’s request for comment at the time of publication, in a statement given to the Southwark News it said: “When it came to the CBS’s attention that confidential information had been accessed and subsequently shared via Twitter, the CBS made a general report of the data breach to the police, who requested a full log of visitor access to the website before deciding whether or not to progress. The police carried out their own independent investigation into who accessed the documents and how, and have now concluded their investigation.” The prepared police statement did not explain whether investigators tested Leathermarket CBS’s version of events before arresting the campaigner.


Police Are Telling ShotSpotter To Alter Evidence From Gunshot-Detecting AI

On May 31 last year, 25-year-old Safarain Herring was shot in the head and dropped off at St. Bernard Hospital in Chicago by a man named Michael Williams. He died two days later. Chicago police eventually arrested the 64-year-old Williams and charged him with murder (Williams maintains that Herring was hit in a drive-by shooting). A key piece of evidence in the case is video surveillance footage showing Williams’ car stopped on the 6300 block of South Stony Island Avenue at 11:46 p.m. – the time and location where police say they know Herring was shot. How did they know that’s where the shooting happened? Police said ShotSpotter, a surveillance system that uses hidden microphone sensors to detect the sound and location of gunshots, generated an alert for that time and place. Except that’s not entirely true, according to recent court filings.

That night, 19 ShotSpotter sensors detected a percussive sound at 11:46 p.m. and determined the location to be 5700 South Lake Shore Drive – a mile away from the site where prosecutors say Williams committed the murder, according to a motion filed by Williams’ public defender. The company’s algorithms initially classified the sound as a firework. That weekend had seen widespread protests in Chicago in response to George Floyd’s murder, and some of those protesting lit fireworks. But after the 11:46 p.m. alert came in, a ShotSpotter analyst manually overrode the algorithms and “reclassified” the sound as a gunshot. Then, months later and after “post-processing,” another ShotSpotter analyst changed the alert’s coordinates to a location on South Stony Island Drive near where Williams’ car was seen on camera. “Through this human-involved method, the ShotSpotter output in this case was dramatically transformed from data that did not support criminal charges of any kind to data that now forms the centerpiece of the prosecution’s murder case against Mr. Williams,” the public defender wrote in the motion.

The document is what’s known as a Frye motion – a request for a judge to examine and rule on whether a particular forensic method is scientifically valid enough to be entered as evidence. Rather than defend ShotSpotter’s technology and its employees’ actions in a Frye hearing, the prosecutors withdrew all ShotSpotter evidence against Williams. The case isn’t an anomaly, and the pattern it represents could have huge ramifications for ShotSpotter in Chicago, where the technology generates an average of 21,000 alerts each year. The technology is also currently in use in more than 100 cities. Motherboard’s review of court documents from the Williams case and other trials in Chicago and New York State, including testimony from ShotSpotter’s favored expert witness, suggests that the company’s analysts frequently modify alerts at the request of police departments – some of which appear to be grasping for evidence that supports their narrative of events.


Maine Passes Facial Recognition Law

The new law prohibits government use of facial recognition except in specifically outlined situations, with the broadest exception being if police have probable cause that an unidentified person in an image committed a serious crime, or for proactive fraud prevention. Since Maine police will not have direct access to facial recognition, they will be able to ask the FBI and Maine Bureau of Motor Vehicles (BMV) to run these searches for them.

Crucially, the law plugs loopholes that police have used in the past to gain access to the technology, like informally asking other agencies or third parties to run backchannel searches for them. Logs of all facial recognition searches by the BMV must be created and are designated as public records. The only other state-wide facial recognition law was enacted by Washington in 2020, but many privacy advocates were dissatisfied with the specifics of the law. Maine’s new law also gives citizens the ability to sue the state if they’ve been unlawfully targeted by facial recognition, which was notably absent from Washington’s regulation. If facial recognition searches are performed illegally, they must be deleted and cannot be used as evidence.


A Government Watchdog May Have Missed Clearview AI Use By Five Federal Agencies

A government inquiry into federal agencies’ deployment of facial recognition may have overlooked some organizations’ use of popular biometric identification software Clearview AI, calling into question whether authorities can understand the extent to which the emerging technology has been used by taxpayer-funded entities. In a 92-page report published by the Government Accountability Office on Tuesday, five agencies — the US Capitol Police, the US Probation Office, the Pentagon Force Protection Agency, Transportation Security Administration, and the Criminal Investigation Division at the Internal Revenue Service — said they didn’t use Clearview AI between April 2018 and March 2020. This, however, contradicts internal Clearview data previously reviewed by BuzzFeed News.

In April, BuzzFeed News revealed that those five agencies were among more than 1,800 US taxpayer-funded entities that had employees who tried or used Clearview AI, based on internal company data. As part of that story, BuzzFeed News published a searchable table disclosing all the federal, state, and city government organizations whose employees are listed in the data as having used the facial recognition software as of February 2020. While the GAO was tasked with “review[ing] federal law enforcement use of facial recognition technology,” the discrepancies between the report, which was based on survey responses, and BuzzFeed News’ past reporting suggest that even the US government may not be equipped to track how its own agencies access surveillance tools like Clearview. The GAO report surveyed 42 federal agencies in total, 20 of which reported that they either owned their own facial recognition system or used one developed by a third party between April 2018 and March 2020. Ten federal agencies — including Immigration and Customs Enforcement and Customs and Border Protection — said they specifically used Clearview AI.


How Big Tech created a data ‘treasure trove’ for police

When U.S. law enforcement officials need to cast a wide net for information, they’re increasingly turning to the vast digital ponds of personal data created by Big Tech companies via the devices and online services that have hooked billions of people around the world.

Data compiled by four of the biggest tech companies shows that law enforcement requests for user information — phone calls, emails, texts, photos, shopping histories, driving routes and more — have more than tripled in the U.S. since 2015. Police are also increasingly savvy about covering their tracks so as not to alert suspects of their interest.

That’s the backdrop for recent revelations that the Trump-era U.S. Justice Department sought data from Apple, Microsoft and Google about members of Congress, their aides and news reporters in leak investigations — then pursued court orders that blocked those companies from informing their targets.

In just the first half of 2020 — the most recent data available — Apple, Google, Facebook and Microsoft together fielded more than 112,000 data requests from local, state and federal officials. The companies agreed to hand over some data in 85% of those cases. Facebook, including its Instagram service, accounted for the largest number of disclosures.

Consider Newport, a coastal city of 24,000 residents that attracts a flood of summer tourists. Fewer than 100 officers patrol the city — but they make multiple requests a week for online data from tech companies.

That’s because most crimes — from larceny and financial scams to a recent fatal house party stabbing at a vacation rental booked online — can be at least partly traced on the internet. Tech providers, especially social media platforms, offer a “treasure trove of information” that can help solve them, said Lt. Robert Salter, a supervising police detective in Newport.


How Law Enforcement Gets Around Your Smartphone’s Encryption

Lawmakers and law enforcement agencies around the world, including in the United States, have increasingly called for backdoors in the encryption schemes that protect your data, arguing that national security is at stake. But new research indicates governments already have methods and tools that, for better or worse, let them access locked smartphones thanks to weaknesses in the security schemes of Android and iOS.

Cryptographers at Johns Hopkins University used publicly available documentation from Apple and Google as well as their own analysis to assess the robustness of Android and iOS encryption. They also studied more than a decade’s worth of reports about which of these mobile security features law enforcement and criminals have previously bypassed, or can currently, using special hacking tools…

[O]nce you unlock your device the first time after reboot, lots of encryption keys start getting stored in quick access memory, even while the phone is locked. At this point an attacker could find and exploit certain types of security vulnerabilities in iOS to grab encryption keys that are accessible in memory and decrypt big chunks of data from the phone. Based on available reports about smartphone access tools, like those from the Israeli law enforcement contractor Cellebrite and US-based forensic access firm Grayshift, the researchers realized that this is how almost all smartphone access tools likely work right now. It’s true that you need a specific type of operating system vulnerability to grab the keys — and both Apple and Google patch as many of those flaws as possible — but if you can find it, the keys are available, too…
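That “after first unlock” window can be pictured with Apple’s published data protection classes: keys for files in the Complete class are evicted when the phone locks, while keys in the CompleteUntilFirstUserAuthentication class stay cached from first unlock until reboot. The sketch below is a toy state model under that assumption, not real forensic code; the file names and key material are made up.

```python
# Toy model of iOS-style data protection: which file keys survive a lock
# once the phone has been unlocked after boot (the "AFU" state).
keys_in_memory = {}  # file -> cached decryption key

protection_class = {
    "messages.db": "CompleteUntilFirstUserAuthentication",  # key kept after lock
    "health.db": "Complete",                                # key evicted on lock
}

def first_unlock():
    """Passcode entry derives and caches a key for every file."""
    for fname in protection_class:
        keys_in_memory[fname] = b"\x00" * 32  # placeholder key material

def lock_screen():
    """Locking purges only Complete-class keys; AFU keys stay resident."""
    for fname, cls in protection_class.items():
        if cls == "Complete":
            keys_in_memory.pop(fname, None)

first_unlock()
lock_screen()
# A memory-reading exploit on the locked (but once-unlocked) phone still
# finds the messages.db key cached -- the window the researchers describe.
print(sorted(keys_in_memory))  # ['messages.db']
```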

Forensic tools exploiting the right vulnerability can grab even more decryption keys, and ultimately access even more data, on an Android phone.
