Resources

Amazon Wants To Monitor You In Your Sleep, For Your Benefit

The Federal Communications Commission on Friday granted Amazon approval to use a radar sensor to sense motion and “enable contactless sleep tracing functionalities.” Amazon on June 22 asked the FCC, which regulates airwave uses, for permission to market a device that uses radar. The technology captures movement in three dimensions, enabling a user to control its features through simple gestures and movements, the company said in a filing. The capability, according to Amazon, could help people “with mobility, speech, or tactile impairments,” and it could monitor sleep with a high degree of precision.

“The use of Radar Sensors in sleep tracking could improve awareness and management of sleep hygiene, which in turn could produce significant health benefits for many Americans,” Amazon said in its filing. “Radar Sensors will allow consumers to recognize potential sleep issues.” Amazon didn’t immediately respond to a request for comment. The company didn’t fully describe the device in its filing, but did say it would not be a mobile device. The FCC earlier granted similar permission to Google for radar to enable touchless control of Pixel smartphones, the agency said in its letter approving Amazon’s request.

Your Credit Score Should Be Based On Your Web History, IMF Says

In a new blog post for the International Monetary Fund, four researchers presented their findings from a working paper that examines the current relationship between finance and tech as well as its potential future. Gazing into their crystal ball, the researchers see the possibility of using the data from your browsing, search, and purchase history to create a more accurate mechanism for determining the credit rating of an individual or business. They believe that this approach could result in greater lending to borrowers who would potentially be denied by traditional financial institutions. At its heart, the paper is trying to wrestle with the dawning notion that the institutional banking system is facing a serious threat from tech companies like Google, Facebook, and Apple. The researchers identify two key areas in which this is true: Tech companies have greater access to soft-information, and messaging platforms can take the place of the physical locations that banks rely on for meeting with customers.

The concept of using your web history to inform credit ratings is framed around the notion that lenders rely on hard-data that might obscure the worthiness of a borrower or paint an unnecessarily dire picture during hard times. Citing soft-data points like “the type of browser and hardware used to access the internet, the history of online searches and purchases” that could be incorporated into evaluating a borrower, the researchers believe that when a lender has a more intimate relationship with the potential client’s history, they might be more willing to cut them some slack. […] But how would all this data be incorporated into credit ratings? Machine learning, of course. It’s black boxes all the way down. The researchers acknowledge that there will be privacy and policy concerns related to incorporating this kind of soft-data into credit analysis. And they do little to explain how this might work in practice.

How Big Tech created a data ‘treasure trove’ for police

When U.S. law enforcement officials need to cast a wide net for information, they’re increasingly turning to the vast digital ponds of personal data created by Big Tech companies via the devices and online services that have hooked billions of people around the world.

Data compiled by four of the biggest tech companies shows that law enforcement requests for user information — phone calls, emails, texts, photos, shopping histories, driving routes and more — have more than tripled in the U.S. since 2015. Police are also increasingly savvy about covering their tracks so as not to alert suspects of their interest.

That’s the backdrop for recent revelations that the Trump-era U.S. Justice Department sought data from Apple, Microsoft and Google about members of Congress, their aides and news reporters in leak investigations — then pursued court orders that blocked those companies from informing their targets.

In just the first half of 2020 — the most recent data available — Apple, Google, Facebook and Microsoft together fielded more than 112,000 data requests from local, state and federal officials. The companies agreed to hand over some data in 85% of those cases. Facebook, including its Instagram service, accounted for the largest number of disclosures.

Consider Newport, a coastal city of 24,000 residents that attracts a flood of summer tourists. Fewer than 100 officers patrol the city — but they make multiple requests a week for online data from tech companies.

That’s because most crimes — from larceny and financial scams to a recent fatal house party stabbing at a vacation rental booked online — can be at least partly traced on the internet. Tech providers, especially social media platforms, offer a “treasure trove of information” that can help solve them, said Lt. Robert Salter, a supervising police detective in Newport.

TikTok sued for billions over use of children’s data

Lawyers will allege that TikTok takes children’s personal information, including phone numbers, videos, exact location and biometric data, without sufficient warning, transparency or the necessary consent required by law, and without children or parents knowing what is being done with that information. TikTok has more than 800 million users worldwide and parent firm ByteDance made billions in profits last year, with the vast majority of that coming via advertising revenue.

US Intelligence may partner with private firms to monitor “extremist chatter” online

The Biden administration is considering using outside firms to track extremist chatter by Americans online, an effort that would expand the government’s ability to gather intelligence but could draw criticism over surveillance of US citizens. The Department of Homeland Security is limited in how it can monitor citizens online without justification and is banned from activities like assuming false identities to gain access to private messaging apps used by extremist groups such as the Proud Boys or Oath Keepers. Instead, federal authorities can only browse through unprotected information on social media sites like Twitter and Facebook and other open online platforms. A source familiar with the effort said it is not about decrypting data but rather about using outside entities that can legally access these private groups to gather large amounts of information that could help DHS identify key narratives as they emerge. The plan being discussed inside DHS, according to multiple sources, would, in effect, allow the department to circumvent those limits.

Even as the DHS eyes a more robust use of its intelligence authorities, it continues to face fierce scrutiny on Capitol Hill over its handling of the Portland protests last summer — raising the possibility that at least some lawmakers will push back on the effort. The department — then led by Trump appointees but staffed by career officials, some of whom remain on the job — collected and disseminated open source reports on U.S. journalists who were publicly reporting on the protests.

Amazon Delivery Drivers Forced to Sign ‘Biometric Consent’ Form or Lose Job

Amazon delivery drivers nationwide have to sign a “biometric consent” form this week that grants the tech behemoth permission to use AI-powered cameras to access drivers’ location, movement, and biometric data. If the company’s delivery drivers, who number around 75,000 in the United States, refuse to sign these forms, they lose their jobs. The form requires drivers to agree to facial recognition and other biometric data collection within the trucks they drive.

“Amazon may… use certain Technology that processes Biometric Information, including on-board safety camera technology which collects your photograph for the purposes of confirming your identity and connecting you to your driver account,” the form reads. “Using your photograph, this Technology, may create Biometric Information, and collect, store, and use Biometric Information from such photographs.”

It adds that “this Technology tracks vehicle location and movement, including miles driven, speed, acceleration, braking, turns, and following distance …as a condition of delivery packages for Amazon, you consent to the use of Technology.”

Intelligence Analysts Use US Smartphone Location Data Without Warrants, Memo Says

A military arm of the intelligence community buys commercially available databases containing location data from smartphone apps and searches it for Americans’ past movements without a warrant, according to an unclassified memo obtained by The New York Times. Defense Intelligence Agency analysts have searched for the movements of Americans within a commercial database in five investigations over the past two and a half years, agency officials disclosed in a memo they wrote for Senator Ron Wyden, Democrat of Oregon.

The disclosure sheds light on an emerging loophole in privacy law during the digital age: In a landmark 2018 ruling known as the Carpenter decision, the Supreme Court held that the Constitution requires the government to obtain a warrant to compel phone companies to turn over location data about their customers. But the government can instead buy similar data from a broker — and does not believe it needs a warrant to do so. “D.I.A. does not construe the Carpenter decision to require a judicial warrant endorsing purchase or use of commercially available data for intelligence purposes,” the agency memo said.

Mr. Wyden has made clear that he intends to propose legislation to add safeguards for Americans’ privacy in connection with commercially available location data. In a Senate speech this week, he denounced circumstances “in which the government, instead of getting an order, just goes out and purchases the private records of Americans from these sleazy and unregulated commercial data brokers who are simply above the law.” He called the practice unacceptable and an intrusion on constitutional privacy rights. “The Fourth Amendment is not for sale,” he said.

NSO Used Real People’s Location Data To Pitch Its Contact-Tracing Tech

NSO, a private intelligence company best known for developing and selling governments access to its Pegasus spyware, pitched its contact-tracing system earlier this year, dubbed Fleming, aimed at helping governments track the spread of COVID-19. Fleming is designed to allow governments to feed location data from cell phone companies to visualize and track the spread of the virus. NSO gave several news outlets each a demo of Fleming, which NSO says helps governments make public health decisions “without compromising individual privacy.” But in May, a security researcher told TechCrunch that he found an exposed database storing thousands of location data points used by NSO to demonstrate how Fleming works — the same demo seen by reporters weeks earlier. TechCrunch reported the apparent security lapse to NSO, which quickly secured the database, but said that the location data was “not based on real and genuine data.” NSO’s claim that the location data wasn’t real differed from reports in Israeli media, which said NSO had used phone location data obtained from advertising platforms, known as data brokers, to “train” the system. Academic and privacy expert Tehilla Shwartz Altshuler, who was also given a demo of Fleming, said NSO told her that the data was obtained from data brokers, which sell access to vast troves of aggregate location data collected from the apps installed on millions of phones.

NSO is currently embroiled in a lawsuit with Facebook-owned WhatsApp, which last year blamed NSO for exploiting an undisclosed vulnerability in WhatsApp to infect some 1,400 phones with Pegasus, including journalists and human rights defenders. NSO says it should be afforded legal immunity because it acts on behalf of governments.

Dozens of Journalists’ iPhones Hacked With NSO ‘Zero-Click’ Spyware, Says Citizen Lab

For more than a year, London-based reporter Rania Dridi and at least 36 journalists, producers and executives working for the Al Jazeera news agency were targeted with a so-called “zero-click” attack that exploited a now-fixed vulnerability in Apple’s iMessage. The attack invisibly compromised the devices without having to trick the victims into opening a malicious link. Citizen Lab, the internet watchdog at the University of Toronto, was asked to investigate earlier this year after one of the victims, Al Jazeera investigative journalist Tamer Almisshal, suspected that his phone may have been hacked. In a technical report out Sunday and shared with TechCrunch, the researchers say they believe the journalists’ iPhones were infected with the Pegasus spyware, developed by Israel-based NSO Group. The researchers analyzed Almisshal’s iPhone and found that between July and August it had connected to servers known to be used by NSO for delivering the Pegasus spyware. The device revealed a burst of network activity that suggests that the spyware may have been delivered silently over iMessage. Logs from the phone show that the spyware was likely able to secretly record the microphone and phone calls, take photos using the phone’s camera, access the victim’s passwords, and track the phone’s location.

Surveillance Compounded: Real-Time Crime Centers in the United States

Over the last two decades, law enforcement agencies across the United States have been obtaining more and more sophisticated surveillance technologies to collect data. Technologies such as networked cameras, automated license plate readers, and gunshot detection are deployed around the clock, as are the tools to process this data, such as predictive policing software and AI-enhanced video analytics. The last five years have seen a distinct trend in which police have begun deploying all of this technology in conjunction with one another. The technologies, working in concert, are being consolidated and fed into physical locations called Real-Time Crime Centers (RTCCs). These high-tech hubs, filled with walls of TV monitors and computer workstations for sworn officers and civilian analysts, not only exploit huge amounts of data, but also are used to justify an increase in surveillance technology through new “data-driven” or “intelligence-led” policing strategies.

As part of the Atlas of Surveillance project, the Electronic Frontier Foundation and students from the Reynolds School of Journalism at the University of Nevada, Reno have identified more than 80 RTCCs across the United States, with heavy concentrations in the South and the Northeast. In this report, we highlight the capabilities and controversies surrounding 7 of these facilities. As this trend expands, it is crucial that the public understands how the technologies are combined to collect data about people as they move through their day-to-day lives.

What Modern Video Surveillance Looks Like

A few years ago, when you saw a security camera, you may have thought that the video feed went to a VCR somewhere in a back office that could only be accessed when a crime occurred. Or maybe you imagined a sleepy guard who only paid half-attention, and only when they discovered a crime in progress. In the age of internet connectivity, it’s easy to imagine footage sitting on a server somewhere, with any image inaccessible except to someone willing to fast-forward through hundreds of hours of footage.

That may be how it worked in 1990s heist movies, and it may be how a homeowner still sorts through their own home security camera footage. But that’s not how cameras operate in today’s security environment. Instead, advanced algorithms are watching every frame on every camera and documenting every person, animal, vehicle, and backpack as they move through physical space, and thus camera to camera, over an extended period of time.

US Used Patriot Act To Gather Logs of Website Visitors

The government has interpreted a high-profile provision of the Patriot Act as empowering F.B.I. national security investigators to collect logs showing who has visited particular web pages, documents show. But the government stops short of using that law to collect the keywords people submit to internet search engines because it considers such terms to be content that requires a warrant to gather, according to letters produced by the Office of the Director of National Intelligence. The disclosures come at a time when Congress is struggling with new proposals to limit the law, known as Section 215 of the Patriot Act. The debate ran aground in the spring amid erratic messages from President Trump, but is expected to resume after President-elect Joseph R. Biden Jr. takes the oath of office in January.

In May, 59 senators voted to bar the use of Section 215 to collect internet search terms or web browsing activity, but negotiations broke down in the House. During that period, Senator Ron Wyden, Democrat of Oregon and one of the sponsors of the proposed ban, wrote to the director of national intelligence seeking clarity about any such use. Six months later, the Trump administration finally replied — initially, it turned out, in a misleading way. In a Nov. 6 letter to Mr. Wyden, John Ratcliffe, the intelligence director, wrote that Section 215 was not used to gather internet search terms, and that none of the 61 orders issued last year under that law by the Foreign Intelligence Surveillance Court involved collection of “web browsing” records. Mr. Wyden’s office provided that letter to The New York Times, arguing that it meant Mr. Wyden’s proposal in May — which he sponsored with Senator Steve Daines, Republican of Montana — could be enacted into law without any operational costs.

But The Times pressed Mr. Ratcliffe’s office and the F.B.I. to clarify whether it was defining “web browsing” activity to encompass logging all visitors to a particular website, in addition to a particular person’s browsing among different sites. The next day, the Justice Department sent a clarification to Mr. Ratcliffe’s office, according to a follow-up letter he sent to Mr. Wyden on Nov. 25. In fact, “one of those 61 orders resulted in the production of information that could be characterized as information regarding browsing,” Mr. Ratcliffe wrote in the second letter. Specifically, one order had approved collection of logs revealing which computers “in a specified foreign country” had visited “a single, identified U.S. web page.” Mr. Ratcliffe expressed regret “that this additional information was not included in my earlier letter” to the senator, and suggested his staff might take further “corrective action.” In a statement, Mr. Wyden said the letters raise “all kinds of new questions, including whether, in this particular case, the government has taken steps to avoid collecting Americans’ web browsing information.” “More generally,” Mr. Wyden continued, “the D.N.I. has provided no guarantee that the government wouldn’t use the Patriot Act to intentionally collect Americans’ web browsing information in the future, which is why Congress must pass the warrant requirement that has already received support from a bipartisan majority in the Senate.”

Google Illegally Spied On Workers Before Firing Them, US Labor Board Alleges

Google violated US labor laws by spying on workers who were organizing employee protests, then firing two of them, according to a complaint to be filed by the National Labor Relations Board (NLRB) today. The complaint names two employees, Laurence Berland and Kathryn Spiers, both of whom were fired by the company in late 2019 in connection with employee activism. Berland was organizing against Google’s decision to work with IRI Consultants, a firm widely known for its anti-union efforts, when he was let go for reviewing other employees’ calendars. Now, the NLRB has found Google’s policy against employees looking at certain coworkers’ calendars is unlawful. “Google’s hiring of IRI is an unambiguous declaration that management will no longer tolerate worker organizing,” Berland said in a statement. “Management and their union busting cronies wanted to send that message, and the NLRB is now sending their own message: worker organizing is protected by law.”

Spiers was fired after she created a pop-up for Google employees visiting the IRI Consultants website. “Googlers have the right to participate in protected concerted activities,” the notification read, according to The Guardian. The company said Spiers had violated security policies, a statement that hurt her reputation in the tech community. Now, the NLRB has found the firing was unlawful. “This week the NLRB issued a complaint on my behalf. They found that I was illegally terminated for trying to help my colleagues,” Spiers said. “Colleagues and strangers believe I abused my role because of lies told by Google management while they were retaliating against me. The NLRB can order Google to reinstate me, but it cannot reverse the harm done to my credibility.”

Microsoft Also Patented Tech to Score Meetings Using Filmed Body Language, Facial Expressions

Newly surfaced Microsoft patent filings describe a system for deriving and predicting “overall quality scores” for meetings using data such as body language, facial expressions, room temperature, time of day, and number of people in the meeting. The system uses cameras, sensors, and software tools to determine, for example, “how much a participant contributes to a meeting vs performing other tasks (e.g., texting, checking email, browsing the Internet).”

The “meeting insight computing system” would then predict the likelihood that a group will hold a high-quality meeting. It would flag potential challenges when an organizer is setting the meeting up, and recommend alternative venues, times, or people to include in the meeting, for example… A patent application made public Nov. 12 notes, “many organizations are plagued by overly long, poorly attended, and recurring meetings that could be modified and/or avoided if more information regarding meeting quality was available.” The approach would apply to in-person and virtual meetings, and hybrids of the two…

The filings do not detail any potential privacy safeguards. A Microsoft spokesperson declined to comment on the patent filings in response to GeekWire’s inquiry. To be sure, patents are not products, and there’s no sign yet that Microsoft plans to roll out this hypothetical system. Microsoft has established an internal artificial intelligence ethics office and a companywide committee to ensure that its AI products live by its principles of responsible AI, including transparency and privacy. However, the filings are a window into the ideas floating around inside Microsoft, and they’re consistent with the direction the company is already heading.

Demand For Employee Surveillance Increased As Workers Transitioned To Home Working

A new study shows that demand for employee surveillance software was up 55% in June 2020 compared to the pre-pandemic average. From webcam access to random screenshot monitoring, these surveillance software products can record almost everything an employee does on their computer. VPN review website Top10VPN used its global monitoring data to analyze over 200 terms related to employee surveillance software. It took into account both generic and brand-specific queries for its study, which compared searches during March-May 2020 with internet searches in the preceding year. Global demand for employee monitoring software increased by 108% in April and 70% in May 2020 compared with searches carried out the preceding year. Queries for “How to monitor employees working from home” increased by 1,705% in April and 652% in May 2020 compared with searches carried out the preceding year.

The surge in popularity of such an open-ended phrase reveals how unprepared many companies were for the abrupt shift to mass home-working. The most popular surveillance tools are Time Doctor, Hubstaff, and FlexiSPY. The tools with the biggest increase in demand include Teramind, DeskTime, Kickidler, and Time Doctor, with interest in the latter tripling compared to pre-pandemic levels. The top three tools account for almost 60% of global demand for surveillance software because of the range of features offered. The radical shift away from office-working has clearly made employers nervous about a reduction in productivity and its potential impact on their business. Greater surveillance, however, may actually reduce long-term productivity. Your boss watching your every move may make you less productive in the long run and could significantly impact your feelings about the company itself.

LidarPhone Attack Converts Smart Vacuums Into Microphones

A team of academics this week detailed novel research that converted a smart vacuum cleaner into a microphone capable of recording nearby conversations. Named LidarPhone, the technique works by taking the vacuum’s built-in LiDAR laser-based navigational component and converting it into a laser microphone. […] They tested the LidarPhone attack with various objects, by varying the distance between the robot and the object, and the distance between the sound origin and the object. Tests focused on recovering numerical values, which the research team said they managed to recover with 90% accuracy. But academics said the technique could also be used to identify speakers based on gender or even determine their political orientation from the music played during news shows, captured by the vacuum’s LiDAR.
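
To give a sense of the signal processing involved, here is a minimal, purely illustrative Python sketch of the general laser-microphone idea: sound vibrating a nearby object subtly modulates the intensity of the light reflected back to the lidar, so a stream of intensity readings can be band-pass filtered and treated as a very noisy audio signal. This is not the researchers’ code; the sampling rate, filter band, and data source below are assumptions made for illustration only.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.io import wavfile

SAMPLE_RATE_HZ = 1_900  # hypothetical rate at which intensity readings arrive

def intensity_to_audio(intensity: np.ndarray, rate: int = SAMPLE_RATE_HZ) -> np.ndarray:
    """Band-pass the raw reflected-intensity stream to a range where speech
    energy can survive the low sampling rate, then normalize to [-1, 1]."""
    signal = intensity - np.mean(intensity)              # remove the DC offset
    nyquist = rate / 2
    b, a = butter(4, [100 / nyquist, 900 / nyquist], btype="band")
    filtered = filtfilt(b, a, signal)
    return filtered / (np.max(np.abs(filtered)) + 1e-9)

# Hypothetical usage: `readings` would come from the repurposed lidar sensor.
# Random stand-in data is used here so the sketch runs on its own.
readings = np.random.default_rng(0).normal(size=SAMPLE_RATE_HZ * 5)
audio = intensity_to_audio(readings)
wavfile.write("recovered.wav", SAMPLE_RATE_HZ, (audio * 32767).astype(np.int16))
```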

But while the LidarPhone attack sounds like a gross invasion of privacy, users need not panic for the time being. This type of attack depends on so many prerequisites that most attackers won’t bother with it. There are far easier ways of spying on users than overwriting a vacuum’s firmware to control its laser navigation system, such as tricking the user into installing malware on their phone. The LidarPhone attack is merely novel academic research that can be used to bolster the security and design of future smart vacuum robots. In fact, the research team’s main recommended countermeasure for smart vacuum cleaning robot makers is to shut down the LiDAR component if it’s not rotating. Additional details about the research are available in a research paper titled “Spying with Your Robot Vacuum Cleaner: Eavesdropping via Lidar Sensors.”

Cheating-Detection Software Provokes ‘School-Surveillance Revolt’

New webcam-based anti-cheating monitoring is so stressful, it’s made some students cry, the Washington Post reports.

“Online proctoring” companies saw in coronavirus shutdowns a chance to capitalize on a major reshaping of education, selling schools a high-tech blend of webcam-watching workers and eye-tracking software designed to catch students cheating on their exams. They’ve taken in millions of dollars, some of it public money, from thousands of colleges in recent months. But they’ve also sparked a nationwide school-surveillance revolt, with students staging protests and adopting creative tactics to push campus administrators to reconsider the deals. Students argue that the testing systems have made them afraid to click too much or rest their eyes for fear they’ll be branded as cheats…

One system, Proctorio, uses gaze-detection, face-detection and computer-monitoring software to flag students for any “abnormal” head movement, mouse movement, eye wandering, computer window resizing, tab opening, scrolling, clicking, typing, and copies and pastes. A student can be flagged for finishing the test too quickly, or too slowly, clicking too much, or not enough. If the camera sees someone else in the background, a student can be flagged for having “multiple faces detected.” If someone else takes the test on the same network — say, in a dorm building — it’s potential “exam collusion.” Room too noisy, Internet too spotty, camera on the fritz? Flag, flag, flag.
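
As a purely hypothetical sketch of how blunt this kind of threshold-based flagging can be, the snippet below invents its own field names and cutoffs (none of them Proctorio’s actual rules) to show how easily ordinary behavior trips a flag:

```python
from dataclasses import dataclass, field

@dataclass
class ExamSession:
    duration_min: float          # how long the student took
    head_movements: int          # count of "look away" events
    other_faces_detected: int    # anyone else seen by the webcam
    tab_switches: int            # browser windows/tabs opened
    flags: list = field(default_factory=list)

def flag_session(s: ExamSession, expected_min: float = 60.0) -> list:
    """Crude rule-based flagging: every threshold here is invented."""
    if s.duration_min < 0.5 * expected_min:
        s.flags.append("finished too quickly")
    if s.duration_min > 1.5 * expected_min:
        s.flags.append("finished too slowly")
    if s.head_movements > 30:
        s.flags.append("abnormal head movement")
    if s.other_faces_detected > 0:
        s.flags.append("multiple faces detected")
    if s.tab_switches > 0:
        s.flags.append("window/tab activity")
    return s.flags

# A fast test-taker who glances away often and whose sibling walks past the camera:
print(flag_session(ExamSession(duration_min=25, head_movements=42,
                               other_faces_detected=1, tab_switches=0)))
```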

As an unusually disrupted fall semester churns toward finals, this student rebellion has erupted into online war, with lawsuits, takedowns and viral brawls further shaking the anxiety-inducing backdrop of college exams. Some students have even tried to take the software down from the inside, digging through the code for details on how it monitors millions of high-stakes exams… Some students said the experience of having strangers and algorithms silently judge their movements was deeply unnerving, and many worried that even being accused of cheating could endanger their chances at good grades, scholarships, internships and post-graduation careers. Several students said they had hoped for freeing, friend-filled college years but were now resigned to hours of monitored video exams in their childhood bedrooms, with no clear end in sight…

[T]he systems’ technical demands have made just taking the tests almost comically complicated. One student at Wilfrid Laurier University in Ontario shared the instructions for his online Introduction to Linear Algebra midterm: five pages, totaling more than 2,000 words, requiring students to use a special activity-monitoring Web browser and keep their face, hands and desk in view of their camera at all times…

Students who break the rules or face technical difficulties can be investigated for academic misconduct. “The instructions,” the student said, “are giving me more anxiety than the test itself.”

Company executives “say a semester without proctors would turn online testing into a lawless wasteland,” according to the article. But one long-time teacher counters that “the most clear value conveyed to students is ‘We don’t trust you.’”

Yet the education tech nonprofit Educause reported that 54% of the higher education institutions it surveyed “are currently using online or remote proctoring services,” and another 23% are planning or considering using them.

Police Will Pilot a Program to Live-Stream Amazon Ring Cameras

The police surveillance center in Jackson, Mississippi, will be conducting a 45-day pilot program to live stream the Amazon Ring cameras of participating residents.

While people buy Ring cameras and put them on their front door to keep their packages safe, police use them to build comprehensive CCTV camera networks blanketing whole neighborhoods. This serves two police purposes. First, it allows police departments to avoid the cost of buying surveillance equipment and to put that burden onto consumers by convincing them they need cameras to keep their property safe. Second, it evades the natural reaction of fear and distrust that many people would have if they learned police were putting up dozens of cameras on their block, one for every house.

Now, our worst fears have been confirmed. Police in Jackson, Mississippi, have started a pilot program that would allow Ring owners to patch the camera streams from their front doors directly to a police Real Time Crime Center. The footage from your front door includes you coming and going from your house, your neighbors taking out the trash, and the dog walkers and delivery people who do their jobs in your street. In Jackson, this footage can now be live streamed directly onto a dozen monitors scrutinized by police around the clock. Even if you refuse to allow your footage to be used that way, your neighbor’s camera pointed at your house may still be transmitting directly to the police.

Google is Giving Data To Police Based on Search Keywords, Court Docs Show

There are few things as revealing as a person’s search history, and police typically need a warrant on a known suspect to demand that sensitive information. But a recently unsealed court document found that investigators can request such data in reverse order by asking Google to disclose everyone who searched a keyword rather than for information on a known suspect.

In August, police arrested Michael Williams, an associate of singer and accused sex offender R. Kelly, for allegedly setting fire to a witness’ car in Florida. Investigators linked Williams to the arson, as well as witness tampering, after sending a search warrant to Google that requested information on “users who had searched the address of the residence close in time to the arson.”

The July court filing was unsealed on Tuesday. Detroit News reporter Robert Snell tweeted about the filing after it was unsealed. Court documents showed that Google provided the IP addresses of people who searched for the arson victim’s address, which investigators tied to a phone number belonging to Williams. Police then used the phone number records to pinpoint the location of Williams’ device near the arson, according to court documents. The original warrant sent to Google is still sealed, but the report provides another example of a growing trend of data requests to the search engine giant in which investigators demand data on a large group of users rather than a specific request on a single suspect. “This ‘keyword warrant’ evades the Fourth Amendment checks on police surveillance,” said Albert Fox Cahn, the executive director of the Surveillance Technology Oversight Project. “When a court authorizes a data dump of every person who searched for a specific term or address, it’s likely unconstitutional.”

Police Are Using Facial Recognition For Minor Crimes, ‘Because They Can’

In a recent court filing, the New York police department noted that it’s turned to facial recognition in more than 22,000 cases in the last three years. “Even though the NYPD claims facial recognition is only used for serious crimes, the numbers tell a different story,” said Albert Fox Cahn, the executive director of the Surveillance Technology Oversight Project. “As facial recognition continues to grow, it’s being routinely deployed for everything from shoplifting to graffiti.”

Asked for comment, an NYPD spokeswoman pointed to a 2019 opinion article by police commissioner James O’Neill titled “How Facial Recognition Makes You Safer.” In the piece, O’Neill talked about how facial recognition had been used to make arrests in murder, robbery and rape cases, but he didn’t disclose how often it was used for low-level crimes. The department’s facial recognition policy, established in March, allows the technology to be used for any crime, no matter the severity. Without any limits, police have more frequently used the technology for petty thefts than for dangerous crimes, privacy advocates say. Before Amazon put a moratorium on police use of its Rekognition face-identifying software, the program was used in a $12 shoplifting case in Oregon in 2018…

Without any limits, police can use facial recognition however they please, and in many cases, arrested suspects don’t even know that the flawed technology was used… Attorneys representing protesters in Miami didn’t know that police used facial recognition in their arrests, according to an NBC Miami report. Police used facial recognition software in a $50 drug dealing case in Florida in 2016 but made no mention of it in the arrest report.

The article also notes that as recently as this Tuesday, Hoan Ton-That, the CEO of facial recognition startup Clearview AI, “said it isn’t the company’s responsibility to make sure its technology is being properly used by its thousands of police partners. Though the company has its own guidelines, Ton-That said Clearview AI wouldn’t be enforcing them, saying that ‘it’s not our job to set the policy as a tech company…’”
