Resources

Intelligence Analysts Use US Smartphone Location Data Without Warrants, Memo Says

A military arm of the intelligence community buys commercially available databases containing location data from smartphone apps and searches them for Americans’ past movements without a warrant, according to an unclassified memo obtained by The New York Times. Defense Intelligence Agency analysts have searched for the movements of Americans within a commercial database in five investigations over the past two and a half years, agency officials disclosed in a memo they wrote for Senator Ron Wyden, Democrat of Oregon.

The disclosure sheds light on an emerging loophole in privacy law during the digital age: In a landmark 2018 ruling known as the Carpenter decision, the Supreme Court held that the Constitution requires the government to obtain a warrant to compel phone companies to turn over location data about their customers. But the government can instead buy similar data from a broker — and does not believe it needs a warrant to do so. “D.I.A. does not construe the Carpenter decision to require a judicial warrant endorsing purchase or use of commercially available data for intelligence purposes,” the agency memo said.

Mr. Wyden has made clear that he intends to propose legislation to add safeguards for Americans’ privacy in connection with commercially available location data. In a Senate speech this week, he denounced circumstances “in which the government, instead of getting an order, just goes out and purchases the private records of Americans from these sleazy and unregulated commercial data brokers who are simply above the law.” He called the practice unacceptable and an intrusion on constitutional privacy rights. “The Fourth Amendment is not for sale,” he said.

NSO Used Real People’s Location Data To Pitch Its Contact-Tracing Tech

NSO, a private intelligence company best known for developing and selling governments access to its Pegasus spyware, earlier this year pitched a contact-tracing system, dubbed Fleming, aimed at helping governments track the spread of COVID-19. Fleming is designed to let governments feed in location data from cell phone companies to visualize and track the spread of the virus. NSO gave several news outlets each a demo of Fleming, which NSO says helps governments make public health decisions “without compromising individual privacy.”

But in May, a security researcher told TechCrunch that he had found an exposed database storing thousands of location data points used by NSO to demonstrate how Fleming works — the same demo seen by reporters weeks earlier. TechCrunch reported the apparent security lapse to NSO, which quickly secured the database but said that the location data was “not based on real and genuine data.” NSO’s claim that the location data wasn’t real differed from reports in Israeli media, which said NSO had used phone location data obtained from advertising platforms, known as data brokers, to “train” the system. Academic and privacy expert Tehilla Shwartz Altshuler, who was also given a demo of Fleming, said NSO told her that the data was obtained from data brokers, which sell access to vast troves of aggregate location data collected from the apps installed on millions of phones.

NSO is currently embroiled in a lawsuit with Facebook-owned WhatsApp, which last year blamed NSO for exploiting an undisclosed vulnerability in WhatsApp to infect some 1,400 phones with Pegasus, including journalists and human rights defenders. NSO says it should be afforded legal immunity because it acts on behalf of governments.

Dozens of Journalists’ iPhones Hacked With NSO ‘Zero-Click’ Spyware, Says Citizen Lab

For more than a year, London-based reporter Rania Dridi and at least 36 journalists, producers and executives working for the Al Jazeera news agency were targeted with a so-called “zero-click” attack that exploited a now-fixed vulnerability in Apple’s iMessage. The attack invisibly compromised the devices without having to trick the victims into opening a malicious link. Citizen Lab, the internet watchdog at the University of Toronto, was asked to investigate earlier this year after one of the victims, Al Jazeera investigative journalist Tamer Almisshal, suspected that his phone may have been hacked. In a technical report out Sunday and shared with TechCrunch, the researchers say they believe the journalists’ iPhones were infected with the Pegasus spyware, developed by Israel-based NSO Group. The researchers analyzed Almisshal’s iPhone and found that between July and August it had connected to servers known to be used by NSO for delivering the Pegasus spyware. The device showed a burst of network activity suggesting that the spyware may have been delivered silently over iMessage. Logs from the phone show that the spyware was likely able to secretly record the microphone and phone calls, take photos using the phone’s camera, access the victim’s passwords, and track the phone’s location.

Surveillance Compounded: Real-Time Crime Centers in the United States

Over the last two decades, law enforcement agencies across the United States have been obtaining more and more sophisticated surveillance technologies to collect data. Technologies such as networked cameras, automated license plate readers, and gunshot detection are deployed around the clock, as are the tools to process this data, such as predictive policing software and AI-enhanced video analytics. The last five years have seen a distinct trend in which police have begun deploying all of this technology in conjunction with one another. The technologies, working in concert, are being consolidated and fed into physical locations called Real-Time Crime Centers (RTCCs). These high-tech hubs, filled with walls of TV monitors and computer workstations for sworn officers and civilian analysts, not only exploit huge amounts of data, but also are used to justify an increase in surveillance technology through new “data-driven” or “intelligence-led” policing strategies.

As part of the Atlas of Surveillance project, the Electronic Frontier Foundation and students from the Reynolds School of Journalism at the University of Nevada, Reno have identified more than 80 RTCCs across the United States, with heavy concentrations in the South and the Northeast. In this report, we highlight the capabilities and controversies surrounding seven of these facilities. As this trend expands, it is crucial that the public understands how these technologies are combined to collect data about people as they move through their day-to-day lives.

What Modern Video Surveillance Looks Like

A few years ago, when you saw a security camera, you may have thought that the video feed went to a VCR somewhere in a back office that could only be accessed when a crime occurred. Or maybe you imagined a sleepy guard who paid only half-attention, and only when they discovered a crime in progress. In the age of internet connectivity, it’s now easy to imagine footage sitting on a server somewhere, inaccessible except to someone willing to fast-forward through hundreds of hours of footage.

That may be how it worked in 1990s heist movies, and it may be how a homeowner still sorts through their own home security camera footage. But that’s not how cameras operate in today’s security environment. Instead, advanced algorithms are watching every frame on every camera and documenting every person, animal, vehicle, and backpack as they move through physical space, and thus camera to camera, over an extended period of time.
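
The camera-to-camera tracking described above can be sketched minimally. This is an illustrative toy, not any vendor's actual system: the class names, the similarity threshold, and the tiny hand-made "embeddings" are all invented for the example, and real deployments use deep-learning detectors and re-identification models.

```python
from dataclasses import dataclass
import math

@dataclass
class Detection:
    camera_id: str
    timestamp: float   # seconds since some epoch
    label: str         # "person", "vehicle", "backpack", ...
    embedding: tuple   # appearance feature vector from a re-ID model

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def link_tracks(detections, threshold=0.9):
    """Greedily group detections from different cameras into cross-camera
    tracks by appearance similarity (a stand-in for a real re-ID matcher)."""
    tracks = []  # each track is a time-ordered list of detections
    for det in sorted(detections, key=lambda d: d.timestamp):
        for track in tracks:
            if (det.label == track[-1].label
                    and cosine_similarity(det.embedding, track[-1].embedding) >= threshold):
                track.append(det)
                break
        else:
            tracks.append([det])
    return tracks

# The same person (near-identical appearance vector) seen on two different
# cameras is linked into one track; the vehicle stays separate.
dets = [
    Detection("cam-01", 0.0, "person", (0.9, 0.1, 0.2)),
    Detection("cam-02", 30.0, "person", (0.88, 0.12, 0.21)),
    Detection("cam-01", 5.0, "vehicle", (0.1, 0.9, 0.3)),
]
tracks = link_tracks(dets)
```

Linking the two "person" detections into a single cross-camera track is the basic mechanism that lets footage follow one individual through physical space over an extended period.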

US Used Patriot Act To Gather Logs of Website Visitors

The government has interpreted a high-profile provision of the Patriot Act as empowering F.B.I. national security investigators to collect logs showing who has visited particular web pages, documents show. But the government stops short of using that law to collect the keywords people submit to internet search engines because it considers such terms to be content that requires a warrant to gather, according to letters produced by the Office of the Director of National Intelligence. The disclosures come at a time when Congress is struggling with new proposals to limit the law, known as Section 215 of the Patriot Act. The debate ran aground in the spring amid erratic messages from President Trump, but is expected to resume after President-elect Joseph R. Biden Jr. takes the oath of office in January.

In May, 59 senators voted to bar the use of Section 215 to collect internet search terms or web browsing activity, but negotiations broke down in the House. During that period, Senator Ron Wyden, Democrat of Oregon and one of the sponsors of the proposed ban, wrote to the director of national intelligence seeking clarity about any such use. Six months later, the Trump administration finally replied — initially, it turned out, in a misleading way. In a Nov. 6 letter to Mr. Wyden, John Ratcliffe, the intelligence director, wrote that Section 215 was not used to gather internet search terms, and that none of the 61 orders issued last year under that law by the Foreign Intelligence Surveillance Court involved collection of “web browsing” records. Mr. Wyden’s office provided that letter to The New York Times, arguing that it meant Mr. Wyden’s proposal in May — which he sponsored with Senator Steve Daines, Republican of Montana — could be enacted into law without any operational costs.

But The Times pressed Mr. Ratcliffe’s office and the F.B.I. to clarify whether it was defining “web browsing” activity to encompass logging all visitors to a particular website, in addition to a particular person’s browsing among different sites. The next day, the Justice Department sent a clarification to Mr. Ratcliffe’s office, according to a follow-up letter he sent to Mr. Wyden on Nov. 25. In fact, “one of those 61 orders resulted in the production of information that could be characterized as information regarding browsing,” Mr. Ratcliffe wrote in the second letter. Specifically, one order had approved collection of logs revealing which computers “in a specified foreign country” had visited “a single, identified U.S. web page.” Mr. Ratcliffe expressed regret “that this additional information was not included in my earlier letter” to the senator, and suggested his staff might take further “corrective action.” In a statement, Mr. Wyden said the letters raise “all kinds of new questions, including whether, in this particular case, the government has taken steps to avoid collecting Americans’ web browsing information.” “More generally,” Mr. Wyden continued, “the D.N.I. has provided no guarantee that the government wouldn’t use the Patriot Act to intentionally collect Americans’ web browsing information in the future, which is why Congress must pass the warrant requirement that has already received support from a bipartisan majority in the Senate.”

Google Illegally Spied On Workers Before Firing Them, US Labor Board Alleges

Google violated US labor laws by spying on workers who were organizing employee protests, then firing two of them, according to a complaint to be filed by the National Labor Relations Board (NLRB) today. The complaint names two employees, Laurence Berland and Kathryn Spiers, both of whom were fired by the company in late 2019 in connection with employee activism. Berland was organizing against Google’s decision to work with IRI Consultants, a firm widely known for its anti-union efforts, when he was let go for reviewing other employees’ calendars. Now, the NLRB has found Google’s policy against employees looking at certain coworkers’ calendars is unlawful. “Google’s hiring of IRI is an unambiguous declaration that management will no longer tolerate worker organizing,” Berland said in a statement. “Management and their union busting cronies wanted to send that message, and the NLRB is now sending their own message: worker organizing is protected by law.”

Spiers was fired after she created a pop-up for Google employees visiting the IRI Consultants website. “Googlers have the right to participate in protected concerted activities,” the notification read, according to The Guardian. The company said Spiers had violated security policies, a statement that hurt her reputation in the tech community. Now, the NLRB has found the firing was unlawful. “This week the NLRB issued a complaint on my behalf. They found that I was illegally terminated for trying to help my colleagues,” Spiers said. “Colleagues and strangers believe I abused my role because of lies told by Google management while they were retaliating against me. The NLRB can order Google to reinstate me, but it cannot reverse the harm done to my credibility.”

Microsoft Also Patented Tech to Score Meetings Using Filmed Body Language, Facial Expressions

Newly surfaced Microsoft patent filings describe a system for deriving and predicting “overall quality scores” for meetings using data such as body language, facial expressions, room temperature, time of day, and number of people in the meeting. The system uses cameras, sensors, and software tools to determine, for example, “how much a participant contributes to a meeting vs performing other tasks (e.g., texting, checking email, browsing the Internet).”

The “meeting insight computing system” would then predict the likelihood that a group will hold a high-quality meeting. It would flag potential challenges when an organizer is setting the meeting up, and recommend alternative venues, times, or people to include in the meeting, for example… A patent application made public Nov. 12 notes, “many organizations are plagued by overly long, poorly attended, and recurring meetings that could be modified and/or avoided if more information regarding meeting quality was available.” The approach would apply to in-person and virtual meetings, and hybrids of the two…
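
As a rough illustration of what the patent describes, deriving an overall quality score from sensed features, here is a toy weighted-sum scorer. The feature names, weights, and numbers are all invented for the example; the filings do not disclose an actual model.

```python
# Hypothetical features, each normalized to the range 0..1, and
# hypothetical weights summing to 100. Invented for illustration only.
WEIGHTS = {
    "fraction_attention": 60.0,       # share of time participants are on-task
    "started_on_time": 15.0,          # 1.0 if the meeting began on schedule
    "comfortable_temperature": 10.0,  # room-sensor reading mapped to 0..1
    "right_sized_attendance": 15.0,   # 1.0 if the headcount fits the agenda
}

def meeting_quality_score(features):
    """Weighted sum of 0..1 features, yielding a 0-100 'quality score'."""
    return sum(WEIGHTS[name] * value for name, value in features.items())

meeting = {
    "fraction_attention": 0.5,      # half the room is texting or emailing
    "started_on_time": 1.0,
    "comfortable_temperature": 1.0,
    "right_sized_attendance": 0.0,  # far too many invitees
}
score = meeting_quality_score(meeting)  # 30 + 15 + 10 + 0 = 55
```

A system like this could then compare predicted scores across candidate times, venues, or invite lists, which is the kind of recommendation the filings describe.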

The filings do not detail any potential privacy safeguards. A Microsoft spokesperson declined to comment on the patent filings in response to GeekWire’s inquiry. To be sure, patents are not products, and there’s no sign yet that Microsoft plans to roll out this hypothetical system. Microsoft has established an internal artificial intelligence ethics office and a companywide committee to ensure that its AI products live by its principles of responsible AI, including transparency and privacy. However, the filings are a window into the ideas floating around inside Microsoft, and they’re consistent with the direction the company is already heading.

Demand For Employee Surveillance Increased As Workers Transitioned To Home Working

A new study shows that demand for employee surveillance software was up 55% in June 2020 compared to the pre-pandemic average. From webcam access to random screenshot monitoring, these surveillance products can record almost everything an employee does on their computer. VPN review website Top10VPN used its global monitoring data to analyze over 200 terms related to employee surveillance software, taking into account both generic and brand-specific queries and comparing searches during March–May 2020 with those carried out the preceding year. Global demand for employee monitoring software increased by 108% in April and 70% in May 2020 year-on-year. Queries for “how to monitor employees working from home” increased by 1,705% in April and 652% in May 2020 over the same period.

The surge in popularity of such an open-ended phrase reveals how unprepared many companies were for the abrupt shift to mass home-working. The most popular surveillance tools are Time Doctor, Hubstaff, and FlexiSPY. The tools with the biggest increase in demand include Teramind, DeskTime, Kickidler, and Time Doctor, with interest in the latter tripling compared to pre-pandemic levels. The top three tools account for almost 60% of global demand for surveillance software because of the range of features they offer. The radical shift away from office-working has clearly made employers nervous about a reduction in productivity and its potential impact on their business. Greater surveillance, however, may actually reduce long-term productivity: a boss watching your every move may make you less productive in the long run and could significantly affect how you feel about the company itself.

LidarPhone Attack Converts Smart Vacuums Into Microphones

A team of academics this week detailed novel research that converts a smart vacuum cleaner into a microphone capable of recording nearby conversations. Named LidarPhone, the technique works by taking the vacuum’s built-in LiDAR laser-based navigational component and converting it into a laser microphone. […] They tested the LidarPhone attack with various objects, varying both the distance between the robot and the object and the distance between the sound origin and the object. Tests focused on recovering numerical values, which the research team said they managed to recover with 90% accuracy. But the academics said the technique could also be used to identify speakers by gender or even determine their political orientation from the music played during news shows, captured by the vacuum’s LiDAR.

But while the LidarPhone attack sounds like a gross invasion of privacy, users need not panic for the time being. This type of attack depends on so many prerequisites that most attackers won’t bother with it. There are far easier ways of spying on users than overwriting a vacuum’s firmware to control its laser navigation system, such as tricking the user into installing malware on their phone. The LidarPhone attack is novel academic research that can be used to bolster the security and design of future smart vacuum robots. In fact, the research team’s main recommended countermeasure for smart vacuum makers is to shut down the LiDAR component when it is not rotating. Additional details are available in a research paper titled “Spying with Your Robot Vacuum Cleaner: Eavesdropping via Lidar Sensors.”
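
The core signal-processing idea, recovering a sound's frequency content from tiny fluctuations in reflected laser intensity, can be shown with a toy simulation. This is not the researchers' actual pipeline (which involves firmware modification, filtering, and trained classifiers); the sampling rate, amplitudes, and the naive DFT below are simplifications invented for the example.

```python
import math

SAMPLE_RATE = 1800  # Hz; a rough stand-in for an upsampled LiDAR reading rate
TONE_HZ = 240       # vibration frequency of the object near the sound source
N = 180             # a tenth of a second of intensity readings

# Simulate LiDAR intensity readings: a constant reflection level plus a
# tiny modulation caused by the reflecting object vibrating with nearby sound.
samples = [
    1000.0 + 2.0 * math.sin(2 * math.pi * TONE_HZ * t / SAMPLE_RATE)
    for t in range(N)
]

def dominant_frequency(signal, rate):
    """Naive DFT peak-pick: subtract the constant reflection level (DC),
    then return the frequency bin carrying the most energy."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [s - mean for s in signal]
    best_freq, best_power = 0.0, 0.0
    for k in range(1, n // 2):
        re = sum(centered[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(centered[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power = re * re + im * im
        if power > best_power:
            best_freq, best_power = k * rate / n, power
    return best_freq

recovered = dominant_frequency(samples, SAMPLE_RATE)
```

Here `recovered` comes back as 240 Hz, illustrating why a rotating rangefinder pointed at a vibrating surface can double as a crude microphone.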

Cheating-Detection Software Provokes ‘School-Surveillance Revolt’

New webcam-based anti-cheating monitoring is so stressful, it’s made some students cry, the Washington Post reports.

“Online proctoring” companies saw in coronavirus shutdowns a chance to capitalize on a major reshaping of education, selling schools a high-tech blend of webcam-watching workers and eye-tracking software designed to catch students cheating on their exams. They’ve taken in millions of dollars, some of it public money, from thousands of colleges in recent months. But they’ve also sparked a nationwide school-surveillance revolt, with students staging protests and adopting creative tactics to push campus administrators to reconsider the deals. Students argue that the testing systems have made them afraid to click too much or rest their eyes for fear they’ll be branded as cheats…

One system, Proctorio, uses gaze-detection, face-detection and computer-monitoring software to flag students for any “abnormal” head movement, mouse movement, eye wandering, computer window resizing, tab opening, scrolling, clicking, typing, and copies and pastes. A student can be flagged for finishing the test too quickly, or too slowly, clicking too much, or not enough. If the camera sees someone else in the background, a student can be flagged for having “multiple faces detected.” If someone else takes the test on the same network — say, in a dorm building — it’s potential “exam collusion.” Room too noisy, Internet too spotty, camera on the fritz? Flag, flag, flag.
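
The flag-everything behavior described above boils down to simple threshold rules. The sketch below is a hypothetical illustration of why ordinary test-taking trips flags; the rule names and thresholds are invented and do not reflect Proctorio's proprietary logic.

```python
# Invented thresholds, for illustration only.
RULES = {
    "too_fast":       lambda s: s["duration_min"] < 10,
    "too_slow":       lambda s: s["duration_min"] > 120,
    "excess_clicks":  lambda s: s["clicks"] > 500,
    "too_few_clicks": lambda s: s["clicks"] < 5,
    "multiple_faces": lambda s: s["faces_detected"] > 1,
    "window_resized": lambda s: s["window_resizes"] > 0,
}

def flag_session(session):
    """Return the name of every rule the exam session trips."""
    return [name for name, rule in RULES.items() if rule(session)]

# A student who finishes quickly, clicks sparingly, and has a sibling
# walk past the camera trips three flags despite doing nothing wrong.
session = {"duration_min": 8, "clicks": 3,
           "faces_detected": 2, "window_resizes": 0}
flags = flag_session(session)
```

Every flag then becomes something a proctor, or an instructor reviewing the report, may treat as evidence of cheating.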

As an unusually disrupted fall semester churns toward finals, this student rebellion has erupted into online war, with lawsuits, takedowns and viral brawls further shaking the anxiety-inducing backdrop of college exams. Some students have even tried to take the software down from the inside, digging through the code for details on how it monitors millions of high-stakes exams… Some students said the experience of having strangers and algorithms silently judge their movements was deeply unnerving, and many worried that even being accused of cheating could endanger their chances at good grades, scholarships, internships and post-graduation careers. Several students said they had hoped for freeing, friend-filled college years but were now resigned to hours of monitored video exams in their childhood bedrooms, with no clear end in sight….

[T]he systems’ technical demands have made just taking the tests almost comically complicated. One student at Wilfrid Laurier University in Ontario shared the instructions for his online Introduction to Linear Algebra midterm: five pages, totaling more than 2,000 words, requiring students to use a special activity-monitoring Web browser and keep their face, hands and desk in view of their camera at all times…

Students who break the rules or face technical difficulties can be investigated for academic misconduct. “The instructions,” the student said, “are giving me more anxiety than the test itself.”

Company executives “say a semester without proctors would turn online testing into a lawless wasteland” according to the article. But one long-time teacher counters that “the most clear value conveyed to students is ‘We don’t trust you.'”

Yet the education tech nonprofit Educause reported that 54% of higher education institutions they’d surveyed “are currently using online or remote proctoring services.

“And another 23% are planning or considering using them.”

Police Will Pilot a Program to Live-Stream Amazon Ring Cameras

The police surveillance center in Jackson, Mississippi, will be conducting a 45-day pilot program to live stream the Amazon Ring cameras of participating residents.

While people buy Ring cameras and put them on their front door to keep their packages safe, police use them to build comprehensive CCTV camera networks blanketing whole neighborhoods. This serves two police purposes. First, it allows police departments to avoid the cost of buying surveillance equipment and to put that burden onto consumers by convincing them they need cameras to keep their property safe. Second, it evades the natural reaction of fear and distrust that many people would have if they learned police were putting up dozens of cameras on their block, one for every house.

Now, our worst fears have been confirmed. Police in Jackson, Mississippi, have started a pilot program that would allow Ring owners to patch the camera streams from their front doors directly to a police Real Time Crime Center. The footage from your front door includes you coming and going from your house, your neighbors taking out the trash, and the dog walkers and delivery people who do their jobs in your street. In Jackson, this footage can now be live streamed directly onto a dozen monitors scrutinized by police around the clock. Even if you refuse to allow your footage to be used that way, your neighbor’s camera pointed at your house may still be transmitting directly to the police.

Google is Giving Data To Police Based on Search Keywords, Court Docs Show

There are few things as revealing as a person’s search history, and police typically need a warrant on a known suspect to demand that sensitive information. But a recently unsealed court document shows that investigators can request such data in reverse: instead of seeking information on a known suspect, they can ask Google to disclose everyone who searched for a given keyword.

In August, police arrested Michael Williams, an associate of singer and accused sex offender R. Kelly, for allegedly setting fire to a witness’ car in Florida. Investigators linked Williams to the arson, as well as witness tampering, after sending a search warrant to Google that requested information on “users who had searched the address of the residence close in time to the arson.”

The July court filing was unsealed on Tuesday. Detroit News reporter Robert Snell tweeted about the filing after it was unsealed. Court documents showed that Google provided the IP addresses of people who searched for the arson victim’s address, which investigators tied to a phone number belonging to Williams. Police then used the phone number records to pinpoint the location of Williams’ device near the arson, according to court documents. The original warrant sent to Google is still sealed, but the report provides another example of a growing trend of data requests to the search engine giant in which investigators demand data on a large group of users rather than a specific request on a single suspect. “This ‘keyword warrant’ evades the Fourth Amendment checks on police surveillance,” said Albert Fox Cahn, the executive director of the Surveillance Technology Oversight Project. “When a court authorizes a data dump of every person who searched for a specific term or address, it’s likely unconstitutional.”
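
The difference between a conventional warrant and a "keyword warrant" is the direction of the query, which the toy example below illustrates over a fabricated search log. The IP addresses, queries, and timestamps are all invented.

```python
from datetime import datetime, timedelta

# Hypothetical search-log entries: (ip_address, query, timestamp).
LOG = [
    ("203.0.113.5",  "1200 elm street directions", datetime(2020, 7, 1, 21, 40)),
    ("198.51.100.9", "weather tomorrow",           datetime(2020, 7, 1, 21, 45)),
    ("203.0.113.7",  "1200 elm street",            datetime(2020, 7, 2, 3, 10)),
]

def conventional_warrant(log, ip):
    """Start from a known suspect's identifier: return that user's searches."""
    return [(q, ts) for addr, q, ts in log if addr == ip]

def keyword_warrant(log, term, start, window):
    """Start from a search term: return *every* user who searched it
    within the time window -- the reverse pattern described above."""
    end = start + window
    return [addr for addr, q, ts in log if term in q and start <= ts <= end]

suspects = keyword_warrant(LOG, "1200 elm street",
                           datetime(2020, 7, 1, 20, 0), timedelta(hours=12))
```

The conventional query returns one person's history; the keyword query returns every user who touched the term in the window, which is why critics describe it as a data dump of unknown people rather than a search of a named suspect.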

Police Are Using Facial Recognition For Minor Crimes, ‘Because They Can’

In a recent court filing, the New York police department noted that it’s turned to facial recognition in more than 22,000 cases in the last three years. “Even though the NYPD claims facial recognition is only used for serious crimes, the numbers tell a different story,” said Albert Fox Cahn, the executive director of the Surveillance Technology Oversight Project. “As facial recognition continues to grow, it’s being routinely deployed for everything from shoplifting to graffiti.”

Asked for comment, an NYPD spokeswoman pointed to a 2019 opinion article by police commissioner James O’Neill titled “How Facial Recognition Makes You Safer.” In the piece, O’Neill talked about how facial recognition had been used to make arrests in murder, robbery and rape cases, but he didn’t disclose how often it was used for low-level crimes. The department’s facial recognition policy, established in March, allows the technology to be used for any crime, no matter the severity. Without any limits, police have more frequently used the technology for petty thefts than for dangerous crimes, privacy advocates say. Before Amazon put a moratorium on police use of its Rekognition face-identifying software, the program was used in a $12 shoplifting case in Oregon in 2018…

Without any limits, police can use facial recognition however they please, and in many cases, arrested suspects don’t even know that the flawed technology was used… Attorneys representing protesters in Miami didn’t know that police used facial recognition in their arrests, according to an NBC Miami report. Police used facial recognition software in a $50 drug dealing case in Florida in 2016 but made no mention of it in the arrest report.

The article also notes that as recently as this Tuesday, Hoan Ton-That, the CEO of facial recognition startup Clearview AI “said it isn’t the company’s responsibility to make sure its technology is being properly used by its thousands of police partners.

“Though the company has its own guidelines, Ton-That said Clearview AI wouldn’t be enforcing them, saying that ‘it’s not our job to set the policy as a tech company…'”

Surveillance Startup Used Own Cameras To Harass Coworkers

Verkada, a fast-growing Silicon Valley surveillance startup, equips its offices in downtown San Mateo, California, with its own state-of-the-art security cameras. Last year, a director on the company’s sales team abused their access to these cameras to take and post photos of colleagues in a Slack channel called #RawVerkadawgz, where they made sexually explicit jokes about women who worked at the company, according to a report in IPVM which Motherboard independently verified and obtained more information about. “Face match… find me a squirt,” the sales director wrote in the company Slack channel in August 2019, according to one screenshot obtained by Motherboard. The comment was posted along with a series of photos of employees’ faces captured with the office’s surveillance system and patched together using a Verkada facial recognition feature. “Face search,” as it’s called, can pinpoint an individual in a sea of faces. The pinpointed face, in this instance, belonged to a Verkada employee, her mouth wide open. In addition to verifying the incident with three sources who worked at Verkada at the time, Motherboard compared the format of the images posted to those in Verkada’s publicly available demo videos to confirm they were indeed captured by the company’s surveillance cameras.

When is targeted surveillance wrong?

For many of us, that unsettling feeling of being watched is all too real. After all, we live in a world of mass surveillance: from facial recognition to online tracking, governments and tech companies are harvesting intimate information about billions of people. Targeted surveillance is slightly different. It’s the use of technology to spy on specific people.

You may think this is fine, because aren’t people only targeted when they’ve done something wrong? Think again.

From Mexico to the Middle East, governments are wielding a range of sophisticated cyber-tools to unlawfully spy on their critics. A seemingly innocuous missed call, a personalized text message, or a split-second redirect to a malicious website is all it takes; without you ever being aware, the spyware is installed.

The people targeted are often journalists, bloggers and activists (including Amnesty’s own staff) voicing inconvenient truths. They may be exposing corrupt deals, demanding electoral reform, or promoting the right to privacy. Their defence of human rights puts them at odds with their governments. Rather than listen, governments prefer to shut them down. And when governments attack the people who are defending our rights, then we’re all at risk.

The authorities use clever cyber-attacks to access users’ phones and computers. Once in, they can find out who their contacts are, their passwords, their social media habits, their texts. They can record conversations. They can find out everything about that person, tap into their network, find out about their work, and destroy it. Since 2017, Amnesty’s own research has uncovered attacks like these in Egypt, India, Morocco, Pakistan, Saudi Arabia, UAE, Qatar and Uzbekistan.

Remember, the users we’re talking about are human rights activists, among them journalists, bloggers, poets, teachers and so many others who bravely take a stand for justice, equality and freedom. They take these risks so we don’t have to. But voicing concerns about government conduct and policy makes them unpopular with the authorities. So much so that governments resort to dirty tricks, smearing activists and re-branding them as criminals and terrorists.

Some of the most insidious attacks on human rights defenders have been waged using spyware manufactured by NSO Group. A major player in the shadowy surveillance industry, they specialise in cyber-surveillance tools.

NSO is responsible for Pegasus malware, a powerful programme that can turn on your phone’s microphone and camera without your knowledge. It can also access your emails and texts, track your keystrokes and collect data about you. The worst thing is you don’t have to do anything to trigger it – Pegasus can be installed without you ever knowing.

NSO say they’re creating technology that helps governments fight terrorism and crime. But as early as 2018, when one of our own staff was targeted through WhatsApp, our Security Lab discovered a network of more than 600 suspicious websites owned by NSO that could be used to spy on journalists and activists around the world. We were not wrong. In 2019, thousands of people received scam WhatsApp calls, leading WhatsApp to later sue NSO. More recently we documented the cases of Moroccan activists who had been similarly targeted.

385

Police complain about surveillance “going dark,” but they are breaking into encrypted phones far more often than previously known

In a new Apple ad, a man on a city bus announces he has just shopped for divorce lawyers. Then a woman recites her credit card number through a megaphone in a park. “Some things shouldn’t be shared,” the ad says. “iPhone helps keep it that way.” Apple has built complex encryption into iPhones and made the devices’ security central to its marketing pitch. That, in turn, has angered law enforcement. Officials from the F.B.I. director to rural sheriffs have argued that encrypted phones stifle their work to catch and convict dangerous criminals. They have tried to force Apple and Google to unlock suspects’ phones, but the companies say they can’t. In response, the authorities have put their own marketing spin on the problem. Law enforcement, they say, is “going dark.” Yet new data reveals a twist to the encryption debate that undercuts both sides: Law enforcement officials across the nation regularly break into encrypted smartphones.

That is because at least 2,000 law enforcement agencies in all 50 states now have tools to get into locked, encrypted phones and extract their data, according to years of public records collected in a report by Upturn, a Washington nonprofit that investigates how the police use technology. At least 49 of the 50 largest U.S. police departments have the tools, according to the records, as do the police and sheriffs in small towns and counties across the country, including Buckeye, Ariz.; Shaker Heights, Ohio; and Walla Walla, Wash. And local law enforcement agencies that don’t have such tools can often send a locked phone to a state or federal crime lab that does. With more tools in their arsenal, the authorities have used them in an increasing range of cases, from homicides and rapes to drugs and shoplifting, according to the records, which were reviewed by The New York Times. Upturn researchers said the records suggested that U.S. authorities had searched hundreds of thousands of phones over the past five years. While the existence of such tools has been known for some time, the records show that the authorities break into phones far more than previously understood — and that smartphones, with their vast troves of personal data, are not as impenetrable as Apple and Google have advertised. While many in law enforcement have argued that smartphones are often a roadblock to investigations, the findings indicate that they are instead one of the most important tools for prosecutions.

395

Five Eyes Governments, India, and Japan Make New Call For Encryption Backdoors

Members of the intelligence-sharing alliance Five Eyes, along with government representatives for Japan and India, have published a statement over the weekend calling on tech companies to come up with a solution for law enforcement to access end-to-end encrypted communications. From a report:
The statement is the alliance’s latest effort to get tech companies to agree to encryption backdoors. The Five Eyes alliance, comprising the US, the UK, Canada, Australia, and New Zealand, made similar calls to tech giants in 2018 and 2019. Just like before, government officials claim tech companies have put themselves in a corner by incorporating end-to-end encryption (E2EE) into their products. If properly implemented, E2EE lets users have secure conversations, whether chat, audio, or video, without sharing the encryption key with the tech companies. Representatives from the seven governments argue that the way E2EE is currently supported on today’s major tech platforms prevents not only law enforcement from investigating crime rings but also the tech platforms themselves from enforcing their own terms of service. Signatories argue that “particular implementations of encryption technology” pose challenges to law enforcement investigations, as the tech platforms themselves can’t access some communications and provide needed data to investigators.
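The point of contention above is that with E2EE the platform only relays public values and never holds the conversation key, so there is nothing for it to hand over. A minimal toy sketch of the underlying idea, using Diffie-Hellman key agreement over a small demo prime (the parameters and names are purely illustrative, not any real platform’s protocol, and nothing here is cryptographically secure):

```python
import hashlib
import secrets

# Toy Diffie-Hellman key agreement. A relaying server sees only the
# public values A and B; the shared key is never transmitted.
P = 2**127 - 1  # a Mersenne prime -- demo-sized, NOT secure
G = 3

# Each party keeps a private exponent to itself.
a = secrets.randbelow(P - 2) + 1   # Alice's private key
b = secrets.randbelow(P - 2) + 1   # Bob's private key
A = pow(G, a, P)                   # Alice's public value (the server sees this)
B = pow(G, b, P)                   # Bob's public value (the server sees this)

# Each side combines its own private key with the other's public value
# and arrives at the same secret.
alice_shared = pow(B, a, P)
bob_shared = pow(A, b, P)
assert alice_shared == bob_shared  # identical, yet never sent over the wire

# Hash the shared secret down to a symmetric message key.
key = hashlib.sha256(str(alice_shared).encode()).hexdigest()
print("derived key:", key[:16], "...")
```

Because the server only ever handles `A` and `B`, recovering the key would require solving the discrete logarithm problem; this is why a platform cannot comply with a data request without weakening the scheme itself.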

450

London Installed AI Cameras To Monitor Social Distancing, Lockdown Restrictions

Artificial intelligence cameras are being used in London and other UK cities to monitor social distancing. The sensors were initially developed by Vivacity to track the flow of traffic, cyclists and pedestrians and to monitor how roads are being used. But when the country went into lockdown in March, Vivacity added a feature to the sensors so they could register the distance between pedestrians. This data is shared in a monthly report with the Government.

Vivacity Labs said it has more than 1,000 sensors installed across the UK, in cities including London, Manchester, Oxford, Cambridge and Nottingham. Vivacity’s Chief Operating Officer, Peter Mildon, told BBC Radio Kent on Wednesday that the data is potentially “useful for informing policy decisions” regarding lockdown measures. He stressed that the sensors are not CCTV: they operate as data collating devices rather than cameras that store footage. “They are not recording any footage, they are not streaming any footage and no one is actually watching it,” he said.

Mr Mildon added: “We’re creating a set of statistics on how behaviour is changing in terms of how people are staying close together or apart. And it is that data that is then useful for informing policy decisions on whether there should be a two-metre rule or a one-metre-plus rule, or whether local lockdown measures are having the impact they are envisioned to.”

433

Google is Giving Data To Police Based on Search Keywords, Court Docs Show

There are few things as revealing as a person’s search history, and police typically need a warrant on a known suspect to demand that sensitive information. But a recently unsealed court document shows that investigators can also request such data in reverse: instead of asking for information on a known suspect, they can ask Google to disclose everyone who searched for a particular keyword.

In August, police arrested Michael Williams, an associate of singer and accused sex offender R. Kelly, for allegedly setting fire to a witness’ car in Florida. Investigators linked Williams to the arson, as well as witness tampering, after sending a search warrant to Google that requested information on “users who had searched the address of the residence close in time to the arson.”

The July court filing was unsealed on Tuesday. Detroit News reporter Robert Snell tweeted about the filing after it was unsealed. Court documents showed that Google provided the IP addresses of people who searched for the arson victim’s address, which investigators tied to a phone number belonging to Williams. Police then used the phone number records to pinpoint the location of Williams’ device near the arson, according to court documents. The original warrant sent to Google is still sealed, but the report provides another example of a growing trend of data requests to the search engine giant in which investigators demand data on a large group of users rather than a specific request on a single suspect. “This ‘keyword warrant’ evades the Fourth Amendment checks on police surveillance,” said Albert Fox Cahn, the executive director of the Surveillance Technology Oversight Project. “When a court authorizes a data dump of every person who searched for a specific term or address, it’s likely unconstitutional.”

400