Resources

The FBI Is Using Push Notifications To Track Criminals

The Post did a little digging into court records and found evidence of at least 130 search warrants filed by the feds for push notification data in cases spanning 14 states. In those cases, FBI officials asked tech companies like Google, Apple, and Facebook to fork over data related to a suspect’s mobile notifications, then used the data to implicate the suspect in criminal behavior linked to a particular app, even though many of those apps were supposedly anonymous communication platforms, like Wickr.

How exactly is this possible? Push notifications, which are delivered by the mobile operating system provider, include embedded metadata that can be examined to understand the use of the mobile apps on a particular phone. Each app comes laced with a quiet identifier, a “push token,” which is stored on the servers of the operating system provider, such as Apple or Google, after a user signs up to use that app. Those tokens can later be used to identify the person using the app, based on the information associated with the device on which the app was downloaded. Even turning off push notifications on your device doesn’t necessarily disable this feature, experts contend. […]
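
The linkage described above can be sketched in a few lines. This is a purely illustrative model, not any vendor's real schema: the account names, tokens, and device records are invented, and the point is only that joining the app developer's records with the OS vendor's token index defeats the app's anonymity.

```python
# Hypothetical sketch of how a push token links an "anonymous" app account
# to a real device record held by the OS vendor. All names and data here
# are invented for illustration.

# What the app developer holds: account -> push token
app_registrations = {
    "anon_user_4821": "token_A9F3",
}

# What the OS vendor (an Apple or Google) holds: push token -> device owner
vendor_device_index = {
    "token_A9F3": {"device": "iPhone 13", "account_email": "j.doe@example.com"},
}

def deanonymize(account: str) -> dict:
    """Resolve an app account to a device owner via its push token."""
    token = app_registrations[account]
    return vendor_device_index[token]

print(deanonymize("anon_user_4821"))
```

A warrant served on the app developer yields the token; a second request to the OS vendor resolves it, which is why the "anonymous" platform offers little protection here.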

Finding new ways to catch pedophiles and terrorists may not seem like the worst thing in the world, but the Post article highlights the voices of critics who fear that this kind of mobile data could be used to track people who have not committed serious crimes, like political activists or women seeking abortions in states where the procedure has been restricted.

How the Pentagon Learned To Use Targeted Ads To Find Its Targets

In 2019, a government contractor and technologist named Mike Yeagley began making the rounds in Washington, DC. He had a blunt warning for anyone in the country’s national security establishment who would listen: The US government had a Grindr problem. A popular dating and hookup app, Grindr relied on the GPS capabilities of modern smartphones to connect potential partners in the same city, neighborhood, or even building. The app can show how far away a potential partner is in real time, down to the foot. But to Yeagley, Grindr was something else: one of the tens of thousands of carelessly designed mobile phone apps that leaked massive amounts of data into the opaque world of online advertisers. That data, Yeagley knew, was easily accessible by anyone with a little technical know-how. So Yeagley — a technology consultant then in his late forties who had worked in and around government projects nearly his entire career — made a PowerPoint presentation and went out to demonstrate precisely how that data was a serious national security risk.

As he would explain in a succession of bland government conference rooms, Yeagley was able to access the geolocation data on Grindr users through a hidden but ubiquitous entry point: the digital advertising exchanges that serve up the little digital banner ads along the top of Grindr and nearly every other ad-supported mobile app and website. This was possible because of the way online ad space is sold, through near-instantaneous auctions in a process called real-time bidding. Those auctions were rife with surveillance potential. You know that ad that seems to follow you around the internet? It’s tracking you in more ways than one. In some cases, it’s making your precise location available in near-real time to both advertisers and people like Mike Yeagley, who specialized in obtaining unique data sets for government agencies.
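
The leak works because every bidder in a real-time auction receives the request, including its device identifier and precise coordinates, whether or not that bidder wins. The sketch below loosely mimics the shape of an OpenRTB bid request's device/geo fields; the records and identifiers are made up.

```python
# Illustrative model of geolocation leaking through real-time bidding.
# Field names loosely follow OpenRTB's device.geo object; the data is fake.
bid_requests = [
    {"app": "grindr",  "device_id": "ifa-123", "geo": {"lat": 38.8719, "lon": -77.0563}},
    {"app": "weather", "device_id": "ifa-456", "geo": {"lat": 40.7128, "lon": -74.0060}},
]

def locations_for_app(requests, app_name):
    """Every auction participant sees device ID and location, win or lose."""
    return [(r["device_id"], r["geo"]["lat"], r["geo"]["lon"])
            for r in requests if r["app"] == app_name]

print(locations_for_app(bid_requests, "grindr"))
```

A buyer doesn't need to place a single ad: simply listening to the bid stream yields a running feed of which devices are where, which is the access Yeagley exploited.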

Working with Grindr data, Yeagley began drawing geofences — creating virtual boundaries in geographical data sets — around buildings belonging to government agencies that do national security work. That allowed Yeagley to see what phones were in certain buildings at certain times, and where they went afterwards. He was looking for phones belonging to Grindr users who spent their daytime hours at government office buildings. If the device spent most workdays at the Pentagon, the FBI headquarters, or the National Geospatial-Intelligence Agency building at Fort Belvoir, for example, there was a good chance its owner worked for one of those agencies. Then he started looking at the movement of those phones through the Grindr data. When they weren’t at their offices, where did they go? A small number of them had lingered at highway rest stops in the DC area at the same time and in proximity to other Grindr users — sometimes during the workday and sometimes while in transit between government facilities. For other Grindr users, he could infer where they lived, see where they traveled, even guess at whom they were dating.
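
The geofencing step itself is conceptually simple: draw a bounding box around a building, then keep the device pings that fall inside it during working hours. A minimal sketch, with invented coordinates and device IDs:

```python
# Minimal geofence sketch: which devices spend the workday inside a box
# drawn around a sensitive building? Coordinates and IDs are invented.

PENTAGON_BOX = {"lat_min": 38.868, "lat_max": 38.874,
                "lon_min": -77.060, "lon_max": -77.052}

pings = [
    {"device": "ifa-123", "lat": 38.8712, "lon": -77.0561, "hour": 10},
    {"device": "ifa-789", "lat": 38.9500, "lon": -77.1000, "hour": 10},
]

def in_geofence(ping, box):
    """True if the ping's coordinates fall inside the bounding box."""
    return (box["lat_min"] <= ping["lat"] <= box["lat_max"]
            and box["lon_min"] <= ping["lon"] <= box["lon_max"])

# Devices observed inside the fence during working hours
workday_devices = {p["device"] for p in pings
                   if in_geofence(p, PENTAGON_BOX) and 9 <= p["hour"] <= 17}
print(workday_devices)
```

Once a device is flagged this way, following its pings outside working hours reveals home addresses, travel, and the rest of the pattern described above.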

Intelligence agencies have a long and unfortunate history of trying to root out LGBTQ Americans from their workforce, but this wasn’t Yeagley’s intent. He didn’t want anyone to get in trouble. No disciplinary actions were taken against any employee of the federal government based on Yeagley’s presentation. His aim was to show that buried in the seemingly innocuous technical data that comes off every cell phone in the world is a rich story — one that people might prefer to keep quiet. Or at the very least, not broadcast to the whole world. And that each of these intelligence and national security agencies had employees who were recklessly, if obliviously, broadcasting intimate details of their lives to anyone who knew where to look. As Yeagley showed, all that information was available for sale, for cheap. And it wasn’t just Grindr, but rather any app that had access to a user’s precise location — other dating apps, weather apps, games. Yeagley chose Grindr because it happened to generate a particularly rich set of data and its user base might be uniquely vulnerable.

The report goes into great detail about how intelligence and data analysis techniques, notably through a program called Locomotive developed by PlanetRisk, enabled the tracking of mobile devices associated with Russian President Vladimir Putin’s entourage. By analyzing commercial adtech data, including precise geolocation information collected from mobile advertising bid requests, analysts were able to monitor the movements of phones that frequently accompanied Putin, indicating the locations and movements of his security personnel, aides, and support staff.

This capability underscored the surveillance potential of commercially available data, providing insights into the activities and security arrangements of high-profile individuals without directly compromising their personal devices.

US Schools Are Normalizing Intrusive Surveillance

As the authors detail, among the technologies are surveillance cameras. These are often linked to software for facial recognition, access control, behavior analysis, and weapon detection. That is, cameras scan student faces and then algorithms identify them, allow or deny them entry based on that ID, decide if their activities are threatening, and determine if objects they carry may be dangerous or forbidden.

“False hits, such as mistaking a broomstick, three-ring binder, or a Google Chromebook laptop for a gun or other type of weapon, could result in an armed police response to a school,” cautions the report.

That’s not a random assortment of harmless-until-misidentified items; a footnoted 2022 Charlotte Observer piece points out such objects were tagged as weapons by scanners in the Charlotte-Mecklenburg Schools. “A how-to video posted earlier this year by administrators at Butler High School instructs students to remove certain belongings from their backpacks — and walk through the scanner holding their laptops above their heads — to avoid setting off a false alarm,” it adds.

Huh. What happens if behavior analysis algorithms decide that brandished laptops are threatening?

Also called out is software that monitors social media, students’ communications, and web-surfing habits. Audio monitors that are supposed to detect gunshots—but can be triggered by slammed doors (as at Greenwood High School in Arkansas earlier this year)—also feature in many schools.

Of students aged 14–18 surveyed by the ACLU, 62 percent saw video cameras in their schools (the U.S. Department of Education says cameras are used by 91 percent of public schools), and 49 percent reported monitoring software. Understandably, this affects their behavior. Thirty-two percent say, “I always feel like I’m being watched,” and 26 percent fret over what their “school and the companies they contract with do with the data.”

“Research demonstrates the damaging effect of surveillance on children’s ability to develop in healthy ways,” Fedders added. “Pervasive surveillance can create a climate in which adults are seen as overestimating and overreacting to risk. Children, in turn, cannot develop the ability to evaluate and manage risk themselves in order to function effectively.”

Notably, school surveillance normalizes the idea that constant monitoring is good and necessary for preserving safety.

School surveillance tech does more harm than good, ACLU report finds

An ACLU report has found that, despite claims from companies, surveillance technology in US schools does not improve student safety. Constant surveillance can, in fact, cause a number of harms to students, including making them less likely to report dangerous behavior.

Schools typically use technologies such as cameras, facial recognition software and communication monitoring and filtering technology, which have been marketed by education technology surveillance companies as intervention tools against school shootings, suicides and bullying. In 2021, US schools and colleges spent $3.1bn on these products and this number is expected to grow by 8% every year, according to the report.

But the ACLU’s report concludes that there is little to no independent research or evidence that supports that this technology works.

Signal President Says AI is Fundamentally ‘a Surveillance Technology’

Why is it that so many companies that rely on monetizing the data of their users seem to be extremely hot on AI? If you ask Signal president Meredith Whittaker (and I did), she’ll tell you it’s simply because “AI is a surveillance technology.” Onstage at TechCrunch Disrupt 2023, Whittaker explained her perspective that AI is largely inseparable from the big data and targeting industry perpetuated by the likes of Google and Meta, as well as less consumer-focused but equally prominent enterprise and defense companies. “It requires the surveillance business model; it’s an exacerbation of what we’ve seen since the late ’90s and the development of surveillance advertising. AI is a way, I think, to entrench and expand the surveillance business model,” she said.

“The Venn diagram is a circle.” “And the use of AI is also surveillant, right?” she continued. “You know, you walk past a facial recognition camera that’s instrumented with pseudo-scientific emotion recognition, and it produces data about you, right or wrong, that says ‘you are happy, you are sad, you have a bad character, you’re a liar, whatever.’ These are ultimately surveillance systems that are being marketed to those who have power over us generally: our employers, governments, border control, etc., to make determinations and predictions that will shape our access to resources and opportunities.”

Internet-Connected Cars Fail Privacy and Security Tests

Mozilla found brands including BMW, Ford, Toyota, Tesla, and Subaru collect data about drivers including race, facial expressions, weight, health information, and where you drive. Some of the cars tested collected data you wouldn’t expect your car to know about, including details about sexual activity, race, and immigration status, according to Mozilla. […] The worst offender was Nissan, Mozilla said. The carmaker’s privacy policy suggests the manufacturer collects information including sexual activity, health diagnosis data, and genetic data, though there are no details about how exactly that data is gathered. Nissan reserves the right to share and sell “preferences, characteristics, psychological trends, predispositions, behavior, attitudes, intelligence, abilities, and aptitudes” to data brokers, law enforcement, and other third parties.

Other brands didn’t fare much better. Volkswagen, for example, collects your driving behaviors such as your seatbelt and braking habits and pairs that with details such as age and gender for targeted advertising. Kia’s privacy policy reserves the right to monitor your “sex life,” and Mercedes-Benz ships cars with TikTok pre-installed on the infotainment system, an app that has its own thicket of privacy problems. The privacy and security problems extend beyond the nature of the data car companies siphon off about you. Mozilla said it was unable to determine whether the brands encrypt any of the data they collect, and only Mercedes-Benz responded to the organization’s questions.

Mozilla also found that many car brands engage in “privacy washing,” or presenting consumers with information that suggests they don’t have to worry about privacy issues when the exact opposite is true. Many leading manufacturers are signatories to the Alliance for Automotive Innovation’s “Consumer Privacy Protection Principles (PDF).” According to Mozilla, these are a non-binding set of vague promises organized by the car manufacturers themselves. Questions around consent are essentially a joke as well. Subaru, for example, says that by being a passenger in the car, you are considered a “user” who has given the company consent to harvest information about you. Mozilla said a number of car brands say it’s the driver’s responsibility to let passengers know about their car’s privacy policies — as if the privacy policies are comprehensible to drivers in the first place. Toyota, for example, has a constellation of 12 different privacy policies for your reading pleasure.

Cellebrite Asks Cops To Keep Its Phone Hacking Tech ‘Hush Hush’

For years, cops and other government authorities all over the world have been using phone hacking technology provided by Cellebrite to unlock phones and obtain the data within. And the company has been keen on keeping the use of its technology “hush hush.” As part of the deal with government agencies, Cellebrite asks users to keep its tech — and the fact that they used it — secret, TechCrunch has learned. This request concerns legal experts who argue that powerful technology like the one Cellebrite builds and sells, and how it gets used by law enforcement agencies, ought to be public and scrutinized.

In a leaked training video for law enforcement customers that was obtained by TechCrunch, a senior Cellebrite employee tells customers that “ultimately, you’ve extracted the data, it’s the data that solves the crime, how you got in, let’s try to keep that as hush hush as possible.” “We don’t really want any techniques to leak in court through disclosure practices, or you know, ultimately in testimony, when you are sitting in the stand, producing all this evidence and discussing how you got into the phone,” the employee, who we are not naming, says in the video.

US Spy Agencies Will Start Sharing More Cyber-Threat Intelligence with Private Companies

U.S. spy agencies will share more intelligence with U.S. companies, nongovernmental organizations and academia under a new strategy released this week that acknowledges concerns over new threats, such as another pandemic and increasing cyberattacks. The National Intelligence Strategy, which sets broad goals for the sprawling U.S. intelligence community, says that spy agencies must reach beyond the traditional walls of secrecy and partner with outside groups to detect and deter supply-chain disruptions, infectious diseases and other growing transnational threats. The intelligence community “must rethink its approach to exchanging information and insights,” the strategy says.

The U.S. government in recent years has begun sharing vast amounts of cyber-threat intelligence with U.S. companies, utilities and others who are often the main targets of foreign hackers, as well as information on foreign-influence operations with social-media companies… The emphasis on greater intelligence sharing is part of a broader trend toward declassification that the Biden administration has pursued.

Your School’s Next Security Guard May Be an AI-Enabled Robot

When Lori Andrews attended her daughter’s graduation at Santa Fe High School, she spotted a 5-foot-10, 400-pound robot roaming the football field alongside the newest alumni.

Andrews, a visual arts teacher at the school, said she initially thought the robot was taking photos of the graduates. She was taken aback when her husband described it as a police robot and she learned that it was providing 360-degree camera footage to the school security team.

“My reaction was, ‘Yuck,’” Andrews said. “What is it filming, what kind of camera is on it?”

The New Mexico school district started a pilot program in mid-June with the robot, which patrols the multi-building campus grounds 24 hours a day, seven days a week.

Amid growing concerns about gun violence and mass shootings in schools, several companies are starting to offer similar robots to schools across the country. Few schools have deployed the machines thus far, primarily for campus surveillance. But they have the potential to do much more, including confronting attackers and others who come onto campuses without permission.

Using artificial intelligence, the robot in Santa Fe learns the school’s normal patterns of activity and detects individuals who are on campus after hours or are displaying aggressive behavior, said Andy Sanchez, who manages sales for Team 1st Technologies, the robot’s distributor in North America.

In the case of an active shooter or other threat, the robot could alert the security team, Sanchez said. It could move toward the intruder and transmit video footage that informs the officers’ course of action, he said. The robot isn’t armed but can confront intruders, and human security team members would be able to speak to the intruder through the robot’s communication system.

The school chose to disable the robot’s weapons detection features during the pilot, although the security team is determining whether it might be added at a later time, said Mario Salbidrez, executive director of safety and security at Santa Fe Public Schools. Members of the district security team and the high school are responsible for reviewing video footage when the robot sends alerts about unusual activity.

The robot doesn’t have facial recognition features, and Santa Fe High School owns the robot’s video footage, meaning it can decide whether or not to save it, Sanchez said.

The robot hasn’t yet detected intruders on campus, but it has alerted the security team to new workers entering the school’s construction site and to people trying to open locked doors in harmless attempts to enter buildings, Salbidrez said. Its cameras have also caught faculty members waving to the cameras and students making peace signs in passing, he added.

Callie Trader, a rising senior at Santa Fe High School, said she is unfazed by additional surveillance on campus. She said she isn’t sure students will take the robot seriously, and she doesn’t think the robot will change students’ behavior any more than existing security cameras do.

“I think it will just be funnier, just different,” she said.

Reed Meschefske, a film studies and acting and drama teacher at Santa Fe High School, said that he already feels safe at school without the new surveillance measures. But the high school is large, and the robot, which he described as a “seven camera dog,” could help cover blind spots on campus that currently go unmonitored, he said.

Other districts are considering robots in a security role. Robert Stokes, co-owner and president of Stokes Robotics, said his company is working with multiple districts across the country. In most cases, schools will use robots in the classroom to teach students about coding, Stokes said. But in the face of an armed intruder, the robot could take more aggressive action, pointing a laser beam at a suspect’s chest or using flashing lights to try to induce them to drop their weapons.

Humans would be responsible for deciding the robot’s course of action in real-time but could remain out of the line of fire in the case of an active shooter, Stokes said.

Brad Wade, superintendent of Wyandotte Public Schools in Oklahoma, said the district hopes to introduce four robots from Stokes Robotics in the fall. The district is primarily considering robots with video cameras that could monitor the doorways of school buildings, although the robots that can directly confront intruders aren’t out of the question, Wade added.

New technology may create the appearance of making campuses safer, said Kenneth Trump, president of the Ohio-based consulting firm National School Safety and Security Services. But schools should first focus on teaching students how to inform a trusted adult about suspicious incidents on campus, he said.

“There’s a difference between doing something that’s impactful versus doing something for the sake of doing something,” Trump said. “We need to make sure that we master kindergarten before we’re looking for Ph.D. solutions to school safety.”

Team 1st Technologies is piloting the robot at Santa Fe High School free of charge for the summer. The cost for the 2023-24 school year is estimated to be around $60,000 to $70,000, Salbidrez said. The school is still determining if the robot is worth the investment, he said.

“At this point, I don’t have anything to say no to it,” Salbidrez said. “But I don’t have enough compelling information to say yes to it either.”

FBI Abused Spy Law 280,000 Times In a Year

The FBI misused surveillance powers granted by Section 702 of the Foreign Intelligence Surveillance Act (FISA) over 278,000 times between 2020 and early 2021 to conduct warrantless searches on George Floyd protesters, January 6 Capitol rioters, and donors to a congressional campaign, according to a newly unclassified court opinion. The Register reports:

On Friday, the US Foreign Intelligence Surveillance Court made public a heavily redacted April 2022 opinion [PDF] that details hundreds of thousands of violations of Section 702 of the Foreign Intelligence Surveillance Act (FISA) — the legislative instrument that allows warrantless snooping. The Feds were found to have abused the spy law in a “persistent and widespread” manner, according to the court, repeatedly failing to adequately justify the need to go through US citizens’ communications using a law aimed at foreigners.

The court opinion details FBI queries run on thousands of individuals between 2020 and early 2021. This includes 133 people arrested during the George Floyd protests and more than 19,000 donors to a congressional campaign. In the latter, “the analyst who ran the query advised that the campaign was a target of foreign influence, but NSD determined that only eight identifiers used in the query had sufficient ties to foreign influence activities to comply with the querying standard,” the opinion says, referring to the Justice Department’s National Security Division (NSD). In other words, there wasn’t a strong enough foreign link to fully justify the communications search.

For the Black Lives Matter protests, the division determined that the FBI queries “were not reasonably likely to retrieve foreign intelligence information or evidence of a crime.” Again, an overreach of foreign surveillance powers. Additional “significant violations of the querying standard” occurred in searches related to the January 6, 2021 breach of the US Capitol, domestic drug and gang investigations, and domestic terrorism probes, according to the court. It’s said that more than 23,000 queries were run on people suspected of storming the Capitol.

Supreme Court Declines To Hear Challenge To Warrantless Pole Camera Surveillance

The U.S. Supreme Court [Monday] declined to hear Moore v. United States, leaving in place a patchwork of lower court decisions on an important and recurring question about privacy rights in the face of advancing surveillance technology. In this case, police secretly attached a small camera to a utility pole, using it to surveil a Massachusetts home 24/7 for eight months — all without a warrant. Law enforcement could watch the camera’s feed in real time, and remotely pan, tilt, and zoom close enough to read license plates and see faces. They could also review a searchable, digitized record of this footage at their convenience. The camera captured every coming and going of the home’s residents and their guests over eight months. As a result, the government targeted the home of a community pillar — a lawyer, respected judicial clerk, devoted church member, and a grandmother raising her grandkids — to cherry-pick images from months of unceasing surveillance in an effort to support unwarranted criminal charges against an innocent person.

Federal courts of appeals and state supreme courts have divided on the question of whether such sweeping surveillance is a Fourth Amendment search requiring a warrant. The highest courts of Massachusetts, Colorado, and South Dakota have held that long-term pole camera surveillance of someone’s home requires a warrant. In Moore v. United States, the members of the full en banc U.S. Court of Appeals for the First Circuit split evenly on the question, with three judges explaining that a warrant is required, and three judges expressing the belief that the Fourth Amendment imposes no limit on this invasive surveillance. This issue will continue to arise in the lower courts; the ACLU filed an amicus brief on the question in the U.S. Court of Appeals for the Tenth Circuit earlier this month.

Researchers Are Getting Eerily Good at Using WiFi to ‘See’ People Through Walls in Detail

Researchers at Carnegie Mellon University developed a method for detecting the three-dimensional shape and movements of human bodies in a room, using only WiFi routers. From a report:
To do this, they used DensePose, a system for mapping all of the pixels on the surface of a human body in a photo. DensePose was developed by London-based researchers and Facebook’s AI researchers. From there, according to their preprint paper recently uploaded to arXiv, they developed a deep neural network that maps the phase and amplitude of WiFi signals sent and received by routers to coordinates on human bodies. Researchers have been working on “seeing” people without using cameras or expensive LiDAR hardware for years. In 2013, a team of researchers at MIT found a way to use cell phone signals to see through walls; in 2018, another MIT team used WiFi to detect people in another room and translate their movements to walking stick-figures.
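
The "phase and amplitude" features the network consumes come from channel state information (CSI): complex-valued samples describing how the WiFi signal was distorted in transit. This sketch shows only that feature-extraction step, on two made-up CSI samples; the neural network itself is far beyond a few lines.

```python
# WiFi channel state information (CSI) arrives as complex samples; the
# amplitude and phase of each sample are the raw features a model like the
# one described above would consume. The sample values here are invented.
import cmath

csi_samples = [complex(0.8, 0.6), complex(0.0, 1.0)]

# (amplitude, phase in radians) for each complex CSI sample
features = [(abs(s), cmath.phase(s)) for s in csi_samples]
print(features)  # amplitudes ~1.0; phases ~0.6435 and ~1.5708 rad
```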

Mysterious Company With Government Ties Plays Key Internet Role

An offshore company that is trusted by the major web browsers and other tech companies to vouch for the legitimacy of websites has connections to contractors for U.S. intelligence agencies and law enforcement, according to security researchers, documents and interviews. Google’s Chrome, Apple’s Safari, nonprofit Firefox and others allow the company, TrustCor Systems, to act as what’s known as a root certificate authority, a powerful spot in the internet’s infrastructure that guarantees websites are not fake, guiding users to them seamlessly.

The company’s Panamanian registration records show that it has the identical slate of officers, agents and partners as a spyware maker identified this year as an affiliate of Arizona-based Packet Forensics, which public contracting records and company documents show has sold communication interception services to U.S. government agencies for more than a decade. One of those TrustCor partners has the same name as a holding company managed by Raymond Saulino, who was quoted in a 2010 Wired article as a spokesman for Packet Forensics. Saulino also surfaced in 2021 as a contact for another company, Global Resource Systems, that caused speculation in the tech world when it briefly activated and ran more than 100 million previously dormant IP addresses assigned decades earlier to the Pentagon. The Pentagon reclaimed the digital territory months later, and it remains unclear what the brief transfer was about, but researchers said the activation of those IP addresses could have given the military access to a huge amount of internet traffic without revealing that the government was receiving it.
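
What makes a root certificate authority such a powerful position is that every CA in a browser's or OS's trust store can vouch for any hostname on the web. The snippet below just inspects the default trust store Python loads on the local machine, using the standard-library `ssl` module; the count it prints will vary by system.

```python
# Every root CA in the trust store is trusted to certify ANY website,
# which is why a CA with opaque ownership is a concern. This only inspects
# the local default trust store; no network access is needed.
import ssl

ctx = ssl.create_default_context()
stats = ctx.cert_store_stats()  # e.g. {'x509': N, 'crl': 0, 'x509_ca': M}
print(f"{stats['x509_ca']} root CAs, each trusted for the entire web")
```

A compromised or covertly controlled entry in that list could issue valid-looking certificates for any site, enabling interception that browsers would accept without warning.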

Google is Quietly Working on a Wearable Device for Preteens

Google is developing a wearable device for preteens under its Fitbit group as it attempts to capture a growing demographic of younger users who own wearable tech, three employees familiar with the project told Insider.

Internally code-named “Project Eleven,” the wearable is designed to help older kids form healthy relationships with their phones and social media, two of the employees said. One of them said the device could include safety features that would let parents contact their children and know their whereabouts.

Project Eleven may be an opportunity to capture a growing market of younger users who would otherwise grow up to become Apple loyalists.

New Mac App Wants To Record Everything You Do – So You Can ‘Rewind’ It Later

Yesterday, a company called Rewind AI announced a self-titled software product for Macs with Apple Silicon that reportedly keeps a highly compressed, searchable record of everything you do locally on your Mac and lets you “rewind” time to see it later. If you forget something you’ve “seen, said, or heard,” Rewind wants to help you find it easily. Rewind AI claims its product stores all recording data locally on your machine and does not require cloud integration. Among its promises, Rewind will reportedly let you rewind Zoom meetings and pull information from them in a searchable form. In a video demo on Rewind.AI’s site, the app opens when a user presses Command+Shift+Space. The search bar suggests typing “anything you’ve seen, said, or heard.” It also shows a timeline at the bottom of the screen that represents previous actions in apps.

After searching for “tps reports,” the video depicts a grid view of every time Rewind has encountered the phrase “tps reports” as audio or text in any app, including Zoom chats, text messages, emails, Slack conversations, and Word documents. It describes filtering the results by app — and the ability to copy and paste from these past instances if necessary. Founded by Dan Siroker and Brett Bejcek, Rewind AI is composed of a small remote team located in various cities around the US. Portions of the company previously created Scribe, a precursor to Rewind that received some press attention in 2021. In an introductory blog post, Rewind AI co-founder Dan Siroker writes, “What if we could use technology to augment our memory the same way a hearing aid can augment our hearing?”

Ring Cameras Are Being Used To Control and Surveil Overworked Delivery Workers

Networked doorbell surveillance cameras like Amazon’s Ring are everywhere, and have changed the nature of delivery work by letting customers take on the role of bosses to monitor, control, and discipline workers, according to a recent report (PDF) by the Data & Society tech research institute. “The growing popularity of Ring and other networked doorbell cameras has normalized home and neighborhood surveillance in the name of safety and security,” Data & Society’s Labor Futures program director Aiha Nguyen and research analyst Eve Zelickson write. “But for delivery drivers, this has meant their work is increasingly surveilled by the doorbell cameras and supervised by customers. The result is a collision between the American ideas of private property and the business imperatives of doing a job.”

Drawing on interviews with surveillance camera users and delivery drivers, the researchers examine the major developments that have combined to bring this to a head. The first is the widespread adoption of doorbell surveillance cameras like Ring. Just as important as the adoption of these cameras, however, is the rise of delivery work and its transformation into gig labor. […] As the report lays out, Ring cameras allow customers to surveil delivery workers and discipline their labor by, for example, sharing shaming footage online. This dovetails with the “gigification” of Amazon’s delivery workers in two ways: through labor dynamics and through customer behavior.

“Gig workers, including Flex drivers, are sold on the promise of flexibility, independence and freedom. Amazon tells Flex drivers that they have complete control over their schedule, and can work on their terms and in their space,” Nguyen and Zelickson write. “Through interviews with Flex drivers, it became apparent that these marketed perks have hidden costs: drivers often have to compete for shifts, spend hours trying to get reimbursed for lost wages, pay for wear and tear on their vehicle, and have no control over where they work.” That competition between workers manifests in other ways too, namely in complying with customer demands when delivering purchases to their homes. Even without cameras, customers have made onerous demands of Flex drivers, even as the drivers are pressed to complete unrealistic and dangerous routes under unsafe and demanding productivity quotas. The introduction of surveillance cameras at the delivery destination, however, adds another level of surveillance to the gigification. […] The report’s conclusion is clear: Amazon has deputized its customers and made them partners in a scheme that encourages antagonistic social relations, undermines labor rights, and provides cover for a march towards increasingly ambitious monopolistic exploits.

ByteDance Planned to Use TikTok to Monitor Locations of Specific American Citizens

A China-based team at TikTok’s parent company, ByteDance, planned to use the TikTok app to monitor the personal location of some specific American citizens, according to materials reviewed by Forbes.

The team behind the monitoring project — ByteDance’s Internal Audit and Risk Control department — is led by Beijing-based executive Song Ye, who reports to ByteDance cofounder and CEO Rubo Liang. The team primarily conducts investigations into potential misconduct by current and former ByteDance employees. But in at least two cases, the Internal Audit team also planned to collect TikTok data about the location of a U.S. citizen who had never had an employment relationship with the company, the materials show.

It is unclear from the materials whether data about these Americans was actually collected; however, the plan was for a Beijing-based ByteDance team to obtain location data from U.S. users’ devices.

Beijing Bus Drivers Have Been Told To Wear Wristbands To Monitor Their Emotions

The move was initiated by the state-run Beijing Public Transport Holding Group, which says it is aimed at protecting public safety. But legal experts have raised privacy concerns and say the wristbands could cause bus drivers undue distress and potentially lead to discrimination. Some 1,800 wristbands were distributed to bus drivers on cross-province and highway routes on Wednesday, the official Beijing Daily reported. It is unclear how many drivers will be required to wear the devices. The report said they would be used to monitor the drivers’ vital signs and emotional state in real time to improve safety.

TikTok Tracks You Across the Web, Even If You Don’t Use the App

A Consumer Reports investigation finds that TikTok, one of the country’s most popular apps, is partnering with a growing number of other companies to hoover up data about people as they travel across the internet. That includes people who don’t have TikTok accounts. These companies embed tiny TikTok trackers called “pixels” in their websites. Then TikTok uses the information gathered by all those pixels to help the companies target ads at potential customers, and to measure how well their ads work. To look into TikTok’s use of online tracking, CR asked the security firm Disconnect to scan about 20,000 websites for the company’s pixels. In our list, we included the 1,000 most popular websites overall, as well as some of the biggest sites with domains ending in “.org,” “.edu,” and “.gov.” We wanted to look at those sites because they often deal with sensitive subjects. We found hundreds of organizations sharing data with TikTok.
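
A scan like the one CR commissioned can be sketched as a search of each page's HTML for known tracker hostnames. This is a simplified illustration, not Disconnect's actual methodology: the hostname list here is illustrative, and a real scan would use a maintained tracker database and also observe the live network requests a page makes.

```python
import re

# Illustrative tracker hostnames to look for; a production scan would use a
# maintained list (such as Disconnect's) rather than these three examples.
TRACKER_HOSTS = [
    "analytics.tiktok.com",
    "connect.facebook.net",
    "www.googletagmanager.com",
]

def find_trackers(html):
    """Return which known tracker hosts appear in a page's HTML source."""
    return [host for host in TRACKER_HOSTS if re.search(re.escape(host), html)]

# A page embedding a pixel script would contain something like this tag:
sample = '<script src="https://analytics.tiktok.com/i18n/pixel/sdk.js"></script>'
found = find_trackers(sample)
```

Static HTML matching like this undercounts trackers loaded dynamically by other scripts, which is why observing actual network traffic matters for a thorough audit.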

If you go to the United Methodist Church’s main website, TikTok hears about it. Interested in joining Weight Watchers? TikTok finds that out, too. The Arizona Department of Economic Security tells TikTok when you view pages concerned with domestic violence or food assistance. Even Planned Parenthood uses the trackers, automatically notifying TikTok about every person who goes to its website, though it doesn’t share information from the pages where you can book an appointment. (None of those groups responded to requests for comment.) The number of TikTok trackers we saw was just a fraction of those we observed from Google and Meta. However, TikTok’s advertising business is exploding, and experts say the data collection will probably grow along with it.

After Disconnect researchers conducted a broad search for TikTok trackers, we asked them to take a close look at what kind of information was being shared by 15 specific websites. We focused on sites where we thought people would have a particular expectation of privacy, such as advocacy organizations and hospitals, along with retailers and other kinds of companies. Disconnect found that data being transmitted to TikTok can include your IP address, a unique ID number, what page you’re on, and what you’re clicking, typing, or searching for, depending on how the website has been set up. What does TikTok do with all that information? “Like other platforms, the data we receive from advertisers is used to improve the effectiveness of our advertising services,” says Melanie Bosselait, a TikTok spokesperson. The data “is not used to group individuals into particular interest categories for other advertisers to target.” If TikTok receives data about someone who doesn’t have a TikTok account, the company uses that data only for aggregated reports that it sends to advertisers about their websites, she says. There’s no independent way for consumers or privacy researchers to verify such statements. But TikTok’s terms of service say its advertising customers aren’t allowed to send the company certain kinds of sensitive information, such as data about children, health conditions, or finances. “We continuously work with our partners to avoid inadvertent transmission of such data,” TikTok’s Bosselait says.
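
The kinds of data Disconnect observed can be made concrete with a sketch of the request a generic tracking pixel fires. Everything here is invented for illustration (the endpoint, parameter names, and ID scheme are not TikTok's actual pixel protocol); note that the visitor's IP address is not a parameter at all, since it travels implicitly with the HTTP request itself.

```python
from urllib.parse import urlencode

# Hypothetical endpoint; a real pixel would point at the tracker's own server.
PIXEL_ENDPOINT = "https://analytics.example.com/pixel"

def beacon_url(pixel_id, event, page_url, anon_id):
    """Build the URL a tracking pixel would request when an event fires."""
    params = {
        "id": pixel_id,      # identifies which site embedded the pixel
        "event": event,      # e.g. "PageView", "Search", "Click"
        "url": page_url,     # the page the visitor is currently on
        "anon_id": anon_id,  # cookie-based identifier for the visitor
    }
    return f"{PIXEL_ENDPOINT}?{urlencode(params)}"

# One page view on a donation page produces a request like this:
url = beacon_url("ABC123", "PageView", "https://example.org/donate", "u-42")
```

Because the page URL rides along with a persistent visitor ID, a tracker can reconstruct someone's path across every site that embeds its pixel, which is exactly the cross-site visibility the investigation describes.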

America’s Funniest Home Surveillance Network Isn’t Funny

Amazon is normalizing neighborhood panopticons by turning its doorbell videos into a TV show. Orwell wouldn’t be laughing.

When smartphones first came on the scene, their built-in cameras were limited to personal use. Then social media sites like Facebook and Instagram created a beast that millions wanted to feed, and photos became a public spectacle. The same phenomenon is happening to doorbell cameras. Initially marketed to make customers feel safer in their homes, their footage is now being uploaded for entertainment. On TikTok, the hashtag Ringdoorbell has more than 2.7 billion views.

Amazon.com Inc., which owns market-dominating Ring, has seen and grabbed a lucrative opportunity, and is contributing to the gradual erosion of our privacy in the process.

On Monday, the company premiered Ring Nation, a television show syndicated across more than 70 American cities. Hosted by the comedian Wanda Sykes and produced by Metro-Goldwyn-Mayer, which Amazon finished buying in March, the 20-minute program features videos captured on smartphones and Amazon’s Ring doorbell cameras, which the company sells for about $105.
