Resources

Google Sold Android Phones With Hidden Insecure Feature, Companies Find

Google’s master software for some Android phones includes a hidden feature that is insecure and could be activated to allow remote control or spying on users, according to a security company that found it inside phones at a U.S. intelligence contractor. From a report:
The feature appears intended to give employees at stores selling Pixel phones and other models deep access to the devices so they can demonstrate how they work, according to researchers at iVerify who shared their findings with The Washington Post. The discovery and Google’s lack of explanation alarmed the intelligence contractor, data analysis platform vendor Palantir Technologies, to the extent that it has stopped issuing Android phones to employees, Palantir told The Post.

“Mobile security is a very real concern for us, given where we’re operating and who we’re serving,” Palantir Chief Information Security Officer Dane Stuckey said. “This was very deleterious of trust, to have third-party, unvetted insecure software on it. We have no idea how it got there, so we made the decision to effectively ban Androids internally.” The security company said it contacted Google about its findings more than 90 days ago and that the tech giant has not indicated whether it would remove or fix the application. On Wednesday night, Google told The Post that it would issue an update to remove the application. “Out of an abundance of precaution, we will be removing this from all supported in-market Pixel devices with an upcoming Pixel software update,” said company spokesperson Ed Fernandez. He said distributors of other Android phones would also be notified.

Microsoft Data-Scraping Windows Recall Feature

Recall is a Windows service that runs in the background on compatible PCs, continuously taking screenshots of user activity, scanning those screenshots with optical character recognition (OCR), and saving the OCR text and the screenshots to a giant searchable database on the user’s PC. The goal, according to Microsoft, is to help users retrace their steps and dig up information about things they had previously found or done on their PCs.
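
Microsoft has not published Recall’s internals, but the general pattern described here (capture, OCR, local full-text index) can be sketched in a few lines. The `mss` and `pytesseract` packages and the schema below are illustrative assumptions, not Microsoft’s implementation:

```python
# A minimal sketch of the capture/OCR/index loop described above. This is NOT
# Microsoft's implementation; the packages and schema are assumptions.
import sqlite3
import time

import mss
import pytesseract
from PIL import Image

db = sqlite3.connect("recall_demo.db")
db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS snapshots USING fts5(ts, text)")

def capture_and_index():
    """Grab the screen, OCR it, and add the text to a local full-text index."""
    with mss.mss() as screen:
        shot = screen.grab(screen.monitors[1])       # primary display
        img = Image.frombytes("RGB", shot.size, shot.rgb)
    text = pytesseract.image_to_string(img)          # OCR the screenshot
    db.execute("INSERT INTO snapshots VALUES (?, ?)",
               (time.strftime("%Y-%m-%d %H:%M:%S"), text))
    db.commit()

def search(term):
    """Full-text search over everything that has appeared on screen."""
    return db.execute("SELECT ts FROM snapshots WHERE snapshots MATCH ?",
                      (term,)).fetchall()
```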

The satellites using radar to peer at Earth in minute detail

Synthetic aperture radar (SAR) allows satellites to bounce radar signals off the ground and interpret the echo – and it can even peer through clouds.

Clouds cover around two-thirds of the world at any one time, preventing conventional satellites from seeing much of the planet.

But now a declassified technology known as synthetic aperture radar (SAR) can be installed on satellites to “see” the Earth’s surface in the dark and through clouds (or the smoke of wildfires), providing a constant, unobscured view of our planet and revealing changes on its surface in great detail.
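
For a sense of why SAR resolves such detail from orbit, the standard textbook formulas give a back-of-the-envelope estimate: slant-range resolution is c/(2B) for a radar chirp of bandwidth B, and stripmap azimuth resolution is roughly half the physical antenna length, independent of range. The figures below are illustrative, not any particular satellite’s specifications:

```python
# Back-of-the-envelope SAR resolution from the standard textbook formulas.
# These numbers are illustrative, not any vendor's published specifications.
C = 299_792_458.0  # speed of light, m/s

def range_resolution(bandwidth_hz: float) -> float:
    """Slant-range resolution: c / (2 * chirp bandwidth)."""
    return C / (2 * bandwidth_hz)

def azimuth_resolution(antenna_length_m: float) -> float:
    """Stripmap azimuth resolution: about half the antenna length,
    independent of how far away the target is."""
    return antenna_length_m / 2

print(range_resolution(300e6))   # 300 MHz chirp  -> ~0.5 m
print(azimuth_resolution(1.0))   # 1 m antenna    ->  0.5 m
```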

Previously used to equip only a relatively small number of large commercial satellites, this technology is now being combined with constellations of inexpensive nanosatellites in low-Earth orbit by start-ups such as Iceye and Capella Space. The goal is to provide round-the-clock observation of nearly anywhere on the planet for everyone from non-governmental organisations to military customers.

Taser Company Axon Is Selling AI That Turns Body Cam Audio Into Police Reports

Axon on Tuesday announced a new tool called Draft One that uses artificial intelligence built on OpenAI’s GPT-4 Turbo model to transcribe audio from body cameras and automatically turn it into a police report. Axon CEO Rick Smith told Forbes that police officers will then be able to review the document to ensure accuracy. From the report:
Axon claims one early tester of the tool, the Fort Collins, Colorado, Police Department, has seen an 82% decrease in time spent writing reports. “If an officer spends half their day reporting, and we can cut that in half, we have an opportunity to potentially free up 25% of an officer’s time to be back out policing,” Smith said. These reports, though, are often used as evidence in criminal trials, and critics are concerned that relying on AI could put people at risk by depending on language models that are known to “hallucinate,” or make things up, as well as display racial bias, either blatantly or unconsciously.
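
Axon has not published Draft One’s internals, but the transcribe-then-draft pipeline the article describes can be sketched against OpenAI’s public API. This is a minimal sketch with assumed model names, not Axon’s product:

```python
# A minimal sketch of a transcribe-then-draft pipeline like the one described
# above. This is NOT Axon's Draft One; it shows the general shape using
# OpenAI's public API, and the model names are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_report(audio_path: str) -> str:
    # 1. Speech-to-text on the body-cam audio.
    with open(audio_path, "rb") as f:
        transcript = client.audio.transcriptions.create(model="whisper-1", file=f)
    # 2. Have the language model draft a report, which a human officer
    #    must still review for accuracy before filing.
    reply = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[
            {"role": "system",
             "content": "Draft a factual incident report strictly from the "
                        "transcript. Do not add details that are not present."},
            {"role": "user", "content": transcript.text},
        ],
    )
    return reply.choices[0].message.content
```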

“It’s kind of a nightmare,” said Dave Maass, surveillance technologies investigations director at the Electronic Frontier Foundation. “Police, who aren’t specialists in AI, and aren’t going to be specialists in recognizing the problems with AI, are going to use these systems to generate language that could affect millions of people in their involvement with the criminal justice system. What could go wrong?” Smith acknowledged there are dangers. “When people talk about bias in AI, it really is: Is this going to exacerbate racism by taking training data that’s going to treat people differently?” he told Forbes. “That was the main risk.”

Smith said Axon is recommending police don’t use the AI to write reports for incidents as serious as a police shooting, where vital information could be missed. “An officer-involved shooting is likely a scenario where it would not be used, and I’d probably advise people against it, just because there’s so much complexity, the stakes are so high.” He said some early customers are only using Draft One for misdemeanors, though others are writing up “more significant incidents,” including use-of-force cases. Axon, however, won’t have control over how individual police departments use the tools.

Can I phone a friend? How cops circumvent face recognition bans

Police in multiple major US cities have figured out a trick to circumvent their cities’ bans on facial recognition technology. Just ask a friend in a city without any such restrictions to do it for you.

It’s not immediately clear how widespread such side-stepping of facial recognition restrictions in the US may be. According to the Washington Post’s review of police documents in Austin, Texas, and San Francisco, however, it’s definitely something that’s gone on in at least two major US metropolitan areas.

WaPo reported over the weekend that SFPD and APD have both sought help from neighboring cop shops on occasion, though with different levels of success. San Francisco cops have reportedly sought facial recognition searches from other agencies on five occasions since 2019, but never got a match back; Austin police, on the other hand, have sent at least 13 searches since 2020.

“Some” of the searches from APD returned hits, and multiple suspects in Austin have been arrested and charged as a result of the searches, WaPo said.

The FBI Is Using Push Notifications To Track Criminals

The Post did a little digging into court records and found evidence of at least 130 search warrants filed by the feds for push notification data in cases spanning 14 states. In those cases, FBI officials asked tech companies like Google, Apple, and Facebook to fork over data related to a suspect’s mobile notifications, then used the data to implicate the suspect in criminal behavior linked to a particular app, even though many of those apps were supposedly anonymous communication platforms, like Wickr.

How exactly is this possible? Push notifications, which are provided by a mobile operating system provider, include embedded metadata that can be examined to understand the use of the mobile apps on a particular phone. Apps come laced with a quiet identifier, a “push token,” which is stored on the corporate servers of a company like Apple or another phone manufacturer after a user signs up to use a particular app. Those tokens can later be used to identify the person using the app, based on the information associated with the device on which the app was downloaded. Even turning off push notifications on your device doesn’t necessarily disable this feature, experts contend. […]
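
A rough sketch of why a push token is identifying: when an app registers a device for notifications, its backend typically stores the OS-issued token against an account, so anyone who can compel that mapping can walk from a token back to a person. The schema below is an assumption for illustration, not any vendor’s actual system:

```python
# An illustrative sketch (assumed schema, not any vendor's) of why push tokens
# identify people: the backend keeps a token-to-account mapping, and a legal
# demand for that mapping walks straight from token to user.
import sqlite3

db = sqlite3.connect("push_tokens_demo.db")
db.execute("""CREATE TABLE IF NOT EXISTS registrations (
                push_token TEXT PRIMARY KEY,  -- issued by the OS vendor per device+app
                account_id TEXT,              -- the app's notion of the user
                device_model TEXT,
                registered_at TEXT)""")

def register(token, account_id, device_model, when):
    """Called when a user enables notifications in the app."""
    db.execute("INSERT OR REPLACE INTO registrations VALUES (?, ?, ?, ?)",
               (token, account_id, device_model, when))
    db.commit()

def identify(token):
    """The lookup a warrant effectively compels: token -> account."""
    return db.execute("SELECT account_id, device_model FROM registrations "
                      "WHERE push_token = ?", (token,)).fetchone()
```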

While finding new ways to catch pedophiles and terrorists may not seem like the worst thing in the world, the Post article highlights the voices of critics who fear that this kind of mobile data could be used to track people who have not committed serious crimes — like political activists or women seeking abortions in states where the procedure has been restricted.

How the Pentagon Learned To Use Targeted Ads To Find Its Targets

In 2019, a government contractor and technologist named Mike Yeagley began making the rounds in Washington, DC. He had a blunt warning for anyone in the country’s national security establishment who would listen: The US government had a Grindr problem. A popular dating and hookup app, Grindr relied on the GPS capabilities of modern smartphones to connect potential partners in the same city, neighborhood, or even building. The app can show how far away a potential partner is in real time, down to the foot. But to Yeagley, Grindr was something else: one of the tens of thousands of carelessly designed mobile phone apps that leaked massive amounts of data into the opaque world of online advertisers. That data, Yeagley knew, was easily accessible by anyone with a little technical know-how. So Yeagley — a technology consultant then in his late forties who had worked in and around government projects nearly his entire career — made a PowerPoint presentation and went out to demonstrate precisely how that data was a serious national security risk.

As he would explain in a succession of bland government conference rooms, Yeagley was able to access the geolocation data on Grindr users through a hidden but ubiquitous entry point: the digital advertising exchanges that serve up the little digital banner ads along the top of Grindr and nearly every other ad-supported mobile app and website. This was possible because of the way online ad space is sold, through near-instantaneous auctions in a process called real-time bidding. Those auctions were rife with surveillance potential. You know that ad that seems to follow you around the internet? It’s tracking you in more ways than one. In some cases, it’s making your precise location available in near-real time to both advertisers and people like Mike Yeagley, who specialized in obtaining unique data sets for government agencies.
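
A stripped-down illustration of what a bidder sees in one of those real-time auctions. The field names follow the public OpenRTB specification; every value here is invented:

```python
# A stripped-down OpenRTB-style bid request. Field names follow the public
# OpenRTB spec; every value here is invented.
bid_request = {
    "id": "auction-123",
    "app": {"bundle": "com.example.dating"},       # which app the ad slot is in
    "device": {
        "ifa": "EXAMPLE-AD-ID-0000",               # resettable advertising ID
        "geo": {"lat": 38.8719, "lon": -77.0563,   # precise device location
                "type": 1},                        # 1 = GPS/location services
    },
}

def siphon_location(req):
    """Even a losing bidder receives the request, so it can log location
    keyed to the advertising ID without ever serving an ad."""
    d = req["device"]
    return d["ifa"], d["geo"]["lat"], d["geo"]["lon"]

print(siphon_location(bid_request))
```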

Working with Grindr data, Yeagley began drawing geofences — creating virtual boundaries in geographical data sets — around buildings belonging to government agencies that do national security work. That allowed Yeagley to see what phones were in certain buildings at certain times, and where they went afterwards. He was looking for phones belonging to Grindr users who spent their daytime hours at government office buildings. If the device spent most workdays at the Pentagon, the FBI headquarters, or the National Geospatial-Intelligence Agency building at Fort Belvoir, for example, there was a good chance its owner worked for one of those agencies. Then he started looking at the movement of those phones through the Grindr data. When they weren’t at their offices, where did they go? A small number of them had lingered at highway rest stops in the DC area at the same time and in proximity to other Grindr users — sometimes during the workday and sometimes while in transit between government facilities. For other Grindr users, he could infer where they lived, see where they traveled, even guess at whom they were dating.
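
The geofencing step itself is simple: flag any location ping that falls within some radius of a building of interest. A minimal sketch, with illustrative coordinates and radius:

```python
# The geofencing step in miniature: flag any ad-data ping that falls within a
# radius of a building of interest. Coordinates and radius are illustrative.
from math import asin, cos, radians, sin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6_371_000 * asin(sqrt(a))

PENTAGON = (38.8719, -77.0563)

def inside_geofence(ping, center=PENTAGON, radius_m=400):
    """ping: a {'lat': ..., 'lon': ...} row from commercial adtech data.
    Devices that hit the fence on many workdays likely belong to staff."""
    return haversine_m(ping["lat"], ping["lon"], *center) <= radius_m
```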

Intelligence agencies have a long and unfortunate history of trying to root out LGBTQ Americans from their workforce, but this wasn’t Yeagley’s intent. He didn’t want anyone to get in trouble. No disciplinary actions were taken against any employee of the federal government based on Yeagley’s presentation. His aim was to show that buried in the seemingly innocuous technical data that comes off every cell phone in the world is a rich story — one that people might prefer to keep quiet. Or at the very least, not broadcast to the whole world. And that each of these intelligence and national security agencies had employees who were recklessly, if obliviously, broadcasting intimate details of their lives to anyone who knew where to look. As Yeagley showed, all that information was available for sale, for cheap. And it wasn’t just Grindr, but rather any app that had access to a user’s precise location — other dating apps, weather apps, games. Yeagley chose Grindr because it happened to generate a particularly rich set of data and its user base might be uniquely vulnerable.

The report goes into great detail about how these intelligence and data-analysis techniques, notably a program called Locomotive developed by PlanetRisk, enabled the tracking of mobile devices associated with Russian President Vladimir Putin’s entourage. By analyzing commercial adtech data, including precise geolocation information collected from mobile advertising bid requests, analysts were able to monitor the movements of phones that frequently accompanied Putin, indicating the locations and movements of his security personnel, aides, and support staff.

This capability underscored the surveillance potential of commercially available data, providing insights into the activities and security arrangements of high-profile individuals without directly compromising their personal devices.

US Schools Are Normalizing Intrusive Surveillance

As the authors detail, among the technologies are surveillance cameras. These are often linked to software for facial recognition, access control, behavior analysis, and weapon detection. That is, cameras scan student faces and then algorithms identify them, allow or deny them entry based on that ID, decide if their activities are threatening, and determine if objects they carry may be dangerous or forbidden.
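
The access-control pipeline the report describes (detect a face, match it against an enrollment list, allow or deny entry) can be sketched roughly as follows; the matching step is a stand-in, since real systems use proprietary face-recognition models:

```python
# A toy sketch of that pipeline: detect a face, match it against an enrollment
# list, and allow or deny entry. Uses OpenCV's bundled Haar cascade; the
# matching step is a stand-in, since real systems use proprietary models.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

ENROLLED = {"student-042", "student-117"}   # IDs allowed through the door

def identify(face_crop):
    """Stand-in for a vendor's face-matching model; returns an ID or None."""
    return "student-042"

def check_entry(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        student = identify(frame[y:y + h, x:x + w])
        return student is not None and student in ENROLLED
    return False   # no face found: hold the door, perhaps flag for review
```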

“False hits, such as mistaking a broomstick, three-ring binder, or a Google Chromebook laptop for a gun or other type of weapon, could result in an armed police response to a school,” cautions the report.

That’s not a random assortment of harmless-until-misidentified items; a footnoted 2022 Charlotte Observer piece points out such objects were tagged as weapons by scanners in the Charlotte-Mecklenburg Schools. “A how-to video posted earlier this year by administrators at Butler High School instructs students to remove certain belongings from their backpacks — and walk through the scanner holding their laptops above their heads — to avoid setting off a false alarm,” it adds.

Huh. What happens if behavior analysis algorithms decide that brandished laptops are threatening?

Also called out is software that monitors social media, students’ communications, and web-surfing habits. Audio monitors that are supposed to detect gunshots—but can be triggered by slammed doors (as at Greenwood High School in Arkansas earlier this year)—also feature in many schools.

Of students aged 14–18 surveyed by the ACLU, 62 percent saw video cameras in their schools (the U.S. Department of Education says cameras are used by 91 percent of public schools), and 49 percent reported monitoring software. Understandably, this affects their behavior. Thirty-two percent say, “I always feel like I’m being watched,” and 26 percent fret over what their “school and the companies they contract with do with the data.”

“Research demonstrates the damaging effect of surveillance on children’s ability to develop in healthy ways,” Fedders added. “Pervasive surveillance can create a climate in which adults are seen as overestimating and overreacting to risk. Children, in turn, cannot develop the ability to evaluate and manage risk themselves in order to function effectively.”

Notably, school surveillance normalizes the idea that constant monitoring is good and necessary for preserving safety.

School surveillance tech does more harm than good, ACLU report finds

An ACLU report has found that, despite claims from companies, surveillance technology in US schools does not improve student safety, and that constant surveillance can in fact cause a number of harms to students, including making them less likely to report dangerous behavior.

Schools typically use technologies such as cameras, facial recognition software and communication monitoring and filtering technology, which have been marketed by education technology surveillance companies as intervention tools against school shootings, suicides and bullying. In 2021, US schools and colleges spent $3.1bn on these products and this number is expected to grow by 8% every year, according to the report.

But the ACLU’s report concludes that there is little to no independent research or evidence showing that this technology works.

Signal President Says AI is Fundamentally ‘a Surveillance Technology’

Why is it that so many companies that rely on monetizing the data of their users seem to be extremely hot on AI? If you ask Signal president Meredith Whittaker (and I did), she’ll tell you it’s simply because “AI is a surveillance technology.” Onstage at TechCrunch Disrupt 2023, Whittaker explained her perspective that AI is largely inseparable from the big data and targeting industry perpetuated by the likes of Google and Meta, as well as less consumer-focused but equally prominent enterprise and defense companies. “It requires the surveillance business model; it’s an exacerbation of what we’ve seen since the late ’90s and the development of surveillance advertising. AI is a way, I think, to entrench and expand the surveillance business model,” she said.

“The Venn diagram is a circle.” “And the use of AI is also surveillant, right?” she continued. “You know, you walk past a facial recognition camera that’s instrumented with pseudo-scientific emotion recognition, and it produces data about you, right or wrong, that says ‘you are happy, you are sad, you have a bad character, you’re a liar, whatever.’ These are ultimately surveillance systems that are being marketed to those who have power over us generally: our employers, governments, border control, etc., to make determinations and predictions that will shape our access to resources and opportunities.”

Internet-Connected Cars Fail Privacy and Security Tests

Mozilla found brands including BMW, Ford, Toyota, Tesla, and Subaru collect data about drivers including race, facial expressions, weight, health information, and where you drive. Some of the cars tested collected data you wouldn’t expect your car to know about, including details about sexual activity, race, and immigration status, according to Mozilla. […] The worst offender was Nissan, Mozilla said. The carmaker’s privacy policy suggests the manufacturer collects information including sexual activity, health diagnosis data, and genetic data, though there are no details about how exactly that data is gathered. Nissan reserves the right to share and sell “preferences, characteristics, psychological trends, predispositions, behavior, attitudes, intelligence, abilities, and aptitudes” to data brokers, law enforcement, and other third parties.

Other brands didn’t fare much better. Volkswagen, for example, collects your driving behaviors such as your seatbelt and braking habits and pairs that with details such as age and gender for targeted advertising. Kia’s privacy policy reserves the right to monitor your “sex life,” and Mercedes-Benz ships cars with TikTok pre-installed on the infotainment system, an app that has its own thicket of privacy problems. The privacy and security problems extend beyond the nature of the data car companies siphon off about you. Mozilla said it was unable to determine whether the brands encrypt any of the data they collect, and only Mercedes-Benz responded to the organization’s questions.

Mozilla also found that many car brands engage in “privacy washing,” or presenting consumers with information that suggests they don’t have to worry about privacy issues when the exact opposite is true. Many leading manufacturers are signatories to the Alliance for Automotive Innovation’s “Consumer Privacy Protection Principles” (PDF). According to Mozilla, these are a non-binding set of vague promises organized by the car manufacturers themselves. Questions around consent are essentially a joke as well. Subaru, for example, says that by being a passenger in the car, you are considered a “user” who has given the company consent to harvest information about you. Mozilla said a number of car brands say it’s the driver’s responsibility to let passengers know about their car’s privacy policies — as if the privacy policies are comprehensible to drivers in the first place. Toyota, for example, has a constellation of 12 different privacy policies for your reading pleasure.

Cellebrite Asks Cops To Keep Its Phone Hacking Tech ‘Hush Hush’

For years, cops and other government authorities all over the world have been using phone hacking technology provided by Cellebrite to unlock phones and obtain the data within. And the company has been keen on keeping the use of its technology “hush hush.” As part of the deal with government agencies, Cellebrite asks users to keep its tech — and the fact that they used it — secret, TechCrunch has learned. This request concerns legal experts who argue that powerful technology like the one Cellebrite builds and sells, and how it gets used by law enforcement agencies, ought to be public and scrutinized.

In a leaked training video for law enforcement customers that was obtained by TechCrunch, a senior Cellebrite employee tells customers that “ultimately, you’ve extracted the data, it’s the data that solves the crime, how you got in, let’s try to keep that as hush hush as possible.” “We don’t really want any techniques to leak in court through disclosure practices, or you know, ultimately in testimony, when you are sitting in the stand, producing all this evidence and discussing how you got into the phone,” the employee, who we are not naming, says in the video.

US Spy Agencies Will Start Sharing More Cyber-Threat Intelligence with Private Companies

U.S. spy agencies will share more intelligence with U.S. companies, nongovernmental organizations and academia under a new strategy released this week that acknowledges concerns over new threats, such as another pandemic and increasing cyberattacks. The National Intelligence Strategy, which sets broad goals for the sprawling U.S. intelligence community, says that spy agencies must reach beyond the traditional walls of secrecy and partner with outside groups to detect and deter supply-chain disruptions, infectious diseases and other growing transnational threats. The intelligence community “must rethink its approach to exchanging information and insights,” the strategy says.

The U.S. government in recent years has begun sharing vast amounts of cyber-threat intelligence with U.S. companies, utilities and others who are often the main targets of foreign hackers, as well as information on foreign-influence operations with social-media companies… The emphasis on greater intelligence sharing is part of a broader trend toward declassification that the Biden administration has pursued.

Your School’s Next Security Guard May Be an AI-Enabled Robot

When Lori Andrews attended her daughter’s graduation at Santa Fe High School, she spotted a 5-foot-10, 400-pound robot roaming the football field alongside the newest alumni.

Andrews, a visual arts teacher at the school, said she initially thought the robot was taking photos of the graduates. She was taken aback when her husband described it as a police robot and she learned that it was providing 360-degree camera footage to the school security team.

“My reaction was, ‘Yuck,’” Andrews said. “What is it filming, what kind of camera is on it?”

The New Mexico school district started a pilot program in mid-June with the robot, which patrols the multi-building campus grounds 24 hours a day, seven days a week.

Amid growing concerns about gun violence and mass shootings in schools, several companies are starting to offer similar robots to schools across the country. Few schools have deployed the machines thus far, primarily for campus surveillance. But they have the potential to do much more, including confronting attackers and others who come onto campuses without permission.

Using artificial intelligence, the robot in Santa Fe learns the school’s normal patterns of activity and detects individuals who are on campus after hours or are displaying aggressive behavior, said Andy Sanchez, who manages sales for Team 1st Technologies, the robot’s distributor in North America.

In the case of an active shooter or other threat, the robot could alert the security team, Sanchez said. It could move toward the intruder and transmit video footage that informs the officers’ course of action, he said. The robot isn’t armed but can confront intruders, and human security team members would be able to speak to the intruder through the robot’s communication system.

The school chose to disable the robot’s weapons detection features during the pilot, although the security team is determining whether they might be enabled at a later time, said Mario Salbidrez, executive director of safety and security at Santa Fe Public Schools. Members of the district security team and the high school are responsible for reviewing video footage when the robot sends alerts about unusual activity.

The robot doesn’t have facial recognition features, and Santa Fe High School owns the robot’s video footage, meaning it can decide whether or not to save it, Sanchez said.

The robot hasn’t yet detected intruders on campus, but it has alerted the security team to new workers entering the school’s construction site and individuals attempting to open locked doors in harmless attempts to enter buildings, Salbidrez said. Its cameras have also caught faculty members waving to the cameras and students making peace signs in passing, he added.

Callie Trader, a rising senior at Santa Fe High School, said she is unfazed by additional surveillance on campus. She said she isn’t sure students will take the robot seriously, and she doesn’t think the robot will change students’ behavior any more than existing security cameras do.

“I think it will just be funnier, just different,” she said.

Reed Meschefske, a film studies and acting and drama teacher at Santa Fe High School, said that he already feels safe at school without the new surveillance measures. But the high school is large, and the robot, which he described as a “seven camera dog,” could help cover blind spots on campus that currently go undetected, he said.

Other districts are considering robots in a security role. Robert Stokes, co-owner and president of Stokes Robotics, said his company is working with multiple districts across the country. In most cases, schools will use robots in the classroom to teach students about coding, Stokes said. But in the face of an armed intruder, the robot could take more aggressive action, pointing a laser beam at a suspect’s chest or using flashing lights to try to induce them to drop their weapons.

Humans would be responsible for deciding the robot’s course of action in real time but could remain out of the line of fire in the case of an active shooter, Stokes said.

Brad Wade, superintendent of Wyandotte Public Schools in Oklahoma, said the district hopes to introduce four robots from Stokes Robotics in the fall. The district is primarily considering robots with video cameras that could monitor the doorways of school buildings, although the robots that can directly confront intruders aren’t out of the question, Wade added.

New technology may create the appearance of making campuses safer, said Kenneth Trump, president of the Ohio-based consulting firm National School Safety and Security Services. But schools should first focus on teaching students how to inform a trusted adult about suspicious incidents on campus, he said.

“There’s a difference between doing something that’s impactful versus doing something for the sake of doing something,” Trump said. “We need to make sure that we master kindergarten before we’re looking for Ph.D. solutions to school safety.”

Team 1st Technologies is piloting the robot at Santa Fe High School free of charge for the summer. The cost for the 2023-24 school year is estimated to be around $60,000 to $70,000, Salbidrez said. The school is still determining if the robot is worth the investment, he said.

“At this point, I don’t have anything to say no to it,” Salbidrez said. “But I don’t have enough compelling information to say yes to it either.”

FBI Abused Spy Law 280,000 Times In a Year

The FBI misused surveillance powers granted by Section 702 of the Foreign Intelligence Surveillance Act (FISA) over 278,000 times between 2020 and early 2021 to conduct warrantless searches on George Floyd protesters, January 6 Capitol rioters, and donors to a congressional campaign, according to a newly unclassified court opinion. The Register reports:

On Friday, the US Foreign Intelligence Surveillance Court made public a heavily redacted April 2022 opinion [PDF] that details hundreds of thousands of violations of Section 702 of the Foreign Intelligence Surveillance Act (FISA) — the legislative instrument that allows warrantless snooping. The Feds were found to have abused the spy law in a “persistent and widespread” manner, according to the court, repeatedly failing to adequately justify the need to go through US citizens’ communications using a law aimed at foreigners.

The court opinion details FBI queries run on thousands of individuals between 2020 and early 2021. This includes 133 people arrested during the George Floyd protests and more than 19,000 donors to a congressional campaign. In the latter, “the analyst who ran the query advised that the campaign was a target of foreign influence, but NSD determined that only eight identifiers used in the query had sufficient ties to foreign influence activities to comply with the querying standard,” the opinion says, referring to the Justice Department’s National Security Division (NSD). In other words, there wasn’t a strong enough foreign link to fully justify the communications search.

For the Black Lives Matter protests, the division determined that the FBI queries “were not reasonably likely to retrieve foreign intelligence information or evidence of a crime.” Again, an overreach of foreign surveillance powers. Additional “significant violations of the querying standard” occurred in searches related to the January 6, 2021 breach of the US Capitol, domestic drug and gang investigations, and domestic terrorism probes, according to the court. It’s said that more than 23,000 queries were run on people suspected of storming the Capitol.

Supreme Court Declines To Hear Challenge To Warrantless Pole Camera Surveillance

The U.S. Supreme Court [Monday] declined to hear Moore v. United States, leaving in place a patchwork of lower court decisions on an important and recurring question about privacy rights in the face of advancing surveillance technology. In this case, police secretly attached a small camera to a utility pole, using it to surveil a Massachusetts home 24/7 for eight months — all without a warrant. Law enforcement could watch the camera’s feed in real time, and remotely pan, tilt, and zoom close enough to read license plates and see faces. They could also review a searchable, digitized record of this footage at their convenience. The camera captured every coming and going of the home’s residents and their guests over eight months. As a result, the government targeted the home of a community pillar — a lawyer, respected judicial clerk, devoted church member, and a grandmother raising her grandkids — to cherry-pick images from months of unceasing surveillance in an effort to support unwarranted criminal charges against an innocent person.

Federal courts of appeals and state supreme courts have divided on the question of whether such sweeping surveillance is a Fourth Amendment search requiring a warrant. The highest courts of Massachusetts, Colorado, and South Dakota have held that long-term pole camera surveillance of someone’s home requires a warrant. In Moore v. United States, the members of the full en banc U.S. Court of Appeals for the First Circuit split evenly on the question, with three judges explaining that a warrant is required, and three judges expressing the belief that the Fourth Amendment imposes no limit on this invasive surveillance. This issue will continue to arise in the lower courts; the ACLU filed an amicus brief on the question in the U.S. Court of Appeals for the Tenth Circuit earlier this month.

Researchers Are Getting Eerily Good at Using WiFi to ‘See’ People Through Walls in Detail

Researchers at Carnegie Mellon University developed a method for detecting the three dimensional shape and movements of human bodies in a room, using only WiFi routers. From a report:
To do this, they used DensePose, a system for mapping all of the pixels on the surface of a human body in a photo. DensePose was developed by London-based researchers and Facebook’s AI researchers. From there, according to their recently uploaded preprint paper on arXiv, they developed a deep neural network that maps the phase and amplitude of WiFi signals sent and received by routers to coordinates on human bodies. Researchers have been working on “seeing” people without using cameras or expensive LiDAR hardware for years. In 2013, a team of researchers at MIT found a way to use cell phone signals to see through walls; in 2018, another MIT team used WiFi to detect people in another room and translate their movements to walking stick-figures.
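
The preprint should be consulted for the real architecture, but the basic idea (a network mapping WiFi channel-state amplitude and phase across antenna pairs to a DensePose-style 2D body map) can be caricatured in a few lines of PyTorch. Every shape and layer choice below is an assumption:

```python
# A toy caricature (not the CMU model) of the idea: map WiFi channel-state
# amplitude and phase across antenna pairs to a DensePose-style 2D body map.
# Every shape and layer choice here is an assumption.
import torch
import torch.nn as nn

class WifiPoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.Upsample(size=(48, 64)),       # blow up to an image-like grid
            nn.Conv2d(64, 2, kernel_size=1),  # e.g. body-surface UV coordinates
        )

    def forward(self, csi):
        # csi: (batch, 2 channels = amplitude+phase, 9 antenna pairs, 30 subcarriers)
        return self.decode(self.encode(csi))

net = WifiPoseNet()
print(net(torch.randn(1, 2, 9, 30)).shape)    # torch.Size([1, 2, 48, 64])
```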

Mysterious Company With Government Ties Plays Key Internet Role

An offshore company that is trusted by the major web browsers and other tech companies to vouch for the legitimacy of websites has connections to contractors for U.S. intelligence agencies and law enforcement, according to security researchers, documents and interviews. Google’s Chrome, Apple’s Safari, nonprofit Firefox and others allow the company, TrustCor Systems, to act as what’s known as a root certificate authority, a powerful spot in the internet’s infrastructure that guarantees websites are not fake, guiding users to them seamlessly.
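
The power at stake is easy to demonstrate: every TLS certificate chains up to some root CA shipped in the browser or OS trust store, and a misbehaving root can mint certificates for any site that clients will accept. A small illustration using the stdlib `ssl` module and the third-party `cryptography` package:

```python
# A small illustration of the trust relationship at issue: a site's certificate
# chains up to a root CA shipped with the browser or OS, and a misbehaving root
# can mint certificates for any site that clients will accept.
import ssl
from cryptography import x509

pem = ssl.get_server_certificate(("example.com", 443))
cert = x509.load_pem_x509_certificate(pem.encode())

print(cert.subject.rfc4514_string())  # who the certificate vouches for
print(cert.issuer.rfc4514_string())   # who signed it; following issuers upward
                                      # eventually lands on a trust-store root
```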

The company’s Panamanian registration records show that it has the identical slate of officers, agents and partners as a spyware maker identified this year as an affiliate of Arizona-based Packet Forensics, which public contracting records and company documents show has sold communication interception services to U.S. government agencies for more than a decade. One of those TrustCor partners has the same name as a holding company managed by Raymond Saulino, who was quoted in a 2010 Wired article as a spokesman for Packet Forensics. Saulino also surfaced in 2021 as a contact for another company, Global Resource Systems, that caused speculation in the tech world when it briefly activated and ran more than 100 million previously dormant IP addresses assigned decades earlier to the Pentagon. The Pentagon reclaimed the digital territory months later, and it remains unclear what the brief transfer was about, but researchers said the activation of those IP addresses could have given the military access to a huge amount of internet traffic without revealing that the government was receiving it.

Google is Quietly Working on a Wearable Device for Preteens

Google is developing a wearable device for preteens under its Fitbit group as it attempts to capture a growing demographic of younger users who own wearable tech, three employees familiar with the project told Insider.

Internally code-named “Project Eleven,” the wearable is designed to help older kids form healthy relationships with their phones and social media, two of the employees said. One of them said the device could include safety features that would let parents contact their children and know their whereabouts.

Project Eleven may be an opportunity to capture a growing market of younger users who would otherwise grow up to become Apple loyalists.

New Mac App Wants To Record Everything You Do – So You Can ‘Rewind’ It Later

Yesterday, a company called Rewind AI announced a self-titled software product for Macs with Apple Silicon that reportedly keeps a highly compressed, searchable record of everything you do locally on your Mac and lets you “rewind” time to see it later. If you forget something you’ve “seen, said, or heard,” Rewind wants to help you find it easily. Rewind AI claims its product stores all recording data locally on your machine and does not require cloud integration. Among its promises, Rewind will reportedly let you rewind Zoom meetings and pull information from them in a searchable form. In a video demo on Rewind.AI’s site, the app opens when a user presses Command+Shift+Space. The search bar suggests typing “anything you’ve seen, said, or heard.” It also shows a timeline at the bottom of the screen that represents previous actions in apps.

After searching for “tps reports,” the video depicts a grid view of every time Rewind has encountered the phrase “tps reports” as audio or text in any app, including Zoom chats, text messages, emails, Slack conversations, and Word documents. It describes filtering the results by app — and the ability to copy and paste from these past instances if necessary. Founded by Dan Siroker and Brett Bejcek, Rewind AI is composed of a small remote team located in various cities around the US. Portions of the company previously created Scribe, a precursor to Rewind that received some press attention in 2021. In an introductory blog post, Rewind AI co-founder Dan Siroker writes, “What if we could use technology to augment our memory the same way a hearing aid can augment our hearing?”
