Archives: August 2021

Investigation Reveals Widespread Cellphone Surveillance of the Innocent

Cellphones “can be transformed into surveillance devices,” writes the Guardian, reporting startling new details about which innocent people are still being surveilled (as part of a collaborative reporting project with 16 other media outlets led by the French nonprofit Forbidden Stories).

Long-time Slashdot reader shanen shared the newspaper’s critique of a “privatised government surveillance industry” that’s made NSO a billion-dollar company, thanks to its phone-penetrating spy software Pegasus:
[NSO] insists only carefully vetted government intelligence and law enforcement agencies can use Pegasus, and only to penetrate the phones of “legitimate criminal or terror group targets”. Yet in the coming days the Guardian will be revealing the identities of many innocent people who have been identified as candidates for possible surveillance by NSO clients in a massive leak of data… The presence of their names on this list indicates the lengths to which governments may go to spy on critics, rivals and opponents.

First we reveal how journalists across the world were selected as potential targets by these clients prior to a possible hack using NSO surveillance tools. Over the coming week we will be revealing the identities of more people whose phone numbers appear in the leak. They include lawyers, human rights defenders, religious figures, academics, businesspeople, diplomats, senior government officials and heads of state. Our reporting is rooted in the public interest. We believe the public should know that NSO’s technology is being abused by the governments who license and operate its spyware.

But we also believe it is in the public interest to reveal how governments look to spy on their citizens and how seemingly benign processes such as HLR lookups [which track the general locations of cellphone users] can be exploited in this environment.
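HLR lookups are sold commercially as a routine phone-number validation service, which is part of why their abuse is so quiet. The sketch below shows how such a lookup API is typically consumed; the endpoint, parameters, and response fields are hypothetical stand-ins rather than any real provider’s interface, but the shape of the result, whether the phone is reachable and which network is currently serving it, is what makes repeated lookups useful for coarsely tracking a target.

```python
# Minimal sketch of querying a commercial HLR-lookup service. The URL,
# parameters, and response fields are hypothetical; real providers differ.
import requests

HLR_API = "https://api.example-hlr-provider.com/v1/lookup"  # hypothetical endpoint

def hlr_lookup(msisdn: str, api_key: str) -> dict:
    """Resolve a phone number's live network status via a (hypothetical) HLR API."""
    resp = requests.get(
        HLR_API,
        params={"msisdn": msisdn},  # number in international format
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    return {
        "reachable": data.get("reachable"),              # phone on and registered?
        "home_network": data.get("home_network"),        # subscriber's carrier
        "serving_network": data.get("serving_network"),  # network serving it right now
    }

if __name__ == "__main__":
    # Repeated calls over time reveal when a target's phone roams onto a new
    # network -- coarse location information, obtained without touching the device.
    print(hlr_lookup("+15551234567", api_key="YOUR_KEY"))
```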

It is not possible to know without forensic analysis whether the phone of someone whose number appears in the data was actually targeted by a government or whether it was successfully hacked with NSO’s spyware. But when our technical partner, Amnesty International’s Security Lab, conducted forensic analysis on dozens of iPhones that belonged to potential targets at the time they were selected, they found evidence of Pegasus activity in more than half.

The investigators say that potential targets included nearly 200 journalists around the world, including numerous reporters from CNN, the Associated Press, Voice of America, the New York Times, the Wall Street Journal, Bloomberg News, Le Monde in France, and even the editor of the Financial Times.

In addition, the investigators say they found evidence the Pegasus software had been installed on the phone of the fiancée of murdered Saudi journalist Jamal Khashoggi. NSO denied this to the Washington Post, while also insisting that it simply licenses its software to clients and “has no insight” into those clients’ specific intelligence activities.

The Washington Post reports that Amnesty’s Security Lab found evidence of Pegasus attacks on 37 of the 67 smartphones from the list that it tested. But beyond that, “for the remaining 30, the tests were inconclusive, in several cases because the phones had been replaced. Fifteen of the phones were Android devices, none of which showed evidence of successful infection. However, unlike iPhones, Androids do not log the kinds of information required for Amnesty’s detective work.”

Familiar privacy measures like strong passwords and encryption offer little help against Pegasus, which can attack phones without any warning to users. It can read anything on a device that a user can, while also stealing photos, recordings, location records, communications, passwords, call logs and social media posts. Spyware also can activate cameras and microphones for real-time surveillance.

Edward Snowden Calls For Spyware Trade Ban Amid Pegasus Revelations

Snowden, who in 2013 blew the whistle on the secret mass surveillance programs of the US National Security Agency, described for-profit malware developers as “an industry that should not exist.” He made the comments in an interview with the Guardian after the first revelations from the Pegasus project, a journalistic investigation by a consortium of international media organizations into the NSO Group and its clients. […]

Snowden said the consortium’s findings illustrated how commercial malware had made it possible for repressive regimes to place vastly more people under the most invasive types of surveillance. For traditional police operations to plant bugs or wiretap a suspect’s phone, law enforcement would need to “break into somebody’s house, or go to their car, or go to their office, and we’d like to think they’ll probably get a warrant,” he said. But commercial spyware made it cost-efficient for targeted surveillance against vastly more people. “If they can do the same thing from a distance, with little cost and no risk, they begin to do it all the time, against everyone who’s even marginally of interest,” he said. “If you don’t do anything to stop the sale of this technology, it’s not just going to be 50,000 targets. It’s going to be 50 million targets, and it’s going to happen much more quickly than any of us expect.”

Part of the problem arose from the fact that different people’s mobile phones were functionally identical to one another, he said. “When we’re talking about something like an iPhone, they’re all running the same software around the world. So if they find a way to hack one iPhone, they’ve found a way to hack all of them.” He compared companies commercializing vulnerabilities in widely used mobile phone models to an industry of “infectioneers” deliberately trying to develop new strains of disease. “It’s like an industry where the only thing they did was create custom variants of Covid to dodge vaccines,” he said. “Their only products are infection vectors. They’re not security products. They’re not providing any kind of protection, any kind of prophylactic. They don’t make vaccines — the only thing they sell is the virus.”

Snowden said commercial malware such as Pegasus was so powerful that ordinary people could in effect do nothing to stop it. Asked how people could protect themselves, he said: “What can people do to protect themselves from nuclear weapons? There are certain industries, certain sectors, from which there is no protection, and that’s why we try to limit the proliferation of these technologies. We don’t allow a commercial market in nuclear weapons.”

He said the only viable solution to the threat of commercial malware was an international moratorium on its sale. “What the Pegasus project reveals is the NSO Group is really representative of a new malware market, where this is a for-profit business,” he said. “The only reason NSO is doing this is not to save the world, it’s to make money.”

He said a global ban on the trade in infection vectors would prevent commercial abuse of vulnerabilities in mobile phones, while still allowing researchers to identify and fix them. “The solution here for ordinary people is to work collectively. This is not a problem that we want to try and solve individually, because it’s you versus a billion dollar company,” he said. “If you want to protect yourself you have to change the game, and the way we do that is by ending this trade.”

Facebook is Ditching Plans To Make an Interface That Reads the Brain

The spring of 2017 may be remembered as the coming-out party for Big Tech’s campaign to get inside your head. That was when news broke of Elon Musk’s new brain-interface company, Neuralink, which is working on how to stitch thousands of electrodes into people’s brains. Days later, Facebook joined the quest when it announced that its secretive skunkworks, named Building 8, was attempting to build a headset or headband that would allow people to send text messages by thinking — tapping them out at 100 words per minute. The company’s goal was a hands-free interface anyone could use in virtual reality. “What if you could type directly from your brain?” asked Regina Dugan, a former DARPA officer who was then head of the Building 8 hardware division. “It sounds impossible, but it’s closer than you realize.”

Now the answer is in — and it’s not close at all. Four years after announcing a “crazy amazing” project to build a “silent speech” interface using optical technology to read thoughts, Facebook is shelving the project, saying consumer brain-reading still remains very far off. In a blog post, Facebook said it is discontinuing the project and will instead focus on an experimental wrist controller for virtual reality that reads muscle signals in the arm. “While we still believe in the long-term potential of head-mounted optical [brain-computer interface] technologies, we’ve decided to focus our immediate efforts on a different neural interface approach that has a nearer-term path to market,” the company said. Facebook’s brain-typing project had led it into uncharted territory — including funding brain surgeries at a California hospital and building prototype helmets that could shoot light through the skull — and into tough debates around whether tech companies should access private brain information. Ultimately, though, the company appears to have decided the research simply won’t lead to a product soon enough.

Man Wrongfully Arrested By Facial Recognition Tells Congress His Story

Michigan resident Robert Williams testified about being wrongfully arrested by Detroit Police in an effort to urge Congress to pass legislation against the use of facial recognition technology. Williams’ testimony was part of a hearing held by the House of Representatives’ subcommittee on crime, terrorism, and homeland security, which dealt with how law enforcement uses the highly controversial surveillance technology. Congress recently introduced the Facial Recognition and Biometric Technology Moratorium, which would indefinitely ban its use by law enforcement. Williams was wrongfully arrested in 2020 for felony larceny after the Detroit Police Department’s facial recognition software misidentified him using a grainy image from surveillance footage. He was then picked from a photo lineup by the store security guard, who wasn’t actually present for the incident. According to his testimony, Williams was detained for thirty hours and was not given any food or water. […]

Research has repeatedly shown that facial recognition technology is fundamentally biased against women and people of color, leading to errors like this. Even when the technology works properly, privacy advocates argue, facial recognition systems disproportionately target communities of color, creating further pretext for police intervention. […] “Large scale adoption of this technology would inject further inequity into a system at a time when we should be moving to make the criminal justice system more equitable,” Representative Sheila Jackson Lee (TX-18) said during the hearing. The subcommittee also referenced a recent study from the U.S. Government Accountability Office that reported that 20 federal agencies used facial recognition software last year. Six federal agencies, including the FBI and the U.S. Postal Service, reported using it during the 2020 Black Lives Matter protests that followed the police murder of George Floyd.

Robert Williams is just one of many people impacted by this technology’s errors and biases. Williams is now represented by the ACLU and is suing the Detroit Police Department for damages and policy changes to prohibit the use of facial recognition technology. So far, the technology has been banned statewide in Vermont and Virginia, as well as in 20 cities across the US. “Mr. Williams deserved better from the law enforcement agencies entrusted to use a technology that we all know is less accurate when applied to citizens who look like him,” House Judiciary Committee Chairman Jerrold Nadler (D-NY) said in his testimony.

Handwriting Is Better Than Typing When Learning a New Language, Study Finds

Researchers tasked 42 adult volunteers with learning the Arabic alphabet from scratch: some through writing it out on paper, some through typing it out on a keyboard, and some through watching and responding to video instructions. Those in the handwriting group not only learned the unfamiliar letters more quickly, but they were also better able to apply their new knowledge in other areas — by using the letters to make new words and to recognize words they hadn’t seen before, for example. While writing, typing, and visual learning were effective at teaching participants to recognize Arabic letters — learners made very few mistakes after six exercise sessions — on average, the writing group needed fewer sessions to get to a good standard.

Researchers then tested the groups to see how the learning could be generalized. In every follow-up test, using skills they hadn’t been trained on, the writing group performed the best: naming letters, writing letters, spelling words, and reading words. The research shows that the benefits of teaching through handwriting go beyond better penmanship: There are also advantages in other areas of language learning. It seems as though the knowledge gets more firmly embedded through writing.

Amazon Wants To Monitor You In Your Sleep, For Your Benefit

The Federal Communications Commission on Friday granted Amazon approval to use a radar sensor to sense motion and “enable contactless sleep tracing functionalities.” Amazon on June 22 asked the FCC, which regulates airwave uses, for permission to market a device that uses radar. The technology captures movement in three dimensions, enabling a user to control its features through simple gestures and movements, the company said in a filing. The capability, according to Amazon, could help people “with mobility, speech, or tactile impairments,” and it could monitor sleep with a high degree of precision.

“The use of Radar Sensors in sleep tracking could improve awareness and management of sleep hygiene, which in turn could produce significant health benefits for many Americans,” Amazon said in its filing. “Radar Sensors will allow consumers to recognize potential sleep issues.” Amazon didn’t immediately respond to a request for comment. The company didn’t fully describe the device in its filing, but did say it would not be a mobile device. The FCC earlier granted similar permission to Google for radar to enable touchless control of Pixel smartphones, the agency said in its letter approving Amazon’s request.

Uber and Lyft Can’t Find Drivers Because Gig Work Sucks

You may have noticed recently that an Uber ride is more expensive than it used to be. As ride-hail companies Uber and Lyft hike prices to record heights during the COVID-19 pandemic, much commentary has settled on explaining this as a consequence of a “labor shortage” largely motivated by a lack of proper financial incentives. Drivers, the story goes, saw the new cash bonuses offered by companies to lure workers back as insufficient. Some, perhaps, decided they were not worth the risk of getting infected with COVID-19 or one of its budding variants, while other analyses suggested drivers were content with living on stimulus funds rather than money from driving. At the same time, the firms began curtailing subsidies that kept prices low enough to attract riders and work towards monopoly. Together, this has left us with a sudden and massive spike in ride-hail prices; Gridwise, a ride-hail driver assistance app, estimated that Uber has increased its prices by 79 percent since the second quarter of 2019.

While Uber and Lyft are reportedly thinking about offering new perks such as education, career, and expense programs, analysts admit these don’t strike at core problems with the gig economy that were driving workers away before COVID-19 hit and are making it difficult to attract them now. In conversations with Motherboard, former and current ride-hail drivers pointed to a major factor for not returning: how horrible it is to work for Uber and Lyft. For some workers, this realization came long before the pandemic reared its head, and for others, the crisis hammered it home. Motherboard has changed some drivers’ names or granted them anonymity out of their fear of retaliation.

“If I kept driving, something was going to break,” said Maurice, a former driver in New York who spent four years working for Uber and Lyft before the pandemic. “I already go nights without eating or sleeping. My back hurt, my joints hurt, my neck hurt, I felt like a donkey. Like a slave driving all the time.”

“I’ve been driving for six years. Uber has taken at least 10,000 pounds in commission from me each year! They take 20 percent of my earnings, then offer me 200 pounds,” Ramana Prai, a London-based Uber driver, told Motherboard. “I don’t understand how they can take 60,000 pounds from me, then offer nothing when I’m in need. How can I provide for my partner and two kids with this? My employer has let me down.”

“I woke up every day asking how long I could keep it up, I just didn’t feel like a person,” Yona, who worked for Lyft in California for the past six years until the pandemic, told Motherboard. “I got two kids, my mother, my sister, I couldn’t see them. And I was doing all this for them but I could barely support them, barely supported myself.”

“I was making even less than my sister and I was probably less safe too,” Yona’s sister, Destiny, told Motherboard. “She got out back in the spring, I hopped on and was coming back negative some days. I tried UberEats and DoorDash to see if that was any better, but stopped after a friend was almost robbed on a delivery. Okay, so the options are get covid or get robbed, then guess what: I’m doing none of them.”

Motherboard argues that the degrading working conditions, as well as the poor pay, “are structurally necessary for ride-hail companies. They were necessary to attract and retain customers with artificially low prices, to burn through drivers at high rates that frustrate labor organizing, and bolster the narrative of gig work as temporary, transient, and convenient. It’s no wonder, then, that drivers aren’t coming back.”

Chinese Facial Recognition Company With Silicon Valley Funding Lands On US Blacklist

DeepGlint, also known as Beijing Geling Shentong Information Technology Co., Ltd., is a facial recognition company with deep ties to Chinese police surveillance, and funding from US-based Sequoia Capital. Today the Commerce Department added it to its Entity List, which restricts US companies from doing business with listed firms without a special license. Sequoia did not immediately respond to a request for comment. DeepGlint co-founded a facial recognition lab in 2018 with Chinese authorities in Urumqi, the capital of Xinjiang, according to the South China Morning Post. It has also gained international bragging rights through the US National Institute of Standards and Technology’s (NIST) Face Recognition Vendor Test. DeepGlint claimed top accuracy in the test as of January 2021, giving it a potent marketing tool in the security and surveillance industry. While DeepGlint has been accepted for a public offering on Shanghai’s STAR stock exchange, the firm hasn’t seen the commercial success of other AI startups in the country, explained Jeffrey Ding in his ChinAI newsletter last month. Since the firm is so heavily invested in government work, it has to follow slow government procurement cycles and is unlikely to score huge infrastructure projects, Ding writes.

Volkswagen, BMW Fined $1 Billion For Colluding To Make Dirtier Cars

Volkswagen, Audi, Porsche, BMW, and Mercedes-Benz parent company Daimler spent years illegally colluding to slow the deployment of cleaner emissions technology, says the European Union, which is dishing out fines as a result. From a report:
The EU’s executive branch hit the Volkswagen Group (which owns Audi and Porsche) and BMW with a collective $1 billion fine on Thursday for their role in the scheme. Volkswagen Group must pay $595 million, while BMW will pay $442 million. Daimler, however, evaded an $861 million fine of its own because the automaker revealed the collusion to the regulators.

The scheme described by EU authorities is separate from the Volkswagen Group’s massive Dieselgate scandal, in which the company installed software on its diesel vehicles that helped fool environmental regulators into believing they were compliant, when in reality, they were polluting far more than the legal limit. Dieselgate ultimately led to nearly $40 billion in fines, buybacks, and legal fees for the Volkswagen Group. Daimler also installed software on some of its diesel vehicles to cheat emissions tests and has paid billions of dollars in fines. BMW was careful to point out Thursday that, unlike the other companies it was caught colluding with, it had not cheated emissions testing.

Despite the Hype, iPhone Security No Match For NSO Spyware

The text delivered last month to the iPhone 11 of Claude Mangin, the French wife of a political activist jailed in Morocco, made no sound. It produced no image. It offered no warning of any kind as an iMessage from somebody she didn’t know delivered malware directly onto her phone — and past Apple’s security systems. Once inside, the spyware, produced by Israel’s NSO Group and licensed to one of its government clients, went to work, according to a forensic examination of her device by Amnesty International’s Security Lab. It found that between October and June, her phone was hacked multiple times with Pegasus, NSO’s signature surveillance tool, during a time when she was in France. The examination was unable to reveal what was collected. But the potential was vast: Pegasus can collect emails, call records, social media posts, user passwords, contact lists, pictures, videos, sound recordings and browsing histories, according to security researchers and NSO marketing materials.

The spyware can activate cameras or microphones to capture fresh images and recordings. It can listen to calls and voice mails. It can collect location logs of where a user has been and also determine where that user is now, along with data indicating whether the person is stationary or, if moving, in which direction. And all of this can happen without a user even touching her phone or knowing she has received a mysterious message from an unfamiliar person — in Mangin’s case, a Gmail user going by the name “linakeller2203.” These kinds of “zero-click” attacks, as they are called within the surveillance industry, can work on even the newest generations of iPhones, after years of effort in which Apple attempted to close the door against unauthorized surveillance — and built marketing campaigns on assertions that it offers better privacy and security than rivals.

[…] Researchers have documented iPhone infections with Pegasus dozens of times in recent years, challenging Apple’s reputation for superior security when compared with its leading rivals, which run Android operating systems by Google. The months-long investigation by The Post and its partners found more evidence to fuel that debate. Amnesty’s Security Lab examined 67 smartphones whose numbers were on the Forbidden Stories list and found forensic evidence of Pegasus infections or attempts at infections in 37. Of those, 34 were iPhones — 23 that showed signs of a successful Pegasus infection and 11 that showed signs of attempted infection.

SA Govt Trials Home Quarantine App With Facial Recognition, GPS Tracking

The South Australian government is set to trial a smartphone app that uses facial recognition and geolocation to confirm a person’s location during home quarantine.

The government will trial the app, known as Home Quarantine SA, this week with 50 residents who are required to undertake 14 days of quarantine on return from interstate locations.

The app will allow trial participants to “confirm their location with regular check-ins” through a live face check-in feature that uses facial recognition and location verification.

It gives individuals 15 minutes to respond to the random location verification requests at their registered location.
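To make the mechanics concrete, here is a minimal sketch of the check-in logic described above: a prompt at a random time, a 15-minute response window, and a pass only when the submitted selfie matches the enrolled face and the phone is at the registered address. The similarity score and distance are assumed to come from an external face matcher and geolocation routine, and the thresholds are illustrative guesses, not the app’s actual values.

```python
# Sketch of the home-quarantine check-in flow: random prompts, a 15-minute
# response window, and combined face-plus-location verification. The
# thresholds and upstream face/location scoring are assumptions.
import random

RESPONSE_WINDOW_S = 15 * 60  # 15 minutes to respond, per the article
GEOFENCE_RADIUS_M = 100      # illustrative tolerance around the registered address
FACE_THRESHOLD = 0.9         # illustrative similarity cutoff

def schedule_next_prompt(now_s: float) -> float:
    """Pick a random time in the next 1 to 6 hours for the next check-in."""
    return now_s + random.uniform(1, 6) * 3600

def verify_checkin(prompt_sent_s: float, responded_s: float,
                   face_similarity: float, distance_to_home_m: float) -> bool:
    """A check-in passes only if it is on time, the face matches the
    enrolled selfie, and the phone is inside the home geofence."""
    on_time = (responded_s - prompt_sent_s) <= RESPONSE_WINDOW_S
    face_ok = face_similarity >= FACE_THRESHOLD
    at_home = distance_to_home_m <= GEOFENCE_RADIUS_M
    return on_time and face_ok and at_home

# Example: responded after 4 minutes, strong face match, 20 m from home.
assert verify_checkin(0.0, 240.0, face_similarity=0.95, distance_to_home_m=20.0)
```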

US Prisons Mull AI To Analyze Inmate Phone Calls

A key House of Representatives panel has pressed for a report to study the use of artificial intelligence to analyze prisoners’ phone calls. “But prisoners’ advocates and inmates’ families say relying on AI to interpret communications opens up the system to mistakes, misunderstandings and racial bias,” reports Reuters. From the report:
The call for the Department of Justice (DOJ) to further explore the technology, to help prevent violent crime and suicide, accompanies an $81 billion-plus spending bill to fund the DOJ and other federal agencies in 2022 that the Appropriations Committee passed last month. The technology can automatically transcribe inmates’ phone calls, analyzing their patterns of communication and flagging certain words or phrases, including slang, that officials pre-program into the system. A House Democratic aide said in an emailed statement they were encouraging the DOJ “to engage with stakeholders in the course of examining the feasibility of utilizing such a system.”
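As a rough illustration of the flagging step described above, assuming the speech-to-text transcription has already happened, a scanner for pre-programmed words and phrases might look like the sketch below; the watchlist terms are invented for illustration. Naive matching like this also hints at the critics’ concern: slang shifts meaning with context, and a literal match says nothing about intent.

```python
# Illustrative keyword-flagging pass over a call transcript. The watchlist
# terms are invented; real systems layer speech-to-text and pattern
# analysis on top, which are out of scope here.
import re

WATCHLIST = ["burner", "green light", "drop the package"]  # hypothetical terms

def flag_transcript(transcript: str, watchlist=WATCHLIST) -> list[dict]:
    """Return one record per watchlist hit, with surrounding context
    so a reviewer can judge the match."""
    hits = []
    for phrase in watchlist:
        for m in re.finditer(re.escape(phrase), transcript, re.IGNORECASE):
            hits.append({
                "phrase": phrase,
                "context": transcript[max(0, m.start() - 30):m.end() + 30],
            })
    return hits

print(flag_transcript("tell him it's a green light for tomorrow"))
```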

Several state and local facilities across the country have already started using the tech, including in Alabama, Georgia and New York. The House panel wants the DOJ to look into potentially leveraging the technology for federal use and to identify gaps or shortcomings in the information it produces. Privacy groups say the technology could amplify racial bias in the justice system and unfairly subject prisoners to unaccountable artificial intelligence. Proponents dispute such criticisms, saying the tech is a vital time-saving tool for law enforcement and does not target specific groups.

Uber Asked Contractor To Allow Video Surveillance In Employee Homes, Bedrooms

Teleperformance, one of the world’s largest call center companies, is reportedly requiring some employees to consent to video monitoring in their homes. Employees in Colombia told NBC News that their new contract granted the company the right to use AI-powered cameras to observe and record their workspaces. The contract also requires employees to share biometric data like fingerprints and photos of themselves, and workers have to agree to share data and images that may include children under 18.

Teleperformance employs over 380,000 people in 83 countries to provide call center services for a range of companies, including Amazon, Apple, and Uber. A company spokesperson told NBC that it is “constantly looking for ways to enhance the Teleperformance Colombia experience for both our employees and our customers, with privacy and respect as key factors in everything we do.” Amazon and Apple said that they did not ask Teleperformance for this extra monitoring, and an Apple spokesperson said the company forbids video monitoring of employees by suppliers. A recent Apple audit reportedly found Teleperformance in compliance with this requirement. But Uber apparently requested the ability to monitor some workers. Uber said it wouldn’t observe the entire workforce, but the company did not specify which employees would be subject to the new policies. The ride-sharing company asked for the monitoring of Teleperformance’s remote employees because call center staff have access to customers’ credit cards and trip details, an Uber spokesperson told NBC News.

Earth is Warming Faster Than Previously Thought, and the Window is Closing To Avoid Catastrophic Outcomes

As the world battles historic droughts, landscape-altering wildfires and deadly floods, a landmark report from global scientists says the window is rapidly closing to cut our reliance on fossil fuels and avoid catastrophic changes that would transform life as we know it. The state-of-the-science report from the United Nations’ Intergovernmental Panel on Climate Change says the world has rapidly warmed to 1.1 degrees Celsius above pre-industrial levels, and is now careening toward 1.5 degrees — a critical threshold that world leaders agreed warming should remain below to avoid worsening impacts.

Only by making deep cuts to greenhouse gas emissions, while also removing carbon dioxide from the atmosphere, can we halt the precipitous trend. “Bottom line is that we have zero years left to avoid dangerous climate change, because it’s here,” Michael E. Mann, a lead author of the IPCC’s 2001 report, told CNN. Unlike previous assessments, Monday’s report concludes it is “unequivocal” that humans have caused the climate crisis and confirms that “widespread and rapid changes” have already occurred, some of them irreversibly.

That is due in part to the breakneck pace at which the planet has been recently warming, faster than scientists have previously observed. Since 2018, when the panel published a special report on the significance of 1.5 degrees of warming, greenhouse gas emissions have continued mostly unabated and have pushed global temperatures higher. Even under the IPCC’s most optimistic scenario, in which the world’s emissions begin to drop sharply today and are reduced to net zero by 2050, global temperature will still peak above the 1.5-degree threshold before falling. In a statement, UN Secretary-General António Guterres called the report “a code red for humanity,” and noted the 1.5-degree threshold is “perilously close.” “The only way to prevent exceeding this threshold is by urgently stepping up our efforts, and pursuing the most ambitious path,” Guterres said.

Leaked Document Says Google Fired Dozens of Employees for Data Misuse

Google has fired dozens of employees between 2018 and 2020 for abusing their access to the company’s tools or data, with some workers potentially facing allegations of accessing Google user or employee data, according to an internal Google document obtained by Motherboard. From a report:
The document provides concrete figures on an often delicate part of a tech giant’s operations: investigations into how a company’s own employees leverage their position inside the company to steal, leak, or abuse data they may have access to. Insider abuse is a problem across the tech industry. Motherboard previously uncovered instances at Facebook, Snapchat, and MySpace, with employees in some cases using their access to stalk or otherwise spy on users.

The document says that Google terminated 36 employees in 2020 for security-related issues. Eighty-six percent of all security-related allegations against employees included mishandling of confidential information, such as the transfer of internal-only information to outside parties. Ten percent of all allegations in 2020 concerned misuse of systems, which can include accessing user or employee data in violation of Google’s own policies, helping others to access that data, or modifying or deleting user or employee data, according to the document. In 2019, that figure was 13 percent of all security allegations.

Facebook Used Facial Recognition Without Consent 200K Times, Says Watchdog

Another [$22,000] penalty was issued for illegally collecting social security numbers, not issuing notifications regarding personal information management changes, and other missteps. Facebook has been ordered to destroy facial information collected without consent or obtain consent, and was prohibited from processing identity numbers without legal basis. It was also ordered to destroy collected data and disclose contents related to foreign migration of personal information. Zuck’s brainchild was then told to make it easier for users to check legal notices regarding personal information. The fine is the second-largest ever issued by the organization, the largest ever also going to Facebook. In November 2020 the Social Network was fined [$5.7 million] for passing on personal data to other operators without user permission.

Netflix’s fine was a paltry [$188,000], with that sum imposed for collecting data from five million people without their consent, plus another [$2,700] for not disclosing international transfer of the data. Google got off the easiest, with just a “recommendation” to improve its personal data handling processes and make legal notices more precise. South Korea’s Personal Information Protection Commission (PIPC) said it is not done investigating methods of collecting personal information from overseas businesses and will continue with a legal review.

Researchers Create ‘Master Faces’ To Bypass Facial Recognition

Researchers have demonstrated a method to create “master faces,” computer generated faces that act like master keys for facial recognition systems, and can impersonate several identities with what the researchers claim is a high probability of success. In their paper (PDF), researchers at the Blavatnik School of Computer Science and the School of Electrical Engineering in Tel Aviv detail how they successfully created nine “master key” faces that are able to impersonate almost half the faces in a test dataset across three leading face recognition systems. The researchers say their results show these master faces can successfully impersonate over 40 percent of the population in these systems without any additional information or data of the person they are identifying.

The researchers tested their methods against three deep face recognition systems — Dlib, FaceNet, and SphereFace. Lead author Ron Shmelkin told Motherboard that they used these systems because they are capable of recognizing “high-level semantic features” of the faces that are more sophisticated than just skin color or lighting effects. The researchers used a StyleGAN to generate the faces and then used an evolutionary algorithm and a neural network to optimize and predict their success. The evolutionary strategy creates iterations, or generations, of candidates with varying success rates. The researchers then trained a neural network to classify the best candidates as the most promising ones, teaching it to predict a candidate’s success and, in turn, to direct the algorithm toward generating better candidates with a higher probability of passing. The researchers even predict that their master faces could be animated using deepfake technology to bypass liveness detection, which is used to determine whether a biometric sample is real or fake.
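A structural sketch of that search loop might look like the following; the fitness function here is a runnable placeholder standing in for the real pipeline (render the latent vector with StyleGAN, then count how many dataset identities a face matcher accepts), and the surrogate success-predicting network is only noted in a comment, so this shows the evolutionary scaffolding rather than the paper’s actual implementation.

```python
# Evolutionary search over a generator's latent space, in the spirit of the
# method described above. coverage() is a placeholder for the real fitness
# (StyleGAN render + fraction of dataset identities matched).
import numpy as np

LATENT_DIM = 512           # StyleGAN latent vectors are 512-dimensional
POP, GENERATIONS = 64, 50

rng = np.random.default_rng(0)
_TARGET = rng.standard_normal(LATENT_DIM)  # stand-in objective direction

def coverage(latent: np.ndarray) -> float:
    """Placeholder fitness. In the paper this would be the fraction of
    dataset identities the generated face successfully impersonates."""
    return float(latent @ _TARGET /
                 (np.linalg.norm(latent) * np.linalg.norm(_TARGET) + 1e-9))

def evolve() -> np.ndarray:
    pop = rng.standard_normal((POP, LATENT_DIM))     # initial candidates
    for _ in range(GENERATIONS):
        scores = np.array([coverage(z) for z in pop])
        elite = pop[np.argsort(scores)[-POP // 4:]]  # keep the top quarter
        # Mutate elites to form the next generation. In the paper, a neural
        # network trained on past generations pre-screens mutants so that
        # only candidates predicted to score well are actually evaluated.
        parents = elite[rng.integers(len(elite), size=POP)]
        pop = parents + 0.1 * rng.standard_normal((POP, LATENT_DIM))
    return max(pop, key=coverage)

print(coverage(evolve()))  # fitness of the best "master" latent found
```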

Homeless Encampment Grows On Apple Property In Silicon Valley

A large homeless encampment is growing on the site Apple earmarked for its North San Jose campus, two years after Apple made waves with a $2.5 billion pledge to combat the Bay Area’s affordable housing and homelessness crisis. What started as a few RVs parked on the side of Component Drive has grown over the past year into a sprawling camp of dozens of people, a maze of broken-down vehicles and a massive amount of trash scattered across the vacant, Apple-owned property. People with nowhere else to go live there in tents, RVs and wooden structures they built themselves. At least two children call the camp home.

Apple is trying to figure out what to do, but it’s a tough situation. Clearing the camp likely will be difficult both logistically — it’s more challenging to remove structures and vehicles that don’t run than tents — and ethically — there are few places for the displaced residents to go. Apple is “in talks with the city on a solution,” company spokeswoman Chloe Sanchez Sweet wrote in an email, without providing additional details.

The vacant land off Component Drive figured into Apple’s $2.5 billion commitment. Apple originally bought the land in a push to acquire real estate in North San Jose for a new tech campus, but so far, the company hasn’t done much to develop it. In 2019, the tech company promised to make $300 million of land it owns in San Jose available for new affordable housing — including a portion of the Component Drive property. But it’s unclear when anything might be built.

Police In At Least 24 Countries Have Used Clearview AI. Find Out Which Ones Here.

Law enforcement agencies and government organizations from 24 countries outside the United States used a controversial facial recognition technology called Clearview AI, according to internal company data reviewed by BuzzFeed News.

That data, which runs up until February 2020, shows that police departments, prosecutors’ offices, universities, and interior ministries from around the world ran nearly 14,000 searches with Clearview AI’s software. At many law enforcement agencies from Canada to Finland, officers used the software without their higher-ups’ knowledge or permission. After receiving questions from BuzzFeed News, some organizations admitted that the technology had been used without leadership oversight.

Google Says Staff Have No Right to Protest Its Choice of Clients

Google employees have no legal right to protest the company’s choice of clients, the internet giant told a judge weighing the U.S. government’s allegations that its firings of activists violated the National Labor Relations Act.

“Even if Google had, for the sake of argument, terminated the employees for their protest activities — for protesting their choice of customers — this would not violate the Act,” Google’s attorney Al Latham said in his opening statement Tuesday at a labor board trial.

National Labor Relations Board prosecutors have accused the Alphabet Inc. unit of violating federal law by illegally firing five employees for their activism. Three of those workers’ claims had originally been dismissed under President Donald Trump, because agency prosecutors concluded that their opposition to the company collaborating with immigration enforcement wasn’t legally protected, according to their lawyer. But that decision was reversed after President Joe Biden fired and replaced the labor board’s general counsel.

Google has been roiled over the past four years by a wave of activism by employees challenging management over issues including the treatment of sub-contracted staff, the handling of sexual harassment, and a contract with the U.S. Customs and Border Protection agency; some of the fired workers accessed internal information about that contract and circulated a petition against it.

Google has denied wrongdoing, saying in a Monday statement that it encourages “open discussion and debate” but terminated staff in response to violations of its data security policies. “Google terminated these employees not because of their protest as such, but because in the pursuit of their protest, they accessed highly confidential information that they had no right to access,” its attorney told the judge Tuesday.