Academics warn of facial recognition technology’s ‘chilling effects’ and potential social harm

Concerns about the potential use of facial recognition technology (FRT) by An Garda Síochána have been raised by a number of specialist academics who warned of its “chilling effects” and potential social harms.

An Oireachtas committee hearing expert insights on the technology was also told of likely legal challenges should its use be adopted in the context of current draft legislation, and of the importance of judicial, rather than police, oversight.

“If Ireland goes ahead with this technology it’s just a matter of time before we become a headline,” said Dr Abeba Birhane of the School of Computer Science and Statistics at Trinity College Dublin, who warned of its tendency to err along minority group lines, a common criticism of FRT systems.

Dr Daragh Murray, senior lecturer at the School of Law in Queen Mary University of London, sought to highlight the potential “chilling effects”.

“Facial recognition represents a step-change in police surveillance capability,” he said, telling the committee he advocated for a moratorium on its use.

“Both live and retrospective facial recognition make possible the ability to monitor, track and profile large amounts of the population with significant private life implications.”

The extent of proposed FRT use by An Garda Síochána was played down by its chief information officer, Andrew O’Sullivan, who told the committee it would be retrospective, with human involvement rather than autonomous, machine-led results in real time.

“We have absolutely no intention of doing real time facial recognition; we also have no time at all for autonomous machine decision making,” he said. “Every decision [during investigations] will be made by a person.”


UK Government Seeks Expanded Use of AI-based Facial Recognition By Police

In a document released on Wednesday, the government outlined its ambitions to potentially deploy new biometric systems nationally over the next 12 to 18 months. The move comes after privacy campaigners and independent academics criticised the technology for being inaccurate and biased, particularly against darker-skinned people.

MPs have previously called for a moratorium on its use on the general population until clear laws are established by parliament. The government is calling for submissions from companies for technologies that “can resolve identity using facial features and landmarks,” including for live facial recognition which involves screening the general public for specific individuals on police watch lists.

In particular, the Home Office is highlighting its interest in novel artificial intelligence technologies that could process facial data efficiently to identify individuals, and software that could be integrated with existing technologies deployed by the department and with CCTV cameras. Facial recognition software has been used by South Wales Police and London’s Metropolitan Police over the past five years across multiple trials in public spaces including shopping centres, during events such as the Notting Hill Carnival and, more recently, during the coronation.


A Face Recognition Site Crawled the Web for Dead People’s Photos

PimEyes appears to have scraped a major ancestry website for pics, without permission. Experts fear the images could be used to identify living relatives.


Surveillance Tech Didn’t Stop the Uvalde Massacre

The Uvalde Consolidated Independent School District, which includes Robb Elementary, followed this conventional wisdom and embraced modern security solutions at its schools. Indeed, the district had doubled its security budget over the past several years to invest in a variety of recommended precautions.

According to UCISD’s security page, the district employed a safety management system from security vendor Raptor Technologies, designed to monitor school visitors and screen for dangerous individuals. It also used a social media monitoring solution, Social Sentinel, that sifted through children’s online lives to scan for signs of violent or suicidal ideation. Students could download an anti-bullying app (the STOP!T app) to report abusive peers, and an online portal at ucisd.net allowed parents and community members to submit reports of troubling behavior to administrators for further investigation. As has been noted, UCISD also had its own police force, developed significant ties to the local police department, and had an emergency response plan. It even deployed “Threat Assessment Teams” that were scheduled to meet regularly to “identify, evaluate, classify and address threats or potential threats to school security.”

And yet, none of the new security measures seemed to matter much when a disturbed young man brought a legally purchased weapon to Robb and committed the deadliest school shooting in the state’s history. The perpetrator wasn’t a student and therefore couldn’t be monitored by the district’s security systems.

Trawling through students’ online lives to look for signs of danger is now routine procedure in many districts. In fact, legislators have discussed mandating such surveillance features for schools across the country. UCISD employed one such company, but Gov. Abbott said Wednesday that “there was no meaningful forewarning of this crime.” The shooter had threatened the attack via Facebook Messenger half an hour before it occurred, but the messages were private and would therefore have been invisible to outside observers.

Facial recognition is another technology that has been offered to schools as a basic safety mechanism. The number of schools that have adopted facial recognition solutions has risen precipitously in recent years (Clearview AI announced this week that it has set its sights on cracking into the market). However, despite their growing popularity, there is little evidence that these security apparatuses actually do anything to stop school shootings. Even supporters of facial recognition admit that the systems probably won’t do much once a shooter’s on school property.

“Whether it’s facial recognition, monitoring software on school devices, cameras—all these types of surveillance have become extremely ubiquitous,” said Jason Kelley, digital strategist with the Electronic Frontier Foundation, in an interview with Gizmodo. “The companies that sell these tools are trying to do something positive—they’re trying to minimize tragedy,” he said. Yet not only can these products ultimately be ineffective, they can also end up having negative side effects on the children they’re meant to protect, Kelley offered. The intrusiveness of the tools is such that students may grow up feeling as if they have to be surveilled to be safe—even if the surveillance isn’t actually keeping them safe.

Some studies suggest that what surveillance actually provides is punishment rather than protection. The cameras and software can turn schools into little panopticons, where student behavior is constantly analyzed and assessed, and where minor infractions can be spotted and disciplined.


10 US Government Agencies Plan Expanded Use of Facial Recognition

The Washington Post reports that the U.S. government “plans to expand its use of facial recognition to pursue criminals and scan for threats, an internal survey has found, even as concerns grow about the technology’s potential for contributing to improper surveillance and false arrests.”
Ten federal agencies — the departments of Agriculture, Commerce, Defense, Homeland Security, Health and Human Services, Interior, Justice, State, Treasury and Veterans Affairs — told the Government Accountability Office they intend to grow their facial recognition capabilities by 2023, the GAO said in a report posted to its website Tuesday. Most of the agencies use face-scanning technology so employees can unlock their phones and laptops or access buildings, though a growing number said they are using the software to track people and investigate crime. The Department of Agriculture, for instance, said it wants to use it to monitor live surveillance feeds at its facilities and send an alert if it spots any faces also found on a watch list…
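
At its core, the watch-list alerting the USDA describes is nearest-neighbor matching on face embeddings. As a rough, hedged sketch only (no agency’s actual stack is public), a minimal version using the open source face_recognition library might look like the following; the file paths, tolerance, and alert stub are all illustrative assumptions:

```python
# Hypothetical watch-list matching loop: compare each face seen on a camera
# feed against precomputed watch-list encodings and raise an alert on a hit.
import cv2
import face_recognition

# Precompute one encoding per watch-list entry (paths are placeholders).
WATCHLIST = {
    "person_of_interest": face_recognition.face_encodings(
        face_recognition.load_image_file("watchlist/person_of_interest.jpg")
    )[0],
}
TOLERANCE = 0.6  # library default; lower means stricter matching

video = cv2.VideoCapture(0)  # live camera feed
while True:
    ok, frame = video.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    for encoding in face_recognition.face_encodings(rgb):
        hits = face_recognition.compare_faces(
            list(WATCHLIST.values()), encoding, tolerance=TOLERANCE
        )
        for label, hit in zip(WATCHLIST, hits):
            if hit:
                print(f"ALERT: possible watch-list match: {label}")
```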

The GAO said in June that 20 federal agencies have used either internally developed or privately run facial recognition software, even though 13 of those agencies said they did not “have awareness” of which private systems they used and had therefore “not fully assessed the potential risks … to privacy and accuracy.” In the current report, the GAO said several agencies, including the Justice Department, the Air Force and Immigration and Customs Enforcement, reported that they had used facial recognition software from Clearview AI, a firm that has faced lawsuits from privacy groups and legal demands from Google and Facebook after it copied billions of facial images from social media without their approval… Many federal agencies said they used the software by requesting that officials in state and local governments run searches on their own software and report the results. Many searches were routed through a nationwide network of “fusion centers,” which local police and federal investigators use to share information on potential threats or terrorist attacks…

U.S. Customs and Border Protection officials, who have called the technology “the way of the future,” said earlier this month that they had run facial recognition scans on more than 88 million travelers at airports, cruise ports and border crossings. The systems, the officials said, have detected 850 impostors since 2018 — or about 1 in every 103,000 faces scanned.


Man Wrongfully Arrested By Facial Recognition Tells Congress His Story

Michigan resident Robert Williams testified about being wrongfully arrested by Detroit Police in an effort to urge Congress to pass legislation against the use of facial recognition technology. Williams’ testimony was part of a hearing held by the House of Representatives’ subcommittee on crime, terrorism, and homeland security, which dealt with how law enforcement uses the highly controversial surveillance technology. Congress recently introduced the Facial Recognition and Biometric Technology Moratorium Act, which would indefinitely ban its use by law enforcement. Williams was wrongfully arrested in 2020 for felony larceny after the Detroit Police Department’s facial recognition software misidentified him from a grainy surveillance image. He was then picked from a photo lineup by the store security guard, who wasn’t actually present for the incident. According to his testimony, Williams was detained for thirty hours and was not given any food or water. […]

Research has repeatedly shown that facial recognition technology is fundamentally biased against women and people of color, leading to errors like this. Even when working properly, privacy advocates have argued facial recognition systems disproportionately target communities of color, creating further pretext for police intervention. […] “Large scale adoption of this technology would inject further inequity into a system at a time when we should be moving to make the criminal justice system more equitable,” Representative Sheila Jackson Lee (TX-18) said during the hearing. The subcommittee also referenced a recent study from the U.S. Government Accountability Office that reported that 20 federal agencies used facial recognition software last year. Six federal agencies, including the FBI and the U.S. Postal Service, reported using it during the 2020 Black Lives Matter protests that followed the police murder of George Floyd.

Robert Williams is just one of many people impacted by this technology’s errors and biases. Williams is now represented by the ACLU and is suing the Detroit Police Department for damages and policy changes to prohibit the use of facial recognition technology. So far, the technology has been banned statewide in Vermont and Virginia, as well as in 20 cities across the US. “Mr. Williams deserved better from the law enforcement agencies entrusted to use a technology that we all know is less accurate when applied to citizens who look like him,” House Judiciary Committee Chairman Jerrold Nadler (D-NY) said in his testimony.


Chinese Facial Recognition Company With Silicon Valley Funding

DeepGlint, also known as Beijing Geling Shentong Information Technology Co., Ltd., is a facial recognition company with deep ties to Chinese police surveillance, and funding from US-based Sequoia Capital. Today the Commerce Department added it to its Entity List, which restricts US companies from doing business with listed firms without a special license. Sequoia did not immediately respond to a request for comment. DeepGlint co-founded a facial recognition lab in 2018 with Chinese authorities in Urumqi, the capital of Xinjiang, according to the South China Morning Post. It has also gained international bragging rights through the US National Institute of Standards and Technology’s (NIST) Face Recognition Vendor Test. DeepGlint claimed top accuracy in the test as of January 2021, giving it a potent marketing tool in the security and surveillance industry. While DeepGlint has been accepted for a public offering on Shanghai’s STAR stock exchange, the firm hasn’t seen the commercial success of other AI startups in the country, explained Jeffrey Ding in his ChinAI newsletter last month. Since the firm is so heavily invested in government work, it has to follow slow government procurement cycles and is unlikely to score huge infrastructure projects, Ding writes.


SA govt trials home quarantine app with facial recognition, GPS tracking

The South Australian government is set to trial a smartphone app that uses facial recognition and geolocation to confirm a person’s location during home quarantine.

The government will trial the app known as Home Quarantine SA with 50 residents who are required to undertake 14 days of quarantine on return from interstate locations this week.

The app will allow trial participants to “confirm their location with regular check-ins” through a live face check-in feature that uses facial recognition and location verification.

It gives individuals 15 minutes to respond to the random location verification requests at their registered location.
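
The article doesn’t describe how the verification works internally, but the flow it outlines (a random prompt, a 15-minute response window, a face check plus location verification) implies server-side logic roughly like the sketch below. This is an assumption-laden illustration, not Home Quarantine SA’s actual implementation; the function names, geofence radius, and match threshold are invented for the example:

```python
# Hypothetical check-in verification: the response must be on time, from the
# registered address, and carry a selfie that matches the enrolled face.
from datetime import timedelta
from math import asin, cos, radians, sin, sqrt

CHECKIN_WINDOW = timedelta(minutes=15)   # per the article
GEOFENCE_METRES = 100                    # assumed tolerance around the address

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in metres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6371000 * 2 * asin(sqrt(a))

def verify_checkin(sent_at, responded_at, home, reported, face_match_score):
    """Return True only if all three described checks pass."""
    on_time = responded_at - sent_at <= CHECKIN_WINDOW
    at_home = haversine_m(*home, *reported) <= GEOFENCE_METRES
    is_them = face_match_score >= 0.8    # assumed recogniser threshold
    return on_time and at_home and is_them
```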


Researchers Create ‘Master Faces’ To Bypass Facial Recognition

Researchers have demonstrated a method to create “master faces,” computer generated faces that act like master keys for facial recognition systems, and can impersonate several identities with what the researchers claim is a high probability of success. In their paper (PDF), researchers at the Blavatnik School of Computer Science and the School of Electrical Engineering in Tel Aviv detail how they successfully created nine “master key” faces that are able to impersonate almost half the faces in a test dataset when evaluated against three leading face recognition systems. The researchers say these master faces can successfully impersonate over 40 percent of the population in these systems without any additional information or data on the person being identified.

The researchers tested their method against three deep face recognition systems — Dlib, FaceNet, and SphereFace. Lead author Ron Shmelkin told Motherboard that they used these systems because they are capable of recognizing “high-level semantic features” of faces that are more sophisticated than just skin color or lighting effects. The researchers used a StyleGAN to generate the faces, then used an evolutionary algorithm and a neural network to optimize the candidates and predict their success. The evolutionary strategy creates iterations, or generations, of candidates with varying success rates, and the neural network is trained to classify the most promising ones; this lets it predict candidates’ success and, in turn, steer the algorithm toward generating candidates with a higher probability of passing. The researchers even predict that their master faces could be animated using deepfake technology to bypass liveness detection, which is used to determine whether a biometric sample is real or fake.
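
To make the idea concrete, here is a deliberately simplified sketch of latent-space evolution with a coverage fitness function. It is not the paper’s actual method (which couples StyleGAN with a far more sophisticated evolution strategy and the success-predicting network described above); generate_face, embed, and the dataset are runnable stand-ins:

```python
# Toy "master face" search: evolve a latent vector so the generated face
# matches as many dataset identities as possible. All models are stand-ins.
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 512   # StyleGAN-sized latent vector
EMB_DIM = 128      # face-embedding size
THRESHOLD = 0.6    # similarity above which the recognizer declares a match

def generate_face(latent):
    """Placeholder for the StyleGAN generator."""
    return latent

def embed(face):
    """Placeholder for a face recognition embedder (e.g. Dlib or FaceNet)."""
    return face[:EMB_DIM]

# Placeholder unit-norm embeddings standing in for the target face dataset.
dataset = rng.standard_normal((5000, EMB_DIM))
dataset /= np.linalg.norm(dataset, axis=1, keepdims=True)

def coverage(latent):
    """Fitness: fraction of dataset identities this candidate face matches."""
    emb = embed(generate_face(latent))
    emb = emb / np.linalg.norm(emb)
    return float((dataset @ emb > THRESHOLD).mean())

# Simple (1+16) evolution strategy over the latent space.
best = rng.standard_normal(LATENT_DIM)
for _ in range(200):
    children = best + 0.1 * rng.standard_normal((16, LATENT_DIM))
    scores = [coverage(child) for child in children]
    if max(scores) >= coverage(best):
        best = children[int(np.argmax(scores))]
```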


Police In At Least 24 Countries Have Used Clearview AI. Find Out Which Ones Here.

Law enforcement agencies and government organizations from 24 countries outside the United States used a controversial facial recognition technology called Clearview AI, according to internal company data reviewed by BuzzFeed News.

That data, which runs up until February 2020, shows that police departments, prosecutors’ offices, universities, and interior ministries from around the world ran nearly 14,000 searches with Clearview AI’s software. At many law enforcement agencies from Canada to Finland, officers used the software without their higher-ups’ knowledge or permission. After receiving questions from BuzzFeed News, some organizations admitted that the technology had been used without leadership oversight.


For Millions of Americans, Unemployment Benefits Require Facial Recognition Scanning

Millions of Americans “are being instructed to use ID.me, along with its facial recognition software, to get their unemployment benefits,” reports CNN. The software compares their photo ID with a selfie video they take on their phone with the company’s software — but some privacy advocates are concerned.

A rapidly growing number of U.S. states, including Colorado, California and New York, turned to ID.me in hopes of cutting down on a surge of fraudulent claims for state and federal benefits that cropped up during the pandemic alongside a tidal wave of authentic unemployment claims. As of this month, 27 states’ unemployment agencies had entered contracts with ID.me, according to the company, with 25 of them already using its technology. ID.me said it is in talks with seven more…

The company’s rapid advance at state unemployment agencies marks the latest chapter in the story of facial recognition software’s spread across the United States. It also highlights how this controversial technology gained a foothold during the pandemic and now appears destined to remain part of our lives for the foreseeable future…

Several ID.me users told CNN Business about problems they had verifying their identities with the company, which ranged from the facial recognition technology failing to recognize their face to waiting for hours to reach a human for a video chat after encountering problems with the technology. A number of people who claim to have had issues with ID.me have taken to social media to beg the company for help with verification, express their own concerns about its face-data collection or simply rant, often in response to ID.me’s own posts on Twitter… From ID.me’s perspective, its service is making it easier for a wide range of people to access essential government services, as it avoids the common practice of using information gleaned from data brokers and credit bureaus as a means of checking identities. The company said this lets it give a green light to those who don’t have a credit history, or may have changed their name, for instance — people who might otherwise have more trouble getting verified.

However, it doesn’t sit well with employee and privacy advocates and civil rights groups interviewed by CNN Business. They have concerns about the facial recognition technology itself and about the ID.me verification process’s reliance on access to a smartphone or computer and the internet, which may be out of reach for the people to whom unemployment dollars are most critical… ID.me said it does not sell user data — which includes biometric and related information such as selfies people upload, data related to facial analyses, and recordings of video chats users participate in with ID.me — but it does keep it. Biometric data, like the facial geometry produced from a user’s selfie, may be kept for years after a user closes their account… In March, ID.me announced raising $100 million in funding from investors including hedge fund Viking Global Investors and CapitalG, Google parent company Alphabet’s independent growth fund. With that funding round, ID.me said it was valued at $1.5 billion… “We’re verifying more than 1% of the American adult population each quarter, and that’s starting to compress more to like 45 or 50 days,” said ID.me CEO Blake Hall. The company has more than 50 million users, he said, and signs up more than 230,000 new ones each day.

CNN also quotes a man who complains the state never gave him an option. “If I wanted unemployment, I had no choice but to do this.”


Maine Passes Facial Recognition Law

The new law prohibits government use of facial recognition except in specifically outlined situations, the broadest exception being when police have probable cause that an unidentified person in an image committed a serious crime, or for proactive fraud prevention. Since Maine police will not have direct access to facial recognition, they will be able to ask the FBI and the Maine Bureau of Motor Vehicles (BMV) to run these searches for them.

Crucially, the law plugs loopholes that police have used in the past to gain access to the technology, like informally asking other agencies or third parties to run backchannel searches for them. Logs of all facial recognition searches by the BMV must be created and are designated as public records. The only other state-wide facial recognition law was enacted by Washington in 2020, but many privacy advocates were dissatisfied with the specifics of the law. Maine’s new law also gives citizens the ability to sue the state if they’ve been unlawfully targeted by facial recognition, which was notably absent from Washington’s regulation. If facial recognition searches are performed illegally, they must be deleted and cannot be used as evidence.


A Government Watchdog May Have Missed Clearview AI Use By Five Federal Agencies

A government inquiry into federal agencies’ deployment of facial recognition may have overlooked some organizations’ use of popular biometric identification software Clearview AI, calling into question whether authorities can understand the extent to which the emerging technology has been used by taxpayer-funded entities. In a 92-page report published by the Government Accountability Office on Tuesday, five agencies — the US Capitol Police, the US Probation Office, the Pentagon Force Protection Agency, Transportation Security Administration, and the Criminal Investigation Division at the Internal Revenue Service — said they didn’t use Clearview AI between April 2018 and March 2020. This, however, contradicts internal Clearview data previously reviewed by BuzzFeed News.

In April, BuzzFeed News revealed that those five agencies were among more than 1,800 US taxpayer-funded entities that had employees who tried or used Clearview AI, based on internal company data. As part of that story, BuzzFeed News published a searchable table disclosing all the federal, state, and city government organizations whose employees are listed in the data as having used the facial recognition software as of February 2020. While the GAO was tasked with “review[ing] federal law enforcement use of facial recognition technology,” the discrepancies between the report, which was based on survey responses, and BuzzFeed News’ past reporting suggest that even the US government may not be equipped to track how its own agencies access surveillance tools like Clearview. The GAO report surveyed 42 federal agencies in total, 20 of which reported that they either owned their own facial recognition system or used one developed by a third party between April 2018 and March 2020. Ten federal agencies — including Immigration and Customs Enforcement and Customs and Border Protection — said they specifically used Clearview AI.


New Site Extracts and Posts Every Face from Parler’s Capitol Hill Insurrection Videos

“Late last week, a website called Faces of the Riot appeared online, showing nothing but a vast grid of more than 6,000 images of faces, each one tagged only with a string of characters associated with the Parler video in which it appeared,” reports WIRED, saying the site raises clear privacy concerns:
The site’s creator tells WIRED that he used simple, open source machine-learning and facial recognition software to detect, extract, and deduplicate every face from the 827 videos that were posted to Parler from inside and outside the Capitol building on January 6, the day when radicalized Trump supporters stormed the building in a riot that resulted in five people’s deaths. The creator of Faces of the Riot says his goal is to allow anyone to easily sort through the faces pulled from those videos to identify someone they may know, or recognize who took part in the mob, or even to reference the collected faces against FBI wanted posters and send a tip to law enforcement if they spot someone… “It’s entirely possible that a lot of people who were on this website now will face real-life consequences for their actions….”
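
The detect, extract, and deduplicate pipeline the creator describes is achievable with off-the-shelf open source tools. A hedged sketch of what such a pipeline might look like, using the face_recognition library and OpenCV, follows; the paths, frame-sampling rate, and dedup tolerance are assumptions rather than details of the actual site:

```python
# Sketch: pull frames from videos, detect faces, and keep only unseen ones.
import glob

import cv2
import face_recognition

unique_encodings, unique_faces = [], []

for path in glob.glob("videos/*.mp4"):
    video = cv2.VideoCapture(path)
    frame_idx = 0
    while True:
        ok, frame = video.read()
        if not ok:
            break
        frame_idx += 1
        if frame_idx % 30:   # sample roughly one frame per second
            continue
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        locations = face_recognition.face_locations(rgb)
        encodings = face_recognition.face_encodings(rgb, locations)
        for (top, right, bottom, left), enc in zip(locations, encodings):
            # Deduplicate: skip faces close to one we've already kept.
            if any(face_recognition.compare_faces(unique_encodings, enc, tolerance=0.6)):
                continue
            unique_encodings.append(enc)
            unique_faces.append(rgb[top:bottom, left:right])

print(f"extracted {len(unique_faces)} unique faces")
```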

A recent upgrade to the site adds hyperlinks from faces to the video source, so that visitors can click on any face and see what the person was filmed doing on Parler. The Faces of the Riot creator, who says he’s a college student in the “greater DC area,” intends that added feature to help contextualize every face’s inclusion on the site and differentiate between bystanders, peaceful protesters, and violent insurrectionists. He concedes that he and a co-creator are still working to scrub “non-rioter” faces, including those of police and press who were present. A message at the top of the site also warns against vigilante investigations, instead suggesting users report those they recognize to the FBI, with a link to an FBI tip page….

Kyle McDonald, an artist and programmer, has previously both criticized the power of facial recognition technology and implemented facial recognition projects of his own, like ICEspy, a tool he launched in 2018 for identifying agents of the Immigration and Customs Enforcement agency… He sees Faces of the Riot as “playing it really safe” compared even to his own facial recognition experiments, given that it doesn’t seek to link faces with named identities. “And I think it’s a good call because I don’t think that we need to legitimize this technology any more than it already is and has been falsely legitimized,” McDonald says.

But McDonald also points out that Faces of the Riot demonstrates just how accessible facial recognition technologies have become. “It shows how this tool that has been restricted only to people who have the most education, the most power, the most privilege is now in this more democratized state,” McDonald says.


France Bans Use of Drones To Police Protests In Paris

The Council of State said Paris police prefect Didier Lallement should halt “without delay” drone surveillance of gatherings on public roads. The ruling comes weeks after MPs backed a controversial security bill that includes police use of drones. Its main aim is to regulate how people share film or photos of police.

Privacy rights group La Quadrature du Net (LQDN) has argued that the bill’s main measures violate freedom of expression and that drones equipped with cameras cannot keep the peace but track individuals instead. The Council of State ruled there was “serious doubt over the legality” of drones without a prior text authorizing and setting out their use. LQDN said the only way the government could legalize drone surveillance now was in providing “impossible proof” that it was absolutely necessary to maintain law and order. The decision is the second setback in months for Parisian authorities’ drone plans. In May, the same court ruled that drones could not be used in the capital to track people in breach of France’s strict lockdown rules.


Police Are Using Facial Recognition For Minor Crimes, ‘Because They Can’

In a recent court filing, the New York police department noted that it’s turned to facial recognition in more than 22,000 cases in the last three years. “Even though the NYPD claims facial recognition is only used for serious crimes, the numbers tell a different story,” said Albert Fox Cahn, the executive director of the Surveillance Technology Oversight Project. “As facial recognition continues to grow, it’s being routinely deployed for everything from shoplifting to graffiti.”

Asked for comment, an NYPD spokeswoman pointed to a 2019 opinion article by police commissioner James O’Neill titled “How Facial Recognition Makes You Safer.” In the piece, O’Neill talked about how facial recognition had been used to make arrests in murder, robbery and rape cases, but he didn’t disclose how often it was used for low-level crimes. The department’s facial recognition policy, established in March, allows the technology to be used for any crime, no matter the severity. Without any limits, police have used the technology more frequently for petty thefts than for dangerous crimes, privacy advocates say. Before Amazon put a moratorium on police use of its Rekognition face-identifying software, the program was used in a $12 shoplifting case in Oregon in 2018…

Without any limits, police can use facial recognition however they please, and in many cases, arrested suspects don’t even know that the flawed technology was used… Attorneys representing protesters in Miami didn’t know that police used facial recognition in their arrests, according to an NBC Miami report. Police used facial recognition software in a $50 drug dealing case in Florida in 2016 but made no mention of it in the arrest report.

The article also notes that as recently as this Tuesday, Hoan Ton-That, the CEO of facial recognition startup Clearview AI, “said it isn’t the company’s responsibility to make sure its technology is being properly used by its thousands of police partners.

“Though the company has its own guidelines, Ton-That said Clearview AI wouldn’t be enforcing them, saying that ‘it’s not our job to set the policy as a tech company…'”


Facial Recognition Designed To Detect Around Face Masks Is Failing, Study Finds

Many facial recognition companies have claimed they can identify people with pinpoint accuracy even while they’re wearing face masks, but the latest results from a study show that the coverings are dramatically increasing error rates.

In an update Tuesday, the US National Institute of Standards and Technology looked at 41 facial recognition algorithms submitted after the COVID-19 pandemic was declared in mid-March. Many of these algorithms were designed with face masks in mind, and their developers claimed they could still accurately identify people even when half of the face was covered. In July, NIST released a report noting that face masks were thwarting regular facial recognition algorithms, with error rates ranging from 5% to 50%. NIST is widely considered the leading authority on facial recognition accuracy testing, and it expected algorithms to improve at identifying people in face masks. That day has yet to come: every algorithm experienced at least a marginal increase in error rates once masks came into the picture. While some algorithms remained relatively accurate overall (Chinese facial recognition company Dahua’s error rate, for instance, went from 0.3% without masks to 6% with masks), others saw error rates climb to as high as 99%.

Rank One, a facial recognition provider used in cities like Detroit, had an error rate of 0.6% without masks and a 34.5% error rate once masks were digitally applied. In May, the company started offering “periocular recognition,” which it claimed could identify people from just their eyes and nose. TrueFace, which is used in schools and on Air Force bases, saw its algorithm’s error rate go from 0.9% to 34.8% once masks were added. The company’s CEO, Shaun Moore, told CNN on Aug. 12 that its researchers were working on a better algorithm for detecting beyond masks.


Clearview AI CEO Says ‘Over 2,400 Police Agencies’ Are Using Its Facial Recognition Software

More than 2,400 police agencies have entered contracts with Clearview AI, a controversial facial recognition firm, according to comments made by Clearview AI CEO Hoan Ton-That in an interview with Jason Calacanis on YouTube.

The hour-long interview references an investigation by The New York Times published in January, which detailed how Clearview AI scraped data from sites including Facebook, YouTube, and Venmo to build its database. The scale of that database and the methods used to construct it were already controversial before the summer of protests against police violence. “It’s an honor to be at the center of the debate now and talk about privacy,” Ton-That says in the interview, going on to call the Times investigation “actually extremely fair.” “Since then, there’s been a lot of controversy, but fundamentally, this is such a great tool for society,” Ton-That says.

Ton-That also gave a few more details on how the business runs. Clearview is paid depending on how many licenses a client adds, among other factors, but in the interview Ton-That describes the licenses as “pretty inexpensive, compared to what’s come previously.” He ballparks Clearview’s fees at $2,000 a year for each officer with access. According to Ton-That, Clearview AI is primarily used by detectives.

Clearview AI was used at least once to identify protesters in Miami.

Facial recognition was also used by the New York Police Department to arrest an activist during the Black Lives Matter uprising this summer. According to a BuzzFeed News report in February, the NYPD was at the time the largest user of Clearview AI, with more than 30 officers holding Clearview accounts.


Police in Several US Cities Used Facial Recognition To Hunt Down and Arrest Protesters

Law enforcement in several cities, including New York and Miami, have reportedly been using controversial facial recognition software to track down and arrest individuals who allegedly participated in criminal activity during Black Lives Matter protests months after the fact. Miami police used Clearview AI to identify and arrest a woman for allegedly throwing a rock at a police officer during a May protest, local NBC affiliate WTVJ reported this week…

Similar reports have surfaced from around the country in recent weeks. Police in Columbia, South Carolina, and the surrounding county likewise used facial recognition, though from a different vendor, to arrest several protesters after the fact, according to local paper The State. Investigators in Philadelphia also used facial recognition software, from a third vendor, to identify protesters from photos posted to Instagram, The Philadelphia Inquirer reported.


Emotion Recognition Tech Should Be Banned, Says an AI Research Institute

A leading research centre has called for new laws to restrict the use of emotion-detecting tech. The AI Now Institute says the field is “built on markedly shaky foundations.” Despite this, systems are on sale to help vet job seekers, test criminal suspects for signs of deception, and set insurance prices. It wants such software to be banned from use in important decisions that affect people’s lives and/or determine their access to opportunities. The US-based body has found support in the UK from the founder of a company developing its own emotional-response technologies, though he cautioned that any restrictions would need to be nuanced enough not to hamper all work being done in the area.

AI Now refers to the technology by its formal name, affect recognition, in its annual report. It says the sector is undergoing a period of significant growth and could already be worth as much as $20 billion. “It claims to read, if you will, our inner-emotional states by interpreting the micro-expressions on our face, the tone of our voice or even the way that we walk,” explained co-founder Prof Kate Crawford. “It’s being used everywhere, from how do you hire the perfect employee through to assessing patient pain, through to tracking which students seem to be paying attention in class. “At the same time as these technologies are being rolled out, large numbers of studies are showing that there is… no substantial evidence that people have this consistent relationship between the emotion that you are feeling and the way that your face looks.”
