Resources


Maine Passes Facial Recognition

The new law prohibits government use of facial recognition except in specifically outlined situations, with the broadest exception being when police have probable cause that an unidentified person in an image committed a serious crime, or for proactive fraud prevention. Because Maine police will not have direct access to facial recognition, they will instead be able to ask the FBI and the Maine Bureau of Motor Vehicles (BMV) to run these searches.

Crucially, the law plugs loopholes that police have used in the past to gain access to the technology, like informally asking other agencies or third parties to run backchannel searches for them. Logs of all facial recognition searches by the BMV must be created and are designated as public records. The only other state-wide facial recognition law was enacted by Washington in 2020, but many privacy advocates were dissatisfied with the specifics of the law. Maine’s new law also gives citizens the ability to sue the state if they’ve been unlawfully targeted by facial recognition, which was notably absent from Washington’s regulation. If facial recognition searches are performed illegally, they must be deleted and cannot be used as evidence.

A Government Watchdog May Have Missed Clearview AI Use By Five Federal Agencies

A government inquiry into federal agencies’ deployment of facial recognition may have overlooked some organizations’ use of popular biometric identification software Clearview AI, calling into question whether authorities can understand the extent to which the emerging technology has been used by taxpayer-funded entities. In a 92-page report published by the Government Accountability Office on Tuesday, five agencies — the US Capitol Police, the US Probation Office, the Pentagon Force Protection Agency, Transportation Security Administration, and the Criminal Investigation Division at the Internal Revenue Service — said they didn’t use Clearview AI between April 2018 and March 2020. This, however, contradicts internal Clearview data previously reviewed by BuzzFeed News.

In April, BuzzFeed News revealed that those five agencies were among more than 1,800 US taxpayer-funded entities that had employees who tried or used Clearview AI, based on internal company data. As part of that story, BuzzFeed News published a searchable table disclosing all the federal, state, and city government organizations whose employees are listed in the data as having used the facial recognition software as of February 2020. While the GAO was tasked with “review[ing] federal law enforcement use of facial recognition technology,” the discrepancies between the report, which was based on survey responses, and BuzzFeed News’ past reporting suggest that even the US government may not be equipped to track how its own agencies access surveillance tools like Clearview. The GAO report surveyed 42 federal agencies in total, 20 of which reported that they either owned their own facial recognition system or used one developed by a third party between April 2018 and March 2020. Ten federal agencies — including Immigration and Customs Enforcement and Customs and Border Protection — said they specifically used Clearview AI.

New Site Extracts and Posts Every Face from Parler’s Capitol Hill Insurrection Videos

“Late last week, a website called Faces of the Riot appeared online, showing nothing but a vast grid of more than 6,000 images of faces, each one tagged only with a string of characters associated with the Parler video in which it appeared,” reports WIRED, saying the site raises clear privacy concerns:
The site’s creator tells WIRED that he used simple, open source machine-learning and facial recognition software to detect, extract, and deduplicate every face from the 827 videos that were posted to Parler from inside and outside the Capitol building on January 6, the day when radicalized Trump supporters stormed the building in a riot that resulted in five people’s deaths. The creator of Faces of the Riot says his goal is to allow anyone to easily sort through the faces pulled from those videos to identify someone they may know, or recognize who took part in the mob, or even to reference the collected faces against FBI wanted posters and send a tip to law enforcement if they spot someone… “It’s entirely possible that a lot of people who were on this website now will face real-life consequences for their actions….”
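The pipeline the creator describes (detect faces in video frames, extract them, deduplicate near-identical crops) can be sketched with open-source tools. Below is a minimal illustration of the deduplication step in plain NumPy; the commented lines indicate where a library such as the open-source `face_recognition` package would supply real 128-dimensional face encodings. This is a hypothetical sketch, not the site’s actual code:

```python
import numpy as np

def dedupe_faces(encodings, tolerance=0.6):
    """Greedy deduplication: keep an encoding only if it is farther than
    `tolerance` (Euclidean distance) from every encoding kept so far."""
    kept = []
    for enc in encodings:
        if all(np.linalg.norm(enc - k) > tolerance for k in kept):
            kept.append(enc)
    return kept

# In a real pipeline the encodings would come from a face recognition
# model run over frames extracted from each video, e.g.:
#   import face_recognition
#   image = face_recognition.load_image_file("frame_0001.jpg")
#   encodings = face_recognition.face_encodings(image)

# Synthetic demo: two near-identical 128-d vectors and one distinct one.
a = np.zeros(128)
b = a + 0.01      # distance ~0.11 from `a`: treated as the same face
c = np.ones(128)  # distance ~11.3 from `a`: a different face
unique = dedupe_faces([a, b, c])
print(len(unique))  # 2
```

With a threshold around 0.6 (the conventional cutoff for that library’s encodings), repeated crops of the same face collapse to a single entry while distinct faces survive, which is how 827 videos can reduce to a grid of roughly 6,000 faces.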

A recent upgrade to the site adds hyperlinks from faces to the video source, so that visitors can click on any face and see what the person was filmed doing on Parler. The Faces of the Riot creator, who says he’s a college student in the “greater DC area,” intends that added feature to help contextualize every face’s inclusion on the site and differentiate between bystanders, peaceful protesters, and violent insurrectionists. He concedes that he and a co-creator are still working to scrub “non-rioter” faces, including those of police and press who were present. A message at the top of the site also warns against vigilante investigations, instead suggesting users report those they recognize to the FBI, with a link to an FBI tip page….

Artist and programmer Kyle McDonald, quoted by WIRED, has previously both criticized the power of facial recognition technology and himself implemented facial recognition projects like ICEspy, a tool he launched in 2018 for identifying agents of the Immigration and Customs Enforcement agency… He sees Faces of the Riot as “playing it really safe” compared even to his own facial recognition experiments, given that it doesn’t seek to link faces with named identities. “And I think it’s a good call because I don’t think that we need to legitimize this technology any more than it already is and has been falsely legitimized,” McDonald says.

But McDonald also points out that Faces of the Riot demonstrates just how accessible facial recognition technologies have become. “It shows how this tool that has been restricted only to people who have the most education, the most power, the most privilege is now in this more democratized state,” McDonald says.

France Bans Use of Drones To Police Protests In Paris

The Council of State said Paris police prefect Didier Lallement should halt “without delay” drone surveillance of gatherings on public roads. The ruling comes weeks after MPs backed a controversial security bill that includes police use of drones. Its main aim is to regulate how people share film or photos of police.

Privacy rights group La Quadrature du Net (LQDN) has argued that the bill’s main measures violate freedom of expression and that drones equipped with cameras cannot keep the peace but track individuals instead. The Council of State ruled there was “serious doubt over the legality” of drones without a prior text authorizing and setting out their use. LQDN said the only way the government could legalize drone surveillance now was in providing “impossible proof” that it was absolutely necessary to maintain law and order. The decision is the second setback in months for Parisian authorities’ drone plans. In May, the same court ruled that drones could not be used in the capital to track people in breach of France’s strict lockdown rules.

Police Are Using Facial Recognition For Minor Crimes, ‘Because They Can’

In a recent court filing, the New York police department noted that it’s turned to facial recognition in more than 22,000 cases in the last three years. “Even though the NYPD claims facial recognition is only used for serious crimes, the numbers tell a different story,” said Albert Fox Cahn, the executive director of the Surveillance Technology Oversight Project. “As facial recognition continues to grow, it’s being routinely deployed for everything from shoplifting to graffiti.”

Asked for comment, an NYPD spokeswoman pointed to a 2019 opinion article by police commissioner James O’Neill titled “How Facial Recognition Makes You Safer.” In the piece, O’Neill talked about how facial recognition had been used to make arrests in murder, robbery and rape cases, but he didn’t disclose how often it was used for low-level crimes. The department’s facial recognition policy, established in March, allows the technology to be used for any crime, no matter the severity. Without any limits, police have more frequently used the technology for petty thefts than for dangerous crimes, privacy advocates say. Before Amazon put a moratorium on police use of its Rekognition face-identifying software, the program was used in a $12 shoplifting case in Oregon in 2018…

Without any limits, police can use facial recognition however they please, and in many cases, arrested suspects don’t even know that the flawed technology was used… Attorneys representing protesters in Miami didn’t know that police used facial recognition in their arrests, according to an NBC Miami report. Police used facial recognition software in a $50 drug dealing case in Florida in 2016 but made no mention of it in the arrest report.

The article also notes that as recently as this Tuesday, Hoan Ton-That, the CEO of facial recognition startup Clearview AI, “said it isn’t the company’s responsibility to make sure its technology is being properly used by its thousands of police partners.

“Though the company has its own guidelines, Ton-That said Clearview AI wouldn’t be enforcing them, saying that ‘it’s not our job to set the policy as a tech company…'”

Facial Recognition Designed To Detect Around Face Masks Is Failing, Study Finds

Many facial recognition companies have claimed they can identify people with pinpoint accuracy even while they’re wearing face masks, but the latest results from a study show that the coverings are dramatically increasing error rates.

In an update Tuesday, the US National Institute of Standards and Technology looked at 41 facial recognition algorithms submitted after the COVID-19 pandemic was declared in mid-March. Many of these algorithms were designed with face masks in mind, and claimed that they were still able to accurately identify people, even when half of their face was covered. In July, NIST released a report noting that face masks were thwarting regular facial recognition algorithms, with error rates ranging from 5% to 50%. NIST is widely considered the leading authority on facial recognition accuracy testing, and expected algorithms to improve on identifying people in face masks. That day has yet to come, as every algorithm experienced at least marginal increases in error rates once masks came into the picture. While some algorithms remained fairly accurate overall (Chinese facial recognition company Dahua’s error rate, for example, rose from 0.3% without masks to 6% with masks), others saw error rates increase by as much as 99%.

Rank One, a facial recognition provider used in cities like Detroit, had an error rate of 0.6% without masks, and a 34.5% error rate once masks were digitally applied. In May, the company started offering “periocular recognition,” which claimed to be able to identify people from just their eyes and nose. TrueFace, which is used in schools and on Air Force bases, saw its algorithm error rate go from 0.9% to 34.8% once masks were added. The company’s CEO, Shaun Moore, told CNN on Aug. 12 that its researchers were working on a better algorithm for detecting beyond masks.

Clearview AI CEO Says ‘Over 2,400 Police Agencies’ Are Using Its Facial Recognition Software

More than 2,400 police agencies have entered contracts with Clearview AI, a controversial facial recognition firm, according to comments made by Clearview AI CEO Hoan Ton-That in an interview with Jason Calacanis on YouTube.

The hour-long interview references an investigation by The New York Times published in January, which detailed how Clearview AI scraped data from sites including Facebook, YouTube, and Venmo to build its database. The scale of that database and the methods used to construct it were already controversial before the summer of protests against police violence. “It’s an honor to be at the center of the debate now and talk about privacy,” Ton-That says in the interview, going on to call the Times investigation “actually extremely fair.” “Since then, there’s been a lot of controversy, but fundamentally, this is such a great tool for society,” Ton-That says.

Ton-That also gave a few more details on how the business runs. Clearview is paid depending on how many licenses a client adds, among other factors, but Ton-That describes the licenses as “pretty inexpensive, compared to what’s come previously” in his interview. Ton-That ballparks Clearview’s fees as $2,000 a year for each officer with access. According to Ton-That, Clearview AI is primarily used by detectives.

Clearview AI was used at least once to identify protesters in Miami.

Facial recognition was also used by the New York Police Department to arrest an activist during the Black Lives Matter uprising this summer. According to a BuzzFeed News report in February, NYPD was at the time the largest user of Clearview AI, with more than 30 officers holding Clearview accounts.

Police in Several US Cities Used Facial Recognition To Hunt Down and Arrest Protesters

Law enforcement in several cities, including New York and Miami, have reportedly been using controversial facial recognition software to track down and arrest individuals who allegedly participated in criminal activity during Black Lives Matter protests months after the fact. Miami police used Clearview AI to identify and arrest a woman for allegedly throwing a rock at a police officer during a May protest, local NBC affiliate WTVJ reported this week…

Similar reports have surfaced from around the country in recent weeks. Police in Columbia, South Carolina, and the surrounding county likewise used facial recognition, though from a different vendor, to arrest several protesters after the fact, according to local paper The State. Investigators in Philadelphia also used facial recognition software, from a third vendor, to identify protesters from photos posted to Instagram, The Philadelphia Inquirer reported.

Emotion Recognition Tech Should Be Banned, Says an AI Research Institute

A leading research centre has called for new laws to restrict the use of emotion-detecting tech. The AI Now Institute says the field is “built on markedly shaky foundations.” Despite this, systems are on sale to help vet job seekers, test criminal suspects for signs of deception, and set insurance prices. It wants such software to be banned from use in important decisions that affect people’s lives and/or determine their access to opportunities. The US-based body has found support in the UK from the founder of a company developing its own emotional-response technologies — but it cautioned that any restrictions would need to be nuanced enough not to hamper all work being done in the area.

AI Now refers to the technology by its formal name, affect recognition, in its annual report. It says the sector is undergoing a period of significant growth and could already be worth as much as $20 billion. “It claims to read, if you will, our inner-emotional states by interpreting the micro-expressions on our face, the tone of our voice or even the way that we walk,” explained co-founder Prof Kate Crawford. “It’s being used everywhere, from how do you hire the perfect employee through to assessing patient pain, through to tracking which students seem to be paying attention in class. “At the same time as these technologies are being rolled out, large numbers of studies are showing that there is… no substantial evidence that people have this consistent relationship between the emotion that you are feeling and the way that your face looks.”

United States Department of Homeland Security Will Soon Have Biometric Data On Nearly 260 Million People

The U.S. Department of Homeland Security (DHS) expects to have face, fingerprint, and iris scans of at least 259 million people in its biometrics database by 2022, according to a recent presentation from the agency’s Office of Procurement Operations reviewed by Quartz. That’s about 40 million more than the agency’s 2017 projections, which estimated 220 million unique identities by 2022, according to previous figures cited by the Electronic Frontier Foundation (EFF), a San Francisco-based privacy rights nonprofit.

A slide deck, shared with attendees at an Oct. 30 DHS industry day, includes a breakdown of what its systems currently contain, as well as an estimate of what the next few years will bring. The agency is transitioning from a legacy system called IDENT to a cloud-based system (hosted by Amazon Web Services) known as Homeland Advanced Recognition Technology, or HART. The biometrics collection maintained by DHS is the world’s second-largest, behind only India’s countrywide biometric ID network in size. The traveler data kept by DHS is shared with other U.S. agencies, state and local law enforcement, as well as foreign governments.

Vimeo Sued For Storing Faceprints of People Without Their Consent

Vimeo is collecting and storing thousands of people’s facial biometrics without their permission or knowledge, according to a complaint filed on September 20 on behalf of potentially thousands of plaintiffs under the Illinois Biometric Information Privacy Act (BIPA).

The suit takes aim at Vimeo’s Magisto application: a short-form video creation platform purchased by Vimeo in April 2019 that uses facial recognition to automatically index the faces of people in videos so they can be face-tagged. BIPA bans collecting and storing biometric data without explicit consent, including “faceprints.” The complaint against Vimeo claims that users of Magisto “upload millions of videos and/or photos per day, making videos and photographs a vital part of the Magisto experience.”

The complaint maintains that unbeknownst to the average consumer, Magisto scans “each and every video and photo uploaded to Magisto for faces” and analyzes “biometric identifiers,” including facial geometry, to “create and store a template for each face.” That template is later used to “organize and group together videos based upon the particular individuals appearing in the videos” by “comparing the face templates of individuals who appear in newly-edited videos or photos with the facial templates already saved in Magisto’s face database.”

The complaint also asserts that Magisto analyzes and face-matches the biometrics of non-Magisto users who happen to appear in the photos and videos, which is a violation of BIPA.

A Researcher Attempted To Opt Out of Facial Recognition at the Airport — It Wasn’t Easy

The announcement came as we began to board. Last month, I was at Detroit’s Metro Airport for a connecting flight to Southeast Asia. I listened as a Delta Air Lines staff member informed passengers that the boarding process would use facial recognition instead of passport scanners. As a privacy-conscious person, I was uncomfortable boarding this way. I also knew I could opt out. Presumably, most of my fellow fliers did not: I didn’t hear a single announcement alerting passengers how to avoid the face scanners.

To figure out how to do so, I had to leave the boarding line, speak with a Delta representative at their information desk, get back in line, then request a passport scan when it was my turn to board. Federal agencies and airlines claim that facial recognition is an opt-out system, but my recent experience suggests they are incentivizing travelers to have their faces scanned — and disincentivizing them from sidestepping the tech — by not clearly communicating alternative options. Last year, a Delta customer service representative reported that only 2 percent of customers opt out of facial recognition. It’s easy to see why.

Police using Google Images + Facial Recognition

“The New York Police Department used a photo of Woody Harrelson in its facial recognition program in an attempt to identify a beer thief who looked like the actor,” reports the Associated Press:

Georgetown University’s Center on Privacy and Technology highlighted the April 2017 episode in “Garbage In, Garbage Out,” a report on what it says are flawed practices in law enforcement’s use of facial recognition. The report says security footage of the thief was too pixelated and produced no matches while high-quality images of Harrelson, a three-time Oscar nominee, returned several possible matches and led to one arrest.

The NYPD also used a photo of a New York Knicks player to search its database for a man wanted for a Brooklyn assault, the report said.

“The stakes are too high in criminal investigations to rely on unreliable — or wrong — inputs,” Georgetown researcher Clare Garvie wrote…. The Georgetown report says facial recognition has helped the NYPD crack about 2,900 cases in more than five years of using the technology.

And in Florida, Vice reports, law enforcement agencies “run roughly 8,000 of these searches per month.”

Facial Recognition to board a plane

A boarding technology for travelers using JetBlue is causing controversy due to a social media thread on the airline’s use of facial recognition. Last week, traveler MacKenzie Fegan described her experience with the biometric technology in a social media post that got the attention of JetBlue’s official account. She began: “I just boarded an international @JetBlue flight. Instead of scanning my boarding pass or handing over my passport, I looked into a camera before being allowed down the jet bridge. Did facial recognition replace boarding passes, unbeknownst to me? Did I consent to this?” JetBlue was ready to offer Twitterized sympathy: “You’re able to opt out of this procedure, MacKenzie. Sorry if this made you feel uncomfortable.”

But once you start thinking about these things, your thoughts become darker. Fegan wanted to know how JetBlue knew what she looked like. JetBlue explained: “The information is provided by the United States Department of Homeland Security from existing holdings.” Fegan wondered by what right a private company suddenly had her biometric data. JetBlue insisted it doesn’t have access to the data. It’s “securely transmitted to the Customs and Border Protection database.” Fegan wanted to know how this could have possibly happened so quickly. Could it be that in just a few seconds her biometric data was whipped “securely” around government departments so that she would be allowed on the plane? JetBlue referred her to an article on the subject, which was a touch on the happy-PR side. Fegan was moved, but not positively, by the phrase “there is no pre-registration required.”

Microsoft Turned Down Facial-Recognition Sales over “Human Rights Concerns”

Microsoft recently rejected a California law enforcement agency’s request to install facial recognition technology in officers’ cars and body cameras due to human rights concerns, company President Brad Smith said on Tuesday. Microsoft concluded it would lead to innocent women and minorities being disproportionately held for questioning because the artificial intelligence has been trained on mostly white and male pictures. AI has more cases of mistaken identity with women and minorities, multiple research projects have found.

Smith explained the decisions as part of a commitment to human rights that he said was increasingly critical as rapid technological advances empower governments to conduct blanket surveillance, deploy autonomous weapons and take other steps that might prove impossible to reverse. Smith also said at a Stanford University conference that Microsoft had declined a deal to install facial recognition on cameras blanketing the capital city of an unnamed country that the nonprofit Freedom House had deemed not free. Smith said it would have suppressed freedom of assembly there.

On the other hand, Microsoft did agree to provide the technology to an American prison, after the company concluded that the environment would be limited and that it would improve safety inside the unnamed institution.

You Will Soon Be Able To Pay Your Subway Fare With Your Face in China

China has led the world in adoption of smartphone-based mobile payments to the point where the central bank had to remind merchants not to discriminate against cash. The next phase of development may be to pay with your face.

In Shenzhen, the local subway operator is testing various advanced technologies backed by the ultra-fast 5G network, including facial-recognition ticketing.

At the Futian station, instead of presenting a ticket or scanning a QR bar code on their smartphones, commuters can scan their faces on a tablet-sized screen mounted on the entrance gate and have the fare automatically deducted from their linked accounts.

Currently in a trial mode, the facial-recognition ticketing service could in the future help improve the efficiency of handling the up to 5 million rides per day on the city’s subway network. Shenzhen Metro did not elaborate on when it will roll out the facial payment service.

The introduction of facial recognition-and-payment services to the public transit system marks another step by China toward integrating facial recognition and other artificial intelligence-based technology into everyday life in the world’s most populous nation.

Consumers can already pay for fried chicken at KFC in China with its “Smile to Pay” facial recognition system, first introduced at an outlet in Hangzhou in January 2017.

“To use facial ticketing in the future, passengers will also need preregistration of their facial information and link their payment methods to their accounts, just like them making payments at the KFC restaurant,” said a staff member at the Futian station’s demonstration area in Shenzhen.

China may use facial recognition to stop kids from live streaming

Chinese cities are among the most digitally savvy and cashless in the world, with about 583 million people using their smartphones to make payments in China last year, according to the China Internet Network Information Center. Nearly 68 per cent of China’s internet users used a mobile wallet for their offline payments.

Chinese schools enforce ‘smart uniforms’ with GPS tracking system to monitor students

Chinese schools have begun enforcing “smart uniforms” embedded with computer chips to monitor student movements and prevent them from skipping classes.

Eleven schools in the south-west province of Guizhou have introduced the uniforms, which were developed by local tech firm Guizhou Guanyu Technology.

As students enter the school, the time and date is recorded along with a short video that parents can access via a mobile app.

Facial recognition further ensures that each uniform is worn by its rightful owner to prevent students from cheating the system.

Skipping classes triggers an alarm to inform teachers and parents of the truancy, while an automatic voice alarm activates if a student walks out of school without permission.

A GPS system tracks student movements even beyond the school grounds.

The two chips — inserted into each uniform’s shoulders — can withstand up to 500 washes and 150 degrees Celsius, the company told state media Global Times.

Alarms will also sound if a student falls asleep in class, while parents can monitor purchases their child makes at the school and set spending limits via a mobile app, according to the company’s official website.

Facebook Filed A Patent To Predict Your Household’s Demographics Based On Family Photos

Facebook has submitted a patent application for technology that would predict who your family and other household members are, based on images and captions posted to Facebook, as well as your device information, like shared IP addresses. The application, titled “Predicting household demographics based on image data,” was originally filed May 10, 2017, and made public today.

The system Facebook proposes in its patent application would use facial recognition and learning models trained to understand text to help Facebook better understand whom you live with and interact with most. The technology described in the patent looks for clues in your profile pictures on Facebook and Instagram, as well as photos of you that you or your friends post.

It would note the people identified in a photo, and how frequently the people are included in your pictures. Then, it would assess information from comments on the photos, captions, or tags (#family, #mom, #kids) — anything that indicates whether someone is a husband, daughter, cousin, etc. — to predict what your family/household actually looks like. According to the patent application, Facebook’s prediction models would also analyze “messaging history, past tagging history, [and] web browsing history” to see if multiple people share IP addresses (a unique identifier for every internet network).
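Of the signals the application lists, the shared-IP one is the simplest to illustrate: accounts that repeatedly appear behind the same network address can be grouped as a candidate household. A toy sketch of that grouping step (hypothetical data and function names; this is not Facebook’s implementation):

```python
from collections import defaultdict

def candidate_households(sessions, min_shared=2):
    """Group account IDs by the IP addresses they log in from.

    sessions: iterable of (account_id, ip_address) login records.
    Returns a dict mapping each IP to the set of accounts seen on it,
    keeping only IPs shared by at least `min_shared` accounts.
    """
    by_ip = defaultdict(set)
    for account, ip in sessions:
        by_ip[ip].add(account)
    return {ip: accts for ip, accts in by_ip.items() if len(accts) >= min_shared}

# Hypothetical login records: three accounts behind one home IP,
# one account on its own.
logins = [
    ("alice", "203.0.113.7"),
    ("bob", "203.0.113.7"),
    ("carol", "203.0.113.7"),
    ("dave", "198.51.100.2"),
]
print(sorted(candidate_households(logins)["203.0.113.7"]))
# ['alice', 'bob', 'carol']
```

In the patent’s framing, a grouping like this would be only one feature among many; the tag and caption signals (#family, #mom) would then weight which co-located accounts are actually relatives rather than, say, roommates.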

Australia’s near-real-time facial recognition system, chilling effects

Civil rights groups have warned a vast, powerful system allowing the near real-time matching of citizens’ facial images risks a “profound chilling effect” on protest and dissent.

The technology – known in shorthand as “the capability” – collects and pools facial imagery from various state and federal government sources, including driver’s licences, passports and visas.

The biometric information can then rapidly – almost in real time – be compared with other sources, such as CCTV footage, to match identities.

The system, chiefly controlled by the federal Department of Home Affairs, is designed to give intelligence and security agencies a powerful tool to deter identity crime, and quickly identify terror and crime suspects.

But it has prompted serious concern among academics, human rights groups and privacy experts. The system sweeps up and processes citizens’ sensitive biometric information regardless of whether they have committed or are suspected of an offence.

Chinese ‘Gait Recognition’ Tech IDs People By How They Walk; Police Have Started Using It on Streets of Beijing and Shanghai

Already used by police on the streets of Beijing and Shanghai, “gait recognition” is part of a push across China to develop artificial-intelligence and data-driven surveillance that is raising concern about how far the technology will go. Huang Yongzhen, the CEO of Watrix, said that its system can identify people from up to 50 meters (165 feet) away, even with their back turned or face covered. This can fill a gap in facial recognition, which needs close-up, high-resolution images of a person’s face to work. “You don’t need people’s cooperation for us to be able to recognize their identity,” Huang said in an interview in his Beijing office. “Gait analysis can’t be fooled by simply limping, walking with splayed feet or hunching over, because we’re analyzing all the features of an entire body.”