Clearview AI CEO Says ‘Over 2,400 Police Agencies’ Are Using Its Facial Recognition Software

More than 2,400 police agencies have entered contracts with Clearview AI, a controversial facial recognition firm, according to comments made by Clearview AI CEO Hoan Ton-That in an interview with Jason Calacanis on YouTube.

The hour-long interview references an investigation by The New York Times published in January, which detailed how Clearview AI scraped data from sites including Facebook, YouTube, and Venmo to build its database. The scale of that database and the methods used to construct it were already controversial before the summer of protests against police violence. “It’s an honor to be at the center of the debate now and talk about privacy,” Ton-That says in the interview, going on to call the Times investigation “actually extremely fair.” “Since then, there’s been a lot of controversy, but fundamentally, this is such a great tool for society,” Ton-That says.

Ton-That also gave a few more details on how the business runs. Clearview is paid based on how many licenses a client buys, among other factors, but Ton-That describes the licenses as “pretty inexpensive, compared to what’s come previously.” He ballparks Clearview’s fees at $2,000 a year for each officer with access, and says the software is primarily used by detectives.

Clearview AI was used at least once to identify protesters in Miami.

Facial recognition was also used by the New York Police Department to arrest an activist during the Black Lives Matter uprising this summer. According to a BuzzFeed News report in February, the NYPD was at the time the largest user of Clearview AI, with more than 30 officers holding Clearview accounts.

Police in Several US Cities Used Facial Recognition To Hunt Down and Arrest Protesters

Law enforcement in several cities, including New York and Miami, have reportedly been using controversial facial recognition software to track down and arrest individuals who allegedly participated in criminal activity during Black Lives Matter protests months after the fact. Miami police used Clearview AI to identify and arrest a woman for allegedly throwing a rock at a police officer during a May protest, local NBC affiliate WTVJ reported this week…

Similar reports have surfaced from around the country in recent weeks. Police in Columbia, South Carolina, and the surrounding county likewise used facial recognition, though from a different vendor, to arrest several protesters after the fact, according to local paper The State. Investigators in Philadelphia also used facial recognition software, from a third vendor, to identify protestors from photos posted to Instagram, The Philadelphia Inquirer reported.

Emotion Recognition Tech Should Be Banned, Says an AI Research Institute

A leading research centre has called for new laws to restrict the use of emotion-detecting tech. The AI Now Institute says the field is “built on markedly shaky foundations.” Despite this, systems are on sale to help vet job seekers, test criminal suspects for signs of deception, and set insurance prices. It wants such software to be banned from use in important decisions that affect people’s lives and/or determine their access to opportunities. The US-based body has found support in the UK from the founder of a company developing its own emotional-response technologies — but it cautioned that any restrictions would need to be nuanced enough not to hamper all work being done in the area.

AI Now refers to the technology by its formal name, affect recognition, in its annual report. It says the sector is undergoing a period of significant growth and could already be worth as much as $20 billion. “It claims to read, if you will, our inner-emotional states by interpreting the micro-expressions on our face, the tone of our voice or even the way that we walk,” explained co-founder Prof Kate Crawford. “It’s being used everywhere, from how do you hire the perfect employee through to assessing patient pain, through to tracking which students seem to be paying attention in class. At the same time as these technologies are being rolled out, large numbers of studies are showing that there is… no substantial evidence that people have this consistent relationship between the emotion that you are feeling and the way that your face looks.”

United States’ Department of Homeland Security Will Soon Have Biometric Data On Nearly 260 Million People

The U.S. Department of Homeland Security (DHS) expects to have face, fingerprint, and iris scans of at least 259 million people in its biometrics database by 2022, according to a recent presentation from the agency’s Office of Procurement Operations reviewed by Quartz. That’s about 40 million more than the agency’s 2017 projections, which estimated 220 million unique identities by 2022, according to previous figures cited by the Electronic Frontier Foundation (EFF), a San Francisco-based privacy rights nonprofit.

A slide deck, shared with attendees at an Oct. 30 DHS industry day, includes a breakdown of what its systems currently contain, as well as an estimate of what the next few years will bring. The agency is transitioning from a legacy system called IDENT to a cloud-based system (hosted by Amazon Web Services) known as Homeland Advanced Recognition Technology, or HART. The biometrics collection maintained by DHS is the world’s second-largest, behind only India’s countrywide biometric ID network. The traveler data kept by DHS is shared with other U.S. agencies, state and local law enforcement, and foreign governments.

Vimeo Sued For Storing Faceprints of People Without Their Consent

Vimeo is collecting and storing thousands of people’s facial biometrics without their permission or knowledge, according to a complaint filed on September 20 on behalf of potentially thousands of plaintiffs under the Illinois Biometric Information Privacy Act (BIPA).

The suit takes aim at Vimeo’s Magisto application: a short-form video creation platform purchased by Vimeo in April 2019 that uses facial recognition to automatically index the faces of people in videos so they can be face-tagged. BIPA bans collecting and storing biometric data without explicit consent, including “faceprints.” The complaint against Vimeo claims that users of Magisto “upload millions of videos and/or photos per day, making videos and photographs a vital part of the Magisto experience.”

The complaint maintains that unbeknownst to the average consumer, Magisto scans “each and every video and photo uploaded to Magisto for faces” and analyzes “biometric identifiers,” including facial geometry, to “create and store a template for each face.” That template is later used to “organize and group together videos based upon the particular individuals appearing in the videos” by “comparing the face templates of individuals who appear in newly-edited videos or photos with the facial templates already saved in Magisto’s face database.”
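The workflow the complaint describes (detect faces, derive a numeric template, and compare new templates against a stored database) can be sketched in highly simplified form. Everything below, including the toy 3-dimensional templates, the identities, and the distance threshold, is an illustrative assumption rather than Magisto’s actual implementation:

```python
import math

def distance(a, b):
    """Euclidean distance between two face templates (embedding vectors)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_face(template, database, threshold=0.6):
    """Return the stored identity whose template is closest to `template`,
    or None if no stored template is within `threshold`."""
    best_id, best_dist = None, threshold
    for identity, stored in database.items():
        d = distance(template, stored)
        if d < best_dist:
            best_id, best_dist = identity, d
    return best_id

# Toy 3-dimensional "templates"; real systems use far higher-dimensional vectors.
db = {"person_a": [0.1, 0.9, 0.3], "person_b": [0.8, 0.2, 0.5]}
print(match_face([0.12, 0.88, 0.31], db))  # near person_a's stored template
print(match_face([0.45, 0.55, 0.9], db))   # no stored template within threshold
```

The BIPA question is about the first step of this pipeline: under the statute, even computing and storing the templates requires explicit consent, regardless of how the matching is later used.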

The complaint also asserts that Magisto analyzes and face-matches the biometrics of non-Magisto users who happen to appear in the photos and videos, which is a violation of BIPA.

A Researcher Attempted To Opt Out of Facial Recognition at the Airport — It Wasn’t Easy

The announcement came as we began to board. Last month, I was at Detroit’s Metro Airport for a connecting flight to Southeast Asia. I listened as a Delta Air Lines staff member informed passengers that the boarding process would use facial recognition instead of passport scanners. As a privacy-conscious person, I was uncomfortable boarding this way. I also knew I could opt out. Presumably, most of my fellow fliers did not: I didn’t hear a single announcement alerting passengers how to avoid the face scanners.

To figure out how to do so, I had to leave the boarding line, speak with a Delta representative at their information desk, get back in line, then request a passport scan when it was my turn to board. Federal agencies and airlines claim that facial recognition is an opt-out system, but my recent experience suggests they are incentivizing travelers to have their faces scanned — and disincentivizing them from sidestepping the tech — by not clearly communicating alternative options. Last year, a Delta customer service representative reported that only 2 percent of customers opt out of facial recognition. It’s easy to see why.

Police using Google Images + Facial Recognition

“The New York Police Department used a photo of Woody Harrelson in its facial recognition program in an attempt to identify a beer thief who looked like the actor,” reports the Associated Press:

Georgetown University’s Center on Privacy and Technology highlighted the April 2017 episode in “Garbage In, Garbage Out,” a report on what it says are flawed practices in law enforcement’s use of facial recognition. The report says security footage of the thief was too pixelated and produced no matches while high-quality images of Harrelson, a three-time Oscar nominee, returned several possible matches and led to one arrest.

The NYPD also used a photo of a New York Knicks player to search its database for a man wanted for a Brooklyn assault, the report said.

“The stakes are too high in criminal investigations to rely on unreliable — or wrong — inputs,” Georgetown researcher Clare Garvie wrote…. The Georgetown report says facial recognition has helped the NYPD crack about 2,900 cases in more than five years of using the technology.

And in Florida, Vice reports, law enforcement agencies “run roughly 8,000 of these searches per month.”

Facial Recognition to board a plane

A boarding technology for travelers using JetBlue is causing controversy due to a social media thread on the airline’s use of facial recognition. Last week, traveler MacKenzie Fegan described her experience with the biometric technology in a social media post that got the attention of JetBlue’s official account. She began: “I just boarded an international @JetBlue flight. Instead of scanning my boarding pass or handing over my passport, I looked into a camera before being allowed down the jet bridge. Did facial recognition replace boarding passes, unbeknownst to me? Did I consent to this?” JetBlue was ready to offer Twitterized sympathy: “You’re able to opt out of this procedure, MacKenzie. Sorry if this made you feel uncomfortable.”

But once you start thinking about these things, your thoughts become darker. Fegan wanted to know how JetBlue knew what she looked like. JetBlue explained: “The information is provided by the United States Department of Homeland Security from existing holdings.” Fegan wondered by what right a private company suddenly had her biometric data. JetBlue insisted it doesn’t have access to the data. It’s “securely transmitted to the Customs and Border Protection database.” Fegan wanted to know how this could have possibly happened so quickly. Could it be that in just a few seconds her biometric data was whipped “securely” around government departments so that she would be allowed on the plane? JetBlue referred her to an article on the subject, which was a touch on the happy-PR side. Fegan was moved, but not positively, by the phrase “there is no pre-registration required.”

Microsoft Turned Down Facial-Recognition Sales over “Human Rights Concerns”

Microsoft recently rejected a California law enforcement agency’s request to install facial recognition technology in officers’ cars and body cameras due to human rights concerns, company President Brad Smith said on Tuesday. Microsoft concluded the deployment would lead to innocent women and minorities being disproportionately held for questioning, because the artificial intelligence has been trained mostly on images of white and male faces. Multiple research projects have found that AI has more cases of mistaken identity with women and minorities.

Smith explained the decisions as part of a commitment to human rights that he said was increasingly critical as rapid technological advances empower governments to conduct blanket surveillance, deploy autonomous weapons and take other steps that might prove impossible to reverse. Smith also said at a Stanford University conference that Microsoft had declined a deal to install facial recognition on cameras blanketing the capital city of an unnamed country that the nonprofit Freedom House had deemed not free. Smith said it would have suppressed freedom of assembly there.

On the other hand, Microsoft did agree to provide the technology to an American prison, after the company concluded that the environment would be limited and that it would improve safety inside the unnamed institution.

You Will Soon Be Able To Pay Your Subway Fare With Your Face in China

China has led the world in adoption of smartphone-based mobile payments to the point where the central bank had to remind merchants not to discriminate against cash. The next phase of development may be to pay with your face.

In Shenzhen, the local subway operator is testing various advanced technologies backed by the ultra-fast 5G network, including facial-recognition ticketing.

At the Futian station, instead of presenting a ticket or scanning a QR bar code on their smartphones, commuters can scan their faces on a tablet-sized screen mounted on the entrance gate and have the fare automatically deducted from their linked accounts.

Currently in trial mode, the facial-recognition ticketing service could in future help improve the efficiency of handling up to 5 million rides per day on the city’s subway network. Shenzhen Metro did not say when it will roll out the facial payment service.

The introduction of facial recognition-and-payment services to the public transit system marks another step by China toward integrating facial recognition and other artificial intelligence-based technology into everyday life in the world’s most populous nation.

Consumers can already pay for fried chicken at KFC in China with its “Smile to Pay” facial recognition system, first introduced at an outlet in Hangzhou in January 2017.

“To use facial ticketing in the future, passengers will also need preregistration of their facial information and link their payment methods to their accounts, just like them making payments at the KFC restaurant,” said a staff member at the Futian station’s demonstration area in Shenzhen.

Chinese cities are among the most digitally savvy and cashless in the world, with about 583 million people using their smartphones to make payments in China last year, according to the China Internet Network Information Center. Nearly 68 per cent of China’s internet users used a mobile wallet for their offline payments.

Chinese schools enforce ‘smart uniforms’ with GPS tracking system to monitor students

Chinese schools have begun enforcing “smart uniforms” embedded with computer chips to monitor student movements and prevent them from skipping classes.

Eleven schools in the south-west province of Guizhou have introduced the uniforms, which were developed by local tech firm Guizhou Guanyu Technology.

As students enter the school, the time and date is recorded along with a short video that parents can access via a mobile app.

Facial recognition further ensures that each uniform is worn by its rightful owner to prevent students from cheating the system.

Skipping classes triggers an alarm to inform teachers and parents of the truancy, while an automatic voice alarm activates if a student walks out of school without permission.

A GPS system tracks student movements even beyond the school grounds.

The two chips — inserted into each uniform’s shoulders — can withstand up to 500 washes and 150 degrees Celsius, the company told state media Global Times.

Alarms will also sound if a student falls asleep in class, while parents can monitor purchases their child makes at the school and set spending limits via a mobile app, according to the company’s official website.

Facebook Filed A Patent To Predict Your Household’s Demographics Based On Family Photos

Facebook has submitted a patent application for technology that would predict who your family and other household members are, based on images and captions posted to Facebook, as well as your device information, like shared IP addresses. The application, titled “Predicting household demographics based on image data,” was originally filed May 10, 2017, and made public today.

The system Facebook proposes in its patent application would use facial recognition and learning models trained to understand text to help Facebook better understand whom you live with and interact with most. The technology described in the patent looks for clues in your profile pictures on Facebook and Instagram, as well as photos of you that you or your friends post.

It would note the people identified in a photo, and how frequently the people are included in your pictures. Then, it would assess information from comments on the photos, captions, or tags (#family, #mom, #kids) — anything that indicates whether someone is a husband, daughter, cousin, etc. — to predict what your family/household actually looks like. According to the patent application, Facebook’s prediction models would also analyze “messaging history, past tagging history, [and] web browsing history” to see if multiple people share IP addresses (a unique identifier for every internet network).
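The shared-IP signal described in the patent application can be illustrated with a minimal sketch: accounts that repeatedly appear on the same network address become candidate household members. The data, names, and grouping rule below are hypothetical, not Facebook’s actual system:

```python
from collections import defaultdict

# Hypothetical login records: (account, source IP address).
logins = [
    ("alice", "203.0.113.7"),
    ("bob",   "203.0.113.7"),
    ("carol", "198.51.100.2"),
    ("alice", "203.0.113.7"),
]

# Group accounts by the IP address they connect from.
by_ip = defaultdict(set)
for user, ip in logins:
    by_ip[ip].add(user)

# Any IP shared by multiple accounts suggests a candidate household.
households = [users for users in by_ip.values() if len(users) > 1]
print(households)  # [{'alice', 'bob'}]
```

In the patent, this co-location signal would be only one input, combined with face recognition and text analysis of captions and tags, to score how likely two accounts are to live together.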

Australia’s near-real-time facial recognition system, chilling effects

Civil rights groups have warned a vast, powerful system allowing the near real-time matching of citizens’ facial images risks a “profound chilling effect” on protest and dissent.

The technology – known in shorthand as “the capability” – collects and pools facial imagery from various state and federal government sources, including driver’s licences, passports and visas.

The biometric information can then rapidly – almost in real time – be compared with other sources, such as CCTV footage, to match identities.

The system, chiefly controlled by the federal Department of Home Affairs, is designed to give intelligence and security agencies a powerful tool to deter identity crime, and quickly identify terror and crime suspects.

But it has prompted serious concern among academics, human rights groups and privacy experts. The system sweeps up and processes citizens’ sensitive biometric information regardless of whether they have committed or are suspected of an offence.

Chinese ‘Gait Recognition’ Tech IDs People By How They Walk; Police Have Started Using It on Streets of Beijing and Shanghai

Already used by police on the streets of Beijing and Shanghai, “gait recognition” is part of a push across China to develop artificial-intelligence and data-driven surveillance that is raising concern about how far the technology will go. Huang Yongzhen, the CEO of Watrix, said that its system can identify people from up to 50 meters (165 feet) away, even with their back turned or face covered. This can fill a gap in facial recognition, which needs close-up, high-resolution images of a person’s face to work. “You don’t need people’s cooperation for us to be able to recognize their identity,” Huang said in an interview in his Beijing office. “Gait analysis can’t be fooled by simply limping, walking with splayed feet or hunching over, because we’re analyzing all the features of an entire body.”

With 5G, you won’t just be watching video. It’ll be watching you, too

What happens when movies can direct themselves? Remember the last time you felt terrified during a horror movie? Take that moment, and all the suspense leading up to it, and imagine it individually calibrated for you. It’s a terror plot morphing in real time, adjusting the story to your level of attention to lull you into a comfort zone before unleashing a personally timed jumpscare.

Or maybe being scared witless isn’t your idea of fun. Think of a rom-com that keeps itself from going off the rails when it sees you rolling your eyes. Or maybe it tweaks the eye color of that character finally finding true love so it’s closer to your own, a personalized subtlety to make the love-struck protagonist more relatable.

You can thank (or curse) 5G for that.

When most people think of 5G, they’re envisioning an ultra-fast, high-bandwidth connection that lets you download seasons of your favorite shows in minutes. But 5G’s possibilities go way beyond that, potentially reinventing how we watch video, and opening up a mess of privacy uncertainties.

“Right now you make a video much the same way you did for TV,” Dan Garraway, co-founder of interactive video company Wirewax, said in an interview this month. “The dramatic thing is when you turn video into a two-way conversation. Your audience is touching and interacting inside the experience and making things happen as a result.” The personalized horror flick or tailored rom-com? They would hinge on interactive video layers that use emotional analysis based on your phone’s front-facing camera to adjust what you’re watching in real time. You may think it’s far-fetched, but one of the key traits of 5G is an ultra-responsive connection with virtually no lag, meaning the network and systems would be fast enough to react to your physical responses.

Before you cast a skeptical eye at 5G, consider how the last explosion of mobile connectivity, from 3G to 4G LTE, changed how we consumed video. Being able to watch — and in YouTube’s case, upload — video on a mobile device reimagined how we watch TV and the types of programming that are big business. A decade ago, when Netflix was about two years into its transition to streaming from DVD mailings, its annual revenue was $1.4 billion. This year it’s on track for more than 10 times that ($15.806 billion).

5G’s mobility can bring video experiences to new locations. Spare gives the example straight out of Minority Report, of entering a Gap retail store and being greeted by name. But taken further, the store could develop a three-dimensional video concierge for your phone — a pseudo-hologram that helps you find what you’re looking for. With 5G’s ability to make virtual and augmented reality more accessible, you could get a snapshot of what an outfit might look like on you without having to try it on.

Where things get crazy — and creepy — is imagining how 5G enables video to react to your involuntary cues and all the data you unconsciously provide. A show could mimic the weather or time of day to more closely match the atmosphere in real life.

For all the eye-popping possibilities, 5G unleashes a tangle of privacy questions. 5G could leverage every piece of visual information a phone can see on cameras front and back in real time. This level of visual imagery collection could pave the way for video interaction to happen completely automatically.

It’s also a potential privacy nightmare. But the lure of billions of dollars has already encouraged companies to make privacy compromises.

And that may make it feel like your personalized horror show is already here.

Facial recognition used to identify and catalogue animals

Salmon are just the latest entry in a growing cornucopia of animal faces loaded into databases. For some animals, the biometric data gathered from them is being used to aid in conservation efforts. For others, the resulting AI could help ward off poachers. While partly creepy and partly very cute, monitoring of these animals can both help protect their populations and ensure safe, traceable livestock for developing communities…

U.K. researchers are using online resources like Flickr and Instagram to help build and strengthen a database that will eventually help track global tiger populations in real time. Once collected, the photos are analyzed by everyday people in a free app called Wildsense… The mighty lion is being surveilled too. Conservationists and wildlife teachers are using facial recognition to keep tabs on a database of over 1,000 lions… Wildlife experts are tracking elephants to protect them from encroaching poachers. Using Google’s Cloud AutoML Vision machine learning software, the technology will uniquely identify elephants in the wild. According to the Evening Standard, the tech will even send out an alert if it detects poachers in the same frame.

The story of whale facial tracking is one of crowdsourcing success. After struggling to distinguish specific whales from one another on his own, marine biologist Christian Khan uploaded the photos to data-competition site Kaggle and, within four months, data-science company Deepsense was able to detect individual whale faces with 87% accuracy. Since then, detection rates have steadily improved and are helping conservationists track and monitor the struggling aquatic giant.

U.S. researchers are trying to protect “the world’s most endangered animal” with LemurFaceID, which is able to accurately differentiate between two lemur faces with 97% accuracy. But “In the livestock surveillance arms race China is definitely leading the charge,” the article notes, citing e-commerce giant JD.com and its use of facial recognition to monitor herds of pigs to detect their age, weight, and diet.

And one Chinese company even offers a blockchain-based chicken tracking system (codenamed “GoGo Chicken”) with an app that can link a grocery store chicken to “its birthplace, what food it ate and how many steps it walked during its life.”

UK Police Plan To Deploy ‘Staggeringly Inaccurate’ Facial Recognition in London

Millions of people face the prospect of being scanned by police facial recognition technology that has sparked human rights concerns. The controversial software, which officers use to identify suspects, has been found to be “staggeringly inaccurate”, while campaigners have branded its use a violation of privacy. But Britain’s largest police force is set to expand a trial across six locations in London over the coming months.

Police leaders claimed that officers make the final decision to act on potential matches with police records, and that images which do not spark an alert are immediately deleted. But last month The Independent revealed the Metropolitan Police’s software was returning “false positives” — images of people who were not on a police database — in 98 percent of alerts… Detective Superintendent Bernie Galopin, the lead on facial recognition for London’s Metropolitan Police, said the operation was targeting wanted suspects to help reduce violent crime and make the area safer. “It allows us to deal with persons that are wanted by police where traditional methods may have failed,” he told The Independent, after statistics showed police were failing to solve 63 per cent of knife crimes committed against under-25s….

Det Supt Galopin said the Met was assessing how effective facial recognition was at tackling different challenges in British policing, which is currently being stretched by budget cuts, falling officer numbers, rising demand and the terror threat.

A policy officer from the National Council for Civil Liberties called the technology “lawless,” adding “the use of this technology in a public place is not compatible with privacy, and has a chilling effect on society.”

But a Home Office minister said the technology was vital for protecting people from terrorism, though “we must ensure that privacy is respected. This strategy makes clear that we will grasp the opportunities that technology brings while remaining committed to strengthening safeguards.”

High School in China Installs Facial Recognition Cameras to Monitor Students’ Attentiveness

A high school in Hangzhou City, Zhejiang Province located on the eastern coast of China, has employed facial recognition technology to monitor students’ attentiveness in class.

At Hangzhou Number 11 High School, three cameras at the front of the classroom scan students’ faces every 30 seconds, analyzing their facial expressions to detect their mood, according to a May 16 report in the state-run newspaper The Paper.

The different moods—surprised, sad, antipathy, angry, happy, afraid, neutral—are recorded and averaged during each class.

A display screen, only visible to the teacher, shows the data in real time. A student whose score falls below a certain value is flagged as not paying enough attention.

A video shot by Zhejiang Daily Press revealed that the system—coined the “smart classroom behavior management system” by the school—also analyzes students’ actions, categorized into: reading, listening, writing, standing up, raising hands, and leaning on the desk.

An electronic screen also displays a list of student names deemed “not paying attention.”

The school began using the technology at the end of March, vice principal Zhang Guanchao told The Paper. Zhang added that students felt like they were being monitored when the system was first put in place, but have since gotten used to it.

London cops are using an unregulated, 98% inaccurate facial recognition tech

The London Metropolitan Police use a facial recognition system whose alerts have a 98% false positive rate; people falsely identified by the system are stopped, questioned and treated with suspicion.

The UK has a biometrics commissioner, Professor Paul Wiles, who laments the lack of any regulation of this technology, calling it “urgently needed”; these regulations are long promised, incredibly overdue, and the Home Office admits that they’re likely to be delayed beyond their revised June publication date.

The Met say that they don’t “arrest” people who are erroneously identified by the system. Rather, they “detain” them by refusing to allow them to leave and subjecting them to searches, etc.

Incredibly, the Met’s system is even worse than the South Wales Police’s facial recognition system, which has a comparatively impressive 92% failure rate.
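These quoted failure rates are rates among alerts, and they follow naturally from base-rate arithmetic: when almost nobody in a scanned crowd is actually on a watchlist, even a fairly accurate matcher produces alerts that are overwhelmingly false. A back-of-the-envelope sketch (all numbers below are illustrative assumptions, not the Met’s actual figures):

```python
# Illustrative base-rate arithmetic; crowd size and accuracy are assumptions.
crowd = 100_000          # faces scanned
watchlisted = 20         # people in the crowd who are on the watchlist
tpr = 0.90               # chance a watchlisted face triggers an alert
fpr = 0.01               # chance an innocent face triggers an alert

true_alerts = watchlisted * tpr              # 18 correct alerts
false_alerts = (crowd - watchlisted) * fpr   # ~1,000 false alerts
share_wrong = false_alerts / (true_alerts + false_alerts)
print(f"{share_wrong:.0%} of alerts are false positives")  # ~98%
```

Cutting the per-face false-alert rate tenfold in this toy example would still leave the large majority of alerts wrong, which is why critics focus on base rates rather than headline accuracy claims.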

Facebook silently enables facial recognition abilities for users outside EU and Canada

Facebook is now informing users around the world that it’s rolling out facial recognition features. In December, we reported the features would be coming to the platform; that roll out finally appears to have begun. It should be noted that users in the European Union and Canada will not be notified because laws restrict this type of activity in those areas.

With the new tools, you’ll be able to find photos that you’re in but haven’t been tagged in; they’ll help you protect yourself against strangers using your photo; and Facebook will be able to tell people with visual impairments who’s in their photos and videos. Facebook warns that the feature is enabled by default but can be switched off at any time; additionally, the firm says it may add new capabilities at any time.

While Facebook may want its users to “feel confident” uploading pictures online, it will likely give many others the heebie-jeebies when they think of the colossal database of faces that Facebook holds and what it could do with all that data. Even non-users should be cautious about which photos they appear in if they don’t want to be caught up in Facebook’s web of data.
