Resources

A Researcher Attempted To Opt Out of Facial Recognition at the Airport — It Wasn’t Easy

The announcement came as we began to board. Last month, I was at Detroit’s Metro Airport for a connecting flight to Southeast Asia. I listened as a Delta Air Lines staff member informed passengers that the boarding process would use facial recognition instead of passport scanners. As a privacy-conscious person, I was uncomfortable boarding this way. I also knew I could opt out. Presumably, most of my fellow fliers did not: I didn’t hear a single announcement alerting passengers how to avoid the face scanners.

To figure out how to do so, I had to leave the boarding line, speak with a Delta representative at their information desk, get back in line, then request a passport scan when it was my turn to board. Federal agencies and airlines claim that facial recognition is an opt-out system, but my recent experience suggests they are incentivizing travelers to have their faces scanned — and disincentivizing them from sidestepping the tech — by not clearly communicating alternative options. Last year, a Delta customer service representative reported that only 2 percent of customers opt out of facial recognition. It’s easy to see why.


Facial recognition used to identify and catalogue animals

Salmon are just the latest entry in a growing cornucopia of animal faces loaded into databases. For some animals, the biometric data gathered is being used to aid conservation efforts. For others, the resulting AI could help ward off poachers. Partly creepy and partly very cute, monitoring these animals can both help protect their populations and ensure safe, traceable livestock for developing communities…

U.K. researchers are using online resources like Flickr and Instagram to help build and strengthen a database that will eventually help track global tiger populations in real time. Once collected, the photos are analyzed by everyday people in a free app called Wildsense… The mighty lion is being surveilled too. Conservationists and wildlife researchers are using facial recognition to keep tabs on a database of over 1,000 lions… Wildlife experts are tracking elephants to protect them from encroaching poachers. The technology, built on Google’s Cloud AutoML Vision machine-learning software, will uniquely identify elephants in the wild. According to the Evening Standard, it will even send out an alert if it detects poachers in the same frame.

The story of whale facial tracking is one of crowdsourcing success. After struggling to distinguish individual whales on her own, marine biologist Christin Khan uploaded the photos to the data-competition site Kaggle and, within four months, the data-science company Deepsense was able to identify individual whale faces with 87% accuracy. Since then, detection rates have steadily improved and are helping conservationists track and monitor the struggling aquatic giants.

U.S. researchers are trying to protect “the world’s most endangered animal” with LemurFaceID, which can differentiate between two lemur faces with 97% accuracy. But “In the livestock surveillance arms race China is definitely leading the charge,” the article notes, citing e-commerce giant JD.com and its use of facial recognition to monitor herds of pigs to detect their age, weight, and diet.

And one Chinese company even offers a blockchain-based chicken tracking system (codenamed “GoGo Chicken”) with an app that can link a grocery store chicken to “its birthplace, what food it ate and how many steps it walked during its life.”


Artificial intelligence can create a 3D model of a person—from just a few seconds of video

Artificial intelligence has been used to create 3D models of people’s bodies for virtual reality avatars, surveillance, visualizing fashion, or movies. But it typically requires special camera equipment to detect depth or to view someone from multiple angles. A new algorithm creates 3D models using standard video footage from one angle.

The system has three stages. First, it analyzes a video a few seconds long of someone moving—preferably turning 360° to show all sides—and for each frame creates a silhouette separating the person from the background. Based on machine learning techniques—in which computers learn a task from many examples—it roughly estimates the 3D body shape and location of joints. In the second stage, it “unposes” the virtual human created from each frame, making them all stand with arms out in a T shape, and combines information about the T-posed people into one, more accurate model. Finally, in the third stage, it applies color and texture to the model based on recorded hair, clothing, and skin.
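The “unpose and fuse” idea in the second stage can be illustrated numerically: if each frame observes the same body under a known rotation, rotating every observation back to a canonical pose and averaging cancels most of the per-frame noise. Below is a toy 2D sketch of that principle only — the landmark points, noise level, and assumption of known per-frame angles are all illustrative, not the paper’s actual method, which fits a full statistical 3D body model and estimates the pose itself.

```python
import numpy as np

np.random.seed(0)

# Canonical 2D "body": a few landmark points (purely illustrative).
shape = np.array([[0.0, 1.0], [0.5, 0.0], [-0.5, 0.0], [0.0, -1.0]])

def rotate(points, angle):
    """Rotate a set of 2D points about the origin."""
    c, s = np.cos(angle), np.sin(angle)
    return points @ np.array([[c, -s], [s, c]]).T

# Stage 1 stand-in: twelve noisy observations of the person turning
# 360 degrees, each tagged with its (here, assumed known) rotation.
angles = np.linspace(0, 2 * np.pi, 12, endpoint=False)
frames = [rotate(shape, a) + np.random.normal(0, 0.05, shape.shape)
          for a in angles]

# Stage 2: "unpose" every frame back to the canonical orientation,
# then fuse the unposed estimates into one, more accurate model.
unposed = [rotate(f, -a) for f, a in zip(frames, angles)]
model = np.asarray(unposed).mean(axis=0)

# Fusing shrinks the per-frame noise roughly by the square root of
# the number of frames.
print(np.abs(model - shape).max())
```

In the real system the canonical pose is the T-pose described above, the per-frame estimates come from a learned model rather than being given, and a third stage paints color and texture onto the fused mesh.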

The researchers tested the method with a variety of body shapes, clothing, and backgrounds and found that it was accurate to within 5 millimeters on average, results they will report in June at the Computer Vision and Pattern Recognition conference in Salt Lake City. The system can also reproduce the folding and wrinkles of fabric, but it struggles with skirts and long hair. With a model of you, the researchers can change your weight, clothing, and pose—and even make you perform a perfect pirouette. No practice necessary.
