Resources

Phones Can Now Tell Who Is Carrying Them From Their Users’ Gaits

Most online fraud involves identity theft, which is why businesses that operate on the web have a keen interest in distinguishing impersonators from genuine customers. Passwords help, but many can be guessed or are jotted down imprudently. Newer phones, tablets, laptops and desktop computers often have beefed-up security with fingerprint and facial recognition, but these can be spoofed. To overcome these shortcomings, the next level of security is likely to identify people by things which are harder to copy, such as the way they walk.

Many online security services already use a system called device fingerprinting. This employs software to note things like the model of the gadget employed by a particular user; its hardware configuration; its operating system; the apps which have been downloaded onto it; and other features, including sometimes the Wi-Fi networks it regularly connects through and devices like headsets it plugs into.
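A minimal sketch of the device-fingerprinting idea, in Python. The attribute names below are invented for illustration, not the fields any real service collects, and a production system would weight and fuzzy-match attributes rather than hash them exactly:

```python
import hashlib
import json

def device_fingerprint(attributes: dict) -> str:
    """Combine observable device attributes into a stable identifier.

    Toy version: canonicalize the attributes so the same device
    always produces the same hash, regardless of key order.
    """
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical attributes of one user's phone.
fp = device_fingerprint({
    "model": "Pixel 7",
    "os": "Android 14",
    "screen": "1080x2400",
    "apps": ["bank-app", "maps", "mail"],
    "wifi_networks": ["home-ssid", "office-ssid"],
})
print(fp[:16])  # short prefix of the fingerprint
```

Because the hash is exact, changing any single attribute (say, after an OS update) yields an entirely new fingerprint, which is one reason real systems score similarity between attribute sets rather than hashing them.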

LexisNexis Risk Solutions, an American analytics firm, has catalogued more than 4 billion phones, tablets and other computers in this way for banks and other clients. Roughly 7% of them have been used for shenanigans of some sort. But device fingerprinting is becoming less useful. Apple, Google and other makers of equipment and operating systems have been steadily restricting the range of attributes that can be observed remotely. That is why a new approach, behavioral biometrics, is gaining ground. It relies on the wealth of measurements made by today’s devices. These include data from accelerometers and gyroscopic sensors that reveal how people hold their phones when using them, how they carry them and even the way they walk. Touchscreens, keyboards and mice can be monitored to show the distinctive ways in which someone’s fingers and hands move. Sensors can detect whether a phone has been set down on a hard surface such as a table or dropped lightly on a soft one such as a bed. If the hour is appropriate, this action could be used to infer that a user has retired for the night. These traits can then be used to determine whether someone attempting to make a transaction is likely to be the device’s habitual user.
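To make the accelerometer example concrete, here is a toy sketch that reduces a window of acceleration-magnitude samples to a few crude gait features. Real behavioral-biometric systems use far richer signal processing and machine-learned models than these hand-picked statistics:

```python
import statistics

def gait_features(accel_magnitudes, sample_rate_hz=50):
    """Summarize a window of accelerometer magnitude samples (m/s^2)
    into simple gait features. Toy version for illustration only."""
    mean = statistics.fmean(accel_magnitudes)
    # Approximate step events as upward crossings of the mean level.
    steps = sum(
        1 for a, b in zip(accel_magnitudes, accel_magnitudes[1:])
        if a < mean <= b
    )
    duration_s = len(accel_magnitudes) / sample_rate_hz
    return {
        "mean_accel": mean,
        "stddev_accel": statistics.stdev(accel_magnitudes),
        "cadence_steps_per_s": steps / duration_s,
    }

# Synthetic "walking" signal: one acceleration bump every five samples.
window = [9.0, 10.0, 11.0, 10.0, 9.0] * 20
print(gait_features(window))
```

A verifier would compare such features against the enrolled user’s stored profile within some tolerance, flagging a transaction when the carrier’s gait doesn’t match.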

Used wisely, the report says, behavioral biometrics could authenticate account-holders without badgering them for additional passwords or security questions; it could even unlock the doors of a vehicle once the gait of the driver, as measured by his phone, is recognized, for example.

“Used unwisely, however, the system could become yet another electronic spy, permitting complete strangers to monitor your actions, from the moment you reach for your phone in the morning, to when you fling it on the floor at night,” the report adds.

Thanks To Facebook, Your Cellphone Company Is Watching You More Closely Than Ever

A confidential Facebook document reviewed by The Intercept shows that Facebook courts carriers, along with phone makers — some 100 different companies in 50 countries — by offering the use of even more surveillance data, pulled straight from your smartphone by Facebook itself.

Offered to select Facebook partners, the data includes not just technical information about Facebook members’ devices and use of Wi-Fi and cellular networks, but also their past locations, interests, and even their social groups. This data is sourced not just from the company’s main iOS and Android apps, but from Instagram and Messenger as well. The data has been used by Facebook partners to assess their standing against competitors, including customers lost to and won from them, but also for more controversial uses like racially targeted ads.

Some experts are particularly alarmed that Facebook has marketed the use of the information — and appears to have helped directly facilitate its use, along with other Facebook data — for the purpose of screening customers on the basis of likely creditworthiness. Such use could potentially run afoul of federal law, which tightly governs credit assessments. Facebook said it does not provide creditworthiness services and that the data it provides to cellphone carriers and makers does not go beyond what it was already collecting for other uses.

Police using Google Images + Facial Recognition

“The New York Police Department used a photo of Woody Harrelson in its facial recognition program in an attempt to identify a beer thief who looked like the actor,” reports the Associated Press:

Georgetown University’s Center on Privacy and Technology highlighted the April 2017 episode in “Garbage In, Garbage Out,” a report on what it says are flawed practices in law enforcement’s use of facial recognition. The report says security footage of the thief was too pixelated and produced no matches while high-quality images of Harrelson, a three-time Oscar nominee, returned several possible matches and led to one arrest.

The NYPD also used a photo of a New York Knicks player to search its database for a man wanted for a Brooklyn assault, the report said.

“The stakes are too high in criminal investigations to rely on unreliable — or wrong — inputs,” Georgetown researcher Clare Garvie wrote…. The Georgetown report says facial recognition has helped the NYPD crack about 2,900 cases in more than five years of using the technology.

And in Florida, Vice reports, law enforcement agencies “run roughly 8,000 of these searches per month.”

Google Uses Gmail To Track a History of Things You Buy — and It’s Hard To Delete

CNBC’s Todd Haselton has discovered that Google saves years of information on the purchases you’ve made, even outside Google, and pulls this information from Gmail.

A page called “Purchases” shows an accurate list of many — though not all — of the things I’ve bought dating back to at least 2012. I made these purchases using online services or apps such as Amazon, DoorDash or Seamless, or in stores such as Macy’s, but never directly through Google. But because the digital receipts went to my Gmail account, Google has a list of info about my buying habits. Google even knows about things I long forgot I’d purchased, like dress shoes I bought inside a Macy’s store on Sept. 14, 2015.

But there isn’t an easy way to remove all of this. You can delete all the receipts in your Gmail inbox and archived messages. But, if you’re like me, you might save receipts in Gmail in case you need them later for returns. There is no way to delete them from Purchases without also deleting them from Gmail — when you click on the “Delete” option in Purchases, it simply guides you back to the Gmail message. Google’s privacy page says that only you can view your purchases. But it says “Information about your orders may also be saved with your activity in other Google services ” and that you can see and delete this information on a separate “My Activity” page. Except you can’t. Google’s activity controls page doesn’t give you any ability to manage the data it stores on Purchases.

Google says you can turn off the tracking entirely, but when CNBC tried this, it didn’t work.

Facebook Contractors Categorize Your Private Posts To Train AI

Facebook uses thousands of third-party staffers around the world to look at Facebook and Instagram posts to help train its AI and to inform new products. “But because the contractors see users’ public and private posts, some view it as a violation of privacy.”

According to Reuters, as many as 260 contract workers in Hyderabad, India have spent more than a year labeling millions of Facebook posts dating back to 2014. They look for the subject of the post, the occasion, and the author’s intent, and, Facebook told Reuters, the company uses that information to develop new features and to potentially increase usage and ad revenue. Around the globe, Facebook has as many as 200 similar content labeling projects, many of which are used to train the company’s AI.

The contractors working in Hyderabad told Reuters they see everything from text-based status updates to videos, photos and Stories across Facebook and Instagram — including those that are shared privately. And even as Facebook embarks on its “the future is private” platform, one Facebook employee told Reuters he can’t imagine the practice going away. It’s a core part of training AI and developing the company’s products.

How Amazon’s Facial-Recognition Technology is Supercharging Local Police

Deputies in this corner of western Oregon outside ultraliberal Portland used to track down criminals the old-fashioned way, faxing caught-on-camera images of a suspect around the office in hope that someone might recognize the face. Then, in late 2017, the Washington County Sheriff’s Office became the first law enforcement agency in the country known to use Amazon’s artificial-intelligence tool Rekognition, transforming this thicket of forests and suburbs into a public testing ground for a new wave of experimental police surveillance techniques. Almost overnight, deputies saw their investigative powers supercharged, allowing them to scan for matches of a suspect’s face across more than 300,000 mug shots taken at the county jail since 2001. A grainy picture of someone’s face — captured by a security camera, a social media account or a deputy’s smartphone — can quickly become a link to their identity, including their name, family and address.

More than 1,000 facial-recognition searches were logged last year, said deputies, who sometimes used the results to find a suspect’s Facebook page or visit their home… “Just like any of our investigative techniques, we don’t tell people how we catch them,” said Robert Rookhuyzen, a detective on the agency’s major crimes team who said he has run “several dozen” searches and found it helpful about 75% of the time. “We want them to keep guessing…”

But lawyers in Oregon said the technology should not be, as many see it, an imminent step forward for the future of policing, and they frame the system not as a technical milestone but a moral one: Is it OK to nab more bad guys if more good guys might get arrested, too? “People love to always say, ‘Hey, if it’s catching bad people, great, who cares,’ ” said Joshua Crowther, a chief deputy defender in Oregon, “until they’re on the other end.”

How Much Do Amazon Deliveries Contribute To Global Warming?

It’s no coincidence that the number of trucks on the road has doubled since Amazon launched in 1994. That’s a huge deal for the climate, as Vox reported last year in an article on the environmental impact of online shopping: “In 2016, transportation overtook power plants as the top producer of carbon dioxide emissions in the US for the first time since 1979. Nearly a quarter of the transportation footprint comes from medium- and heavy-duty trucks. And increasingly the impact is coming in what people in the world of supply-chain logistics call ‘the last mile,’ meaning the final stretch from a distribution center to a package’s destination. (The ‘last mile’ can in truth be a dozen miles or more.)”

The good news is that e-commerce has the potential to be less carbon-intensive than brick-and-mortar retail. As Anne Goodchild, director of the University of Washington’s Supply Chain Transportation and Logistics Center, told BuzzFeed News, delivery trucks emit “between 20% and 75% less carbon dioxide per customer on average than passenger vehicles driving to [stores].” But that’s only if online stores choose the delivery times themselves. That way, they can pack trucks full of goods and optimize their routes. “When customers choose,” Goodchild noted, “the carbon savings are significantly smaller.”

Thus, Amazon could significantly cut its carbon footprint by prioritizing shipping optimization over consumer convenience…. Americans will have to begin thinking of Amazon.com and other e-commerce sites not as on-demand delivery services for every little thing, but stores that require just as much forethought as a trip to the mall did twenty years ago. And that might be too much to ask of the average consumer in the digital age. In which case, the government might have to step in.

Amazon’s biggest carbon impact comes from its AWS cloud servers, though by the end of 2018 the company had already converted 50% of that to renewable energy, according to the article. And more green efforts may be coming. “For the past eight years, Bezos has ignored requests from the nonprofit Carbon Disclosure Project, which collects the carbon footprint data of large corporations. But last month, he agreed to eventually make the company’s emissions data public. It’s expected later this year.”

The article also raises the possibility of a future with delivery drones powered by renewable energy. But it adds that until that day comes, expecting deliveries within 48 hours “is incompatible with solving global warming.”

Uber Admits It Wants To Take Riders Away From Public Transit

“Uber took down the taxi industry and now it wants a piece of public transit,” reports CNN, in an article shared by dryriver:
For years, as it aggressively entered new markets, Uber has maintained that it is a complement and ally of public transit. But that messaging changed earlier this month, when Uber released its S-1 ahead of its upcoming initial public offering. In the regulatory filing, Uber said its growth depends on better competing with public transportation, which it identifies as a $1 trillion market it can grab a share of over the long-term. Uber, which lost $1.8 billion in 2018, said it offers incentives to drivers to scale up its network to attract riders away from personal vehicles and public transportation.

Transportation experts say that if Uber grabs a big chunk of its target market — 4.4 trillion passenger miles on public transportation in the 63 countries in which it operates — cities would grind to a halt, as there would literally be no space to move on streets….

Uber’s rival Lyft didn’t describe public transportation as a competitor in its S-1. But while the corporate mission may be different, in practice there’s little difference, experts say.

“Try to imagine the island of Manhattan, and everyone taking the subway being in a rideshare. It just doesn’t function….” said Christof Spieler, who teaches transportation at Rice University and wrote the book Trains, Buses, People. “It’s a world in which large cities essentially break down.”

And transportation consultant Jarrett Walker tells CNN that while it may make business sense for Uber and Lyft to pursue this strategy, “it may also be a strategy that’s destroying the world.”

‘They’re Basically Lying’ – Mental Health Apps Caught Secretly Sharing Data

“Free apps marketed to people with depression or who want to quit smoking are hemorrhaging user data to third parties like Facebook and Google — but often don’t admit it in their privacy policies, a new study reports…” writes The Verge.

“You don’t have to be a user of Facebook’s or Google’s services for them to have enough breadcrumbs to ID you,” warns Slashdot schwit1. From the article:
By intercepting the data transmissions, they discovered that 92 percent of the 36 apps shared the data with at least one third party — mostly Facebook- and Google-run services that help with marketing, advertising, or data analytics. (Facebook and Google did not immediately respond to requests for comment.) But about half of those apps didn’t disclose that third-party data sharing, for a few different reasons: nine apps didn’t have a privacy policy at all; five apps did but didn’t say the data would be shared this way; and three apps actively said that this kind of data sharing wouldn’t happen. Those last three are the ones that stood out to Steven Chan, a physician at Veterans Affairs Palo Alto Health Care System, who has collaborated with Torous in the past but wasn’t involved in the new study. “They’re basically lying,” he says of the apps.

Part of the problem is the business model for free apps, the study authors write: since insurance might not pay for an app that helps users quit smoking, for example, the only ways for a free app developer to stay afloat are to either sell subscriptions or sell data. And if that app is branded as a wellness tool, the developers can skirt laws intended to keep medical information private.

A few apps even shared what The Verge calls “very sensitive information” like self-reports about substance use and user names.

10% of Twitter Users Create 80% of the Tweets

In the United States, 10% of Twitter’s users create 80% of its tweets, according to a new study by the Pew Research Center:
The median user in the top 10% by tweet volume creates 138 tweets per month, “favorites” 70 posts per month, follows 456 accounts, and has 387 followers. By comparison, the median user in the bottom 90% of tweeters creates just two tweets per month, “favorites” one post per month, follows 74 accounts, and has 19 followers. And when asked to report how often they use the platform, fully 81% of these highly active tweeters say they do so every day; 47% of other Twitter users visit the platform with this regularity…

Twitter users also tend to have higher levels of household income and educational attainment relative to the general adult population. Some 42% of adult Twitter users have at least a bachelor’s degree — 11 percentage points higher than the overall share of the public with this level of education (31%). Similarly, the number of adult Twitter users reporting a household income above $75,000 is 9 points greater than the same figure in the general population: 41% vs. 32%.

Hacker Can Monitor Cars And Kill Their Engines After Breaking Into GPS Tracking Apps

A hacker broke into thousands of accounts belonging to users of two GPS tracker apps, giving him the ability to monitor the locations of tens of thousands of vehicles and even turn off the engines for some of them while they were in motion. The hacker, who goes by the name L&M, told Motherboard he hacked into more than 7,000 iTrack accounts and more than 20,000 ProTrack accounts, two apps that companies use to monitor and manage fleets of vehicles through GPS tracking devices. The hacker was able to track vehicles in a handful of countries around the world, including South Africa, Morocco, India, and the Philippines. On some cars, the software has the capability of remotely turning off the engines of vehicles that are stopped or are traveling 12 miles per hour or slower, according to the manufacturer of certain GPS tracking devices.

By reverse engineering ProTrack and iTrack’s Android apps, L&M said he realized that all customers are given a default password of 123456 when they sign up. At that point, the hacker said he brute-forced ‘millions of usernames’ via the apps’ API. Then, he said he wrote a script to attempt to login using those usernames and the default password. This allowed him to automatically break into thousands of accounts that were using the default password and extract data from them.
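The flaw described above can be illustrated with a toy simulation. The account store and login function below are mocks standing in for the tracking services’ backend, not their real API; the point is that a single known default password turns username enumeration into account takeover:

```python
DEFAULT_PASSWORD = "123456"

# Mock account store standing in for the service's backend;
# values are the passwords each (hypothetical) user actually set.
accounts = {
    "fleet_alice": "123456",     # never changed the default
    "fleet_bob": "s3cure-pw!",   # changed it
    "fleet_carol": "123456",
}

def login(username, password):
    """Stand-in for the apps' login endpoint."""
    return accounts.get(username) == password

# The attack pattern described above: try only the known default
# password against every enumerated username.
compromised = [u for u in accounts if login(u, DEFAULT_PASSWORD)]
print(compromised)  # ['fleet_alice', 'fleet_carol']
```

The standard mitigations are exactly what the apps lacked: force a password change on first login, rate-limit authentication attempts, and never ship one shared default credential.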

Applying For Your Next Job May Be an Automated Nightmare

If you think looking for a job is already daunting, anxiety-riddled, and unpleasant, just wait until the algorithms take over the hiring process. When they do, a newfangled ‘digital recruiter’ like VCV, which just received $1.7 million in early investment, hopes it will look something like this: First, a search bot will be used to scan CVs by the thousands, yours presumably among them. If it’s picked out of the haystack, you will be contacted by a chatbot. Over SMS, the bot will set an appointment for a phone interview, which will be conducted by an automated system enabled by voice recognition AI. Next, the system will ask you, the applicant, to record video responses to a set of predetermined interview questions. Finally, the program can use facial recognition and predictive analytics to complete the screening, algorithmically determining whether the nervousness, mood, and behavior patterns you exhibit make you a fit for the company. If you pass all that, then you will be recommended for an in-person job interview.

[…] VCV, which did not respond to a request for comment, is far from alone here. A growing suite of startups is pitching AI-driven recruitment services, promising to save corporations millions of dollars throughout the hiring process by reducing overhead, to pluck more ideal candidates out of obscurity, and to reduce bias in the hiring process. Most offer little to no evidence of how they actually do so. VCV’s much-larger competitor, HireVue, which has raked in a staggering $93 million in funding and is backed by top-tier Silicon Valley venture capital firms like Sequoia, is hawking many of the same services. It counts 700 companies as its clients, including, it says, Urban Outfitters, Intel, Honeywell, and Unilever. AllyO, which was founded in 2015, and “utilizes deep workflow conversational AI to fully automate end to end recruiting workflow” has $19 million in backing.

The Feds Are Dropping Child Porn Cases Instead of Revealing Their Surveillance Systems

The Department of Justice has been dismissing child pornography cases in order to not reveal information about the software programs used as the basis for the charges. An array of cases suggest serious problems with the tech tools used by federal authorities. But the private entities who developed these tools won’t submit them for independent inspection, nor will they hand over information about how they work, their error rates, or other critical details. As a result, potentially innocent people are being smeared as pedophiles and prosecuted as child porn collectors, while potentially guilty people are going free so these companies can protect “trade secrets.” The situation illustrates some of the many problems that can arise from public-private partnerships in catching criminals and the secretive digital surveillance software they entail (software that’s being employed for far more than catching child predators).

With the child pornography cases, “the defendants are hardly the most sympathetic,” notes Tim Cushing at Techdirt. Yet that’s all the more reason why the government’s antics here are disturbing. Either the feds initially brought bad cases against people whom they just didn’t think would fight back, or they’re willing to let bad behavior go rather than face some public scrutiny. An extensive investigation by ProPublica “found more than a dozen cases since 2011 that were dismissed either because of challenges to the software’s findings, or the refusal by the government or the maker to share the computer programs with defense attorneys, or both,” writes Jack Gillum. Many more cases raised issues with the software as a defense. “Defense attorneys have long complained that the government’s secrecy claims may hamstring suspects seeking to prove that the software wrongly identified them,” notes Gillum. “But the growing success of their counterattack is also raising concerns that, by questioning the software used by investigators, some who trade in child pornography can avoid punishment.”

Scientists Have Developed a Brain Implant That Can Read People’s Minds

The team at the University of California, San Francisco says the technology is “exhilarating.” They add that their findings, published in the journal Nature, could help people when disease robs them of their ability to talk. The mind-reading technology works in two stages. First an electrode is implanted in the brain to pick up the electrical signals that maneuver the lips, tongue, voice box and jaw. Then powerful computing is used to simulate how the movements in the mouth and throat would form different sounds. This results in synthesized speech coming out of a “virtual vocal tract.”

Why Energy Is A Big And Rapidly Growing Problem For Data Centers

U.S. data centers use more than 90 billion kilowatt-hours of electricity a year, requiring roughly 34 giant (500-megawatt) coal-powered plants. Global data centers used roughly 416 terawatt-hours (4.16 × 10^14 watt-hours), or about 3% of total electricity, last year, nearly 40% more than the entire United Kingdom. And this consumption will double every four years.
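The arithmetic behind the coal-plant comparison can be checked directly. The ~60% capacity factor below is an assumption on my part, since the figures above don’t state one, but with it the numbers line up:

```python
# US data centers: more than 90 billion kWh per year.
us_datacenter_kwh = 90e9

# One "giant" coal plant: 500 MW nameplate capacity. Assume it
# actually runs at a ~60% capacity factor (assumption, not from
# the source) over the 8,760 hours in a year.
plant_kw = 500_000
hours_per_year = 8760
capacity_factor = 0.60
kwh_per_plant = plant_kw * hours_per_year * capacity_factor

plants_needed = us_datacenter_kwh / kwh_per_plant
print(round(plants_needed))  # 34 plants, matching the figure above

# "Double every four years" compounds fast: ~16x in 16 years.
print(2 ** (16 / 4))  # 16.0
```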

Streaming video has already changed the game, but the explosion of artificial intelligence and internet-connected devices will change the entire landscape. AI is the future, and AI is hungry for processing power. IoT is projected to exceed 20 billion devices by 2020 (some analysts believe we will reach that number this year alone). Given there are currently 10 billion internet-connected devices, doubling that to 20 billion will require massive increases to our data center infrastructure, which will massively increase our electricity consumption.

How on earth can we possibly build all the power plants required to supply electricity to twice as many data centers in the next four years? The simple answer is that we can’t.

Facial Recognition to board a plane

A boarding technology for travelers using JetBlue is causing controversy due to a social media thread on the airline’s use of facial recognition. Last week, traveler MacKenzie Fegan described her experience with the biometric technology in a social media post that got the attention of JetBlue’s official account. She began: “I just boarded an international @JetBlue flight. Instead of scanning my boarding pass or handing over my passport, I looked into a camera before being allowed down the jet bridge. Did facial recognition replace boarding passes, unbeknownst to me? Did I consent to this?” JetBlue was ready to offer Twitterized sympathy: “You’re able to opt out of this procedure, MacKenzie. Sorry if this made you feel uncomfortable.”

But once you start thinking about these things, your thoughts become darker. Fegan wanted to know how JetBlue knew what she looked like. JetBlue explained: “The information is provided by the United States Department of Homeland Security from existing holdings.” Fegan wondered by what right a private company suddenly had her biometric data. JetBlue insisted it doesn’t have access to the data. It’s “securely transmitted to the Customs and Border Protection database.” Fegan wanted to know how this could have possibly happened so quickly. Could it be that in just a few seconds her biometric data was whipped “securely” around government departments so that she would be allowed on the plane? JetBlue referred her to an article on the subject, which was a touch on the happy-PR side. Fegan was moved, but not positively, by the phrase “there is no pre-registration required.”

Google Home calls the Police, always listening

According to ABC News, officers were called to a home outside Albuquerque, New Mexico this week when a Google Home called 911 and the operator heard a confrontation in the background. Police say that Eduardo Barros was house-sitting at the residence with his girlfriend and their daughter. Barros allegedly pulled a gun on his girlfriend when they got into an argument and asked her: “Did you call the sheriffs?” Google Home apparently heard “call the sheriffs,” and proceeded to call the sheriffs. A SWAT team arrived at the home and after negotiating for hours, they were able to take Barros into custody… “The unexpected use of this new technology to contact emergency services has possibly helped save a life,” Bernalillo County Sheriff Manuel Gonzales III said in a statement.

“It’s easy to imagine police getting tired of being called to citizens’ homes every time they watch the latest episode of Law and Order,” quips Gizmodo. But it also calls the incident “a clear reminder that smart home devices are always listening.”

Stare Into The Lights My Pretties

Our phones make us feel like social-media activists, but they’re actually turning us into bystanders

On April 9, 2017, a video of a man being dragged off a United Airlines flight was posted on the internet and went viral. But I don’t need to tell you that. Each of your most outspoken Facebook friends probably posted about the event, highlighting the aspects of it that best reinforced their worldview. The incident was covered all over American media and even sparked outrage in China.

The collective focus may have now moved on to its next source of outrage, but there was something that only a few people noticed in the moment: a plane full of quiet passengers. Other than one woman screaming, hardly anyone else on the plane seemed bothered enough by what was happening to raise a ruckus. This calm scene is a rather unlikely precursor to the uproar that unfolded hours later on Facebook and Twitter.

Instead of intervening in the assault, the passengers stoically took out their cameraphones and pointed them toward David Dao, whose body was dragged along the aisle of the airplane, glasses askew, face bloody, and belly exposed. Their immediate response was not to speak out against the outrageousness of what was going on, but to create an instant digital record of the incident.

The act of recording a violent event but staying silent is a modern manifestation of the bystander effect. The bystander effect occurs when people refrain from intervening in an emergency situation because there are other people around. Psychologists Bibb Latané and John Darley, who first demonstrated the bystander effect, attributed this phenomenon to two factors: a perceived diffusion of responsibility (thinking that someone else in the group will help) and social influence (where observers see the inaction of the group as evidence that there is no reason to intervene).

Our cameraphones may make us feel like social-media activists, but when we’re recording an event instead of intervening, we’re actually just real-world bystanders. There is a gulf of dissonance between what we publicly declare as our values—online or otherwise—and how we act.

In the past few years, there have been scores of videos depicting abuse that have been recorded and then disseminated online. In New Jersey in 2014, people watched and recorded as a woman was punched and kicked by a co-worker. (The only one who said anything was her 2-year-old child, who knew, naturally, to help.) In Philadelphia in 2016, a man was beating and punching a woman in the streets while an observer videotaped the event. Even without violence, the temptation to be a recording bystander prevails. Take the case of a 2013 fire in Pincourt, Canada, where observers recorded the house burning to the ground from all angles—but nobody called the fire station.

To prevent a culture of disembodied bystanders, we must learn to better assess the appropriate actions when we’re in a situation that demands immediate attention. In doing so, we hopefully transcend the idea that recording an event is a replacement for action.

Sam Gregory is a program director at WITNESS, a global organization that incorporates video technology into human-rights advocacy. The goal of Gregory’s primary project, Mobil-Eyes-Us, is to find ways to translate “co-presence” into action. “In these types of events, people do freeze,” Gregory says. “The goal is to get over the freeze reaction.”

Filming events doesn’t relinquish our moral responsibility to intervene, but Gregory believes it’s “a step up from the Kitty Genovese incident,” which was an infamous 1964 stabbing in Queens, New York that 38 neighbors observed over a half hour, but none of them called the police or stepped in to intervene. If those 38 people lived in an age of smartphones, you can safely bet what a large portion of them would be doing.

Gregory says the idea of his project is to develop “witnessing literacy”: a repertoire of actions people can take in order to prevent unethical witnessing. To that end, the WITNESS website has abundant resources and guides, from teaching observers how to capture and preserve video as evidence to how to protect your identity on YouTube. The organization has also produced a mini-guide to capturing ethical footage and a video showing how to share the United Airlines video in a way that would protect the victim, David Dao.

That said, documenting an event is a viable contribution to a volatile situation only if the footage is then used in an ethical manner; it’s not the recording that matters, it’s what you do with it. For example, a video of an assault on your phone helps no one if it is never formally submitted to the police or uploaded to the internet in an effective, ethical way. And given all that effort, wouldn’t it have been better to try to pipe up in the moment? (If all else fails, you might also try to sing, which is what one brave woman did to fend off a man harassing a woman on public transport.)

Viral videos that incite outrage and prod at our sense of justice demonstrate both the difficulty and the necessity of acting in accordance with our values. We argue so much online about the actions of people whom we do not know and will never meet, and this takes time away from examining our own actions and preparing ourselves to act better in similar situations. As we thank the one woman on the plane who dared to speak up on the United flight, we should consider what else productive protest looks like, so that each of us has a repertoire of actions to counter violence.

For now, those of us who wish to believe in a world where people look out for each other will have to take it upon ourselves to lead by example. We should learn how to translate our digital frustrations into analog action.

Microsoft Turned Down Facial-Recognition Sales over “Human Rights Concerns”

Microsoft recently rejected a California law-enforcement agency’s request to install facial-recognition technology in officers’ cars and body cameras, citing human rights concerns, company President Brad Smith said on Tuesday. Microsoft concluded the deployment would lead to innocent women and minorities being disproportionately held for questioning, because the artificial intelligence had been trained mostly on pictures of white men. Multiple research projects have found that such systems misidentify women and minorities more often.

Smith explained the decisions as part of a commitment to human rights that he said was increasingly critical as rapid technological advances empower governments to conduct blanket surveillance, deploy autonomous weapons and take other steps that might prove impossible to reverse. Smith also said at a Stanford University conference that Microsoft had declined a deal to install facial recognition on cameras blanketing the capital city of an unnamed country that the nonprofit Freedom House had deemed not free. Smith said it would have suppressed freedom of assembly there.

On the other hand, Microsoft did agree to provide the technology to an American prison, after the company concluded that the environment would be limited and that it would improve safety inside the unnamed institution.

Chinese Companies Using GPS-Tracking Smartwatches to Monitor, Alert Street Cleaners

Street cleaners in parts of China are reportedly being forced to wear GPS-tracking smartwatches so employers can monitor how hard they work, sparking public outrage and concern over increasing mass surveillance across the country.

If the smartwatch detects a worker standing still for over 20 minutes, it sounds an alarm. “Add oil, add oil [work harder, work harder]!” the device’s alarm says, several cleaners from the eastern city of Nanjing told Jiangsu Television earlier this month.

The smartwatch not only tracks the cleaners’ locations but also reports their activity back to the company’s control room, where a big screen displays their locations as a cluster of red dots on a map.

“It knows everything,” an anonymous cleaner told a reporter in the Jiangsu Television report. “Supervisors will come if we don’t move after hearing the alarm.”

Following the backlash, the company said it removed the alarm function from the smartwatch, but reports indicate the employees are still required to wear the devices so their locations can be tracked.

The Chinese Government is already in the process of building a Social Credit System aimed at monitoring the behaviour of its 1.4 billion citizens with the help of an extensive network of CCTV cameras and facial recognition technology.

Maya Wang, a senior researcher on China at Human Rights Watch, said the Government’s use of surveillance technology was sending private companies a message that it was “okay to [monitor] people”.