Archives 31 October 2018

Pentagon Wants to Predict Anti-Trump Protests Using Social Media Surveillance

A series of research projects, patent filings, and policy changes indicate that the Pentagon wants to use social media surveillance to quell domestic insurrection and rebellion.

The social media posts of American citizens who don’t like President Donald Trump are the focus of the latest US military-funded research. The research, funded by the US Army and co-authored by a researcher based at the West Point Military Academy, is part of a wider effort by the Trump administration to consolidate the US military’s role in, and influence on, domestic intelligence.

The vast scale of this effort is reflected in a number of government social media surveillance patents granted this year, which relate to a spy program that the Trump administration outsourced to a private company last year. Experts interviewed by Motherboard say that the Pentagon’s new technology research may have played a role in amendments this April to the Joint Chiefs of Staff homeland defense doctrine, which widen the Pentagon’s role in providing intelligence for domestic “emergencies,” including an “insurrection.”

It’s no secret that the Pentagon has funded Big Data research into how social media surveillance can help predict large-scale population behaviours, specifically the outbreak of conflict, terrorism, and civil unrest.

Much of this research focuses on foreign theatres like the Middle East and North Africa — where the 2011 Arab Spring kicked off an arc of protest that swept across the region and toppled governments.

Since then, the Pentagon has spent millions of dollars finding patterns in posts across platforms like Facebook, Twitter, Instagram, Tumblr, and beyond to enable the prediction of major events.

But the Pentagon isn’t just interested in anticipating surprises abroad. The research also appears to be intended for use in the US homeland.

Datasets for the research were collected using the Apollo Social Sensing Tool, real-time event-tracking software that collects and analyses millions of social media posts.

The tool was originally developed under the Obama administration back in 2011 by the US Army Research Laboratory and US Defense Threat Reduction Agency, in partnership with Rensselaer Polytechnic Institute, the University of Illinois, IBM, and Caterva (a social marketing company that in 2013 was folded into a subsidiary of giant US government IT contractor, CSC). Past papers associated with the project show that the tool has been largely tested in foreign theatres like Haiti, Egypt, and Syria.

But the use of the Apollo tool to focus on protests in the US homeland has occurred under the Trump administration. The ‘election’ dataset compiled using Apollo for the 2018 US Army-funded study comprises 2.5 million tweets sent between October 26, 2016, and December 20, 2016, that used the words “Trump,” “Clinton,” and “election.”

Tweets were geolocated to focus on “locations where protests occurred following the election” based on user profiles. Locations were then triangulated against protest data from “online news outlets across the country.”

The millions of tweets were used to make sense of the “frequencies of the protests in 39 cities” using 18 different ways of measuring the “size, structure and geography” of a network, along with two ways of measuring how that network leads a social group to become “mobilized,” or take action.

In short, this means that “the social network can be a predictor of mobilization, which in turn is a predictor of the protest.” This pivotal finding means that extensive real-time monitoring of American citizens’ social media activity can be used to predict future protests.
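The study’s actual models and its 18 structural measures are not reproduced here. Purely as a hedged illustration of the pipeline the article describes (every function name, measure, and threshold below is invented for the sketch), one could compute simple size and structure measures over a graph of tweet interactions and turn them into a toy “mobilization” score:

```python
# Illustrative sketch only: derive basic network-structure features from
# tweet interaction edges (e.g. mentions/retweets between users) and use a
# toy score as a stand-in for the study's mobilization measures.
from collections import defaultdict

def network_features(edges):
    """Compute a few simple size/structure measures of an undirected network."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    n = len(adj)                                  # number of users (nodes)
    m = len({frozenset(e) for e in edges})        # unique undirected edges
    density = 2 * m / (n * (n - 1)) if n > 1 else 0.0
    avg_degree = sum(len(nbrs) for nbrs in adj.values()) / n if n else 0.0
    return {"nodes": n, "edges": m, "density": density, "avg_degree": avg_degree}

def mobilization_score(features):
    # Toy proxy: denser, better-connected networks mobilize more easily.
    return features["density"] * features["avg_degree"]

def predict_protest(edges, threshold=0.5):
    """Flag a likely protest when the toy mobilization score crosses a threshold."""
    return mobilization_score(network_features(edges)) >= threshold

# A tightly knit cluster of interacting users scores high; scattered,
# disconnected pairs score low.
print(predict_protest([("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]))
print(predict_protest([("a", "b"), ("c", "d"), ("e", "f")]))
```

The real study used 18 structural measures and two mobilization measures over geolocated tweets; this sketch only shows the shape of the idea, that network structure feeds a mobilization estimate, which in turn feeds a protest prediction.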

What Children Want to Know About Computers

When visiting a series of eight primary school classrooms recently, I talked to children (aged 5–12) about how computers work. They drew pictures of what they thought was inside a computer, and then we discussed the drawings as a class. Many of the children knew the names of the components within a computer: a chip, memory, a disc, and they were often insistent that there should be a fan in there. They knew that there would be wires inside, and that it would need a battery to make it work. The child who created drawing 1 made a nice job of piecing together a possible design from what they knew about computers. Can you spot what is missing, though?

Drawing 1.

Drawing 2.

The artist of drawing 2 knows there is a chip inside (made by HP in this case!) and, to their credit, they know there is code too. Notice that the code is not physically located on the memory or the chip but somewhere in the wires. In general there was some puzzlement about how code related to the computer, as exemplified by the artist of drawing 3, who confessed: “I know a computer is full of code and all devices. I am not sure what it looked like so I just scribbled.”

Drawing 3. 

Often, the children spent a while thinking about what is outside the computer and how information might get inside. It was quite common to see pictures in which the artist had folded the page to show this distinction but it was often a mystery how pressing a key or touching the screen might make something happen in the computer. Children who had spent time tinkering with computers at home had an advantage here: “I broke my keyboard once and I saw what was inside. It would send a signal from key to computer to the monitor”.

What the pictures, and the subsequent classroom discussions, told me is that the children know the names of components within a computer, and possibly some isolated facts about them. None of the pictures showed accurately how the components work together to perform computation, although the children were ready and willing to reason about this with their classmates. Although some of the children had programmed in the visual programming language Scratch, none of them knew how the commands they wrote would be executed by the hardware inside a computer. One boy, who had been learning about variables in Scratch the previous day, wanted to know whether, if he looked inside his computer, he would really see apps with boxes full of variables in them. I love that question because it reveals the mysterious boundary between intangible, invisible information and the small lump of silicon which processes it.

To be clear, I am not criticizing the children, who were curious, interested and made perfectly reasonable inferences based on the facts they picked up in their everyday lives. But I think that computer science educators can do better here. Our discipline is built upon the remarkable fact that we can write instructions in a representation which makes sense to humans and then automatically translate them into an equivalent representation which can be followed by a machine dumbly switching electrical pulses on and off. Children are not going to be able to figure that out for themselves by dissecting old computers or by making the Scratch cat dance. We need to get better at explicitly explaining this in interesting ways.

Children are currently combining their everyday experiences of technology with facts that adults tell them, trying to make sense of how computers work. This can lead to some confusion, particularly if the adults in their lives are also unsure. One child thought, for example, that paying more money would make Wi-Fi stronger. Others were curious about how Wi-Fi works on a train, and whether you really need to stop using your phone on a plane. A student advised the class that if we needed to save space on our phones, then we should delete videos from YouTube. The children, like most Windows users, wanted to know why their computers “freeze”, speculating that it could be because the chip is asleep or that too many people are using the Wi-Fi. There was also a sense of wonderment and curiosity. A young boy was fascinated when he read about supercomputers and wanted to know more: do supercomputers have really big chips in them? A class of eleven-year-olds gravely debated whether people would be more or less clever if the computer had never been invented. These are the sorts of questions about computers which children want to explore. It’s our job as computer scientists, and as educators, to help them.

Suspicious packages spotlight vast mail surveillance system, United States

The U.S. Postal Service regularly photographs the front and back of every piece of U.S. mail, or about 150 billion parcels, envelopes, and postcards every year. Under a longstanding practice known as the “mail cover” program, law enforcement can, through the Postal Inspection Service (the U.S. Postal Service’s policing arm), obtain address information and images of the outsides of mail as part of an investigation without the need for a warrant.

As part of the mail cover program, mail is routinely digitally photographed during the sorting process and is even available in some areas for recipients to preview digitally. Apart from threats like bombs, the department says its main focus is on mail theft, fraud, and narcotics cases.

Because a mail cover involves reading only information on the outside of the envelope or package, courts have not ruled it a violation of the Fourth Amendment. But that hasn’t stilled concerns about privacy and abuse.

For decades, the relatively obscure program has come under criticism for its lack of protections, for allowing data to be shared in broader cases than postal regulations allow, and for operating largely outside of public view. Critics have also warned that extensive surveillance of someone’s mail, especially combined with other surveillance, could create privacy violations.

After an audit, the Postal Service inspector general determined in 2014 that the Inspection Service did not have “sufficient controls” in place to ensure that its employees followed the agency’s policies in handling national security mail covers.

“Insufficient controls over the mail covers program could hinder the Postal Inspection Service’s ability to conduct effective investigations, lead to public concerns over privacy of mail, and harm the Postal Service’s brand,” it warned in a report.

Mail covers drew fire in the 1970s as part of government surveillance of groups deemed subversive, after a 15-year-old girl was apparently put under surveillance for writing a letter to a radical group. She had reportedly actually intended to contact a similarly named organization as part of a homework assignment.

With 5G, you won’t just be watching video. It’ll be watching you, too

What happens when movies can direct themselves? Remember the last time you felt terrified during a horror movie? Take that moment, and all the suspense leading up to it, and imagine it individually calibrated for you. It’s a terror plot morphing in real time, adjusting the story to your level of attention to lull you into a comfort zone before unleashing a personally timed jumpscare.

Or maybe being scared witless isn’t your idea of fun. Think of a rom-com that stops itself from going off the rails when it sees you rolling your eyes. Or maybe it tweaks the eye color of that character finally finding true love so it’s closer to your own, a personalized subtlety to make the love-struck protagonist more relatable.

You can thank (or curse) 5G for that.

When most people think of 5G, they’re envisioning an ultra-fast, high-bandwidth connection that lets you download seasons of your favorite shows in minutes. But 5G’s possibilities go way beyond that, potentially reinventing how we watch video, and opening up a mess of privacy uncertainties.

“Right now you make a video much the same way you did for TV,” Dan Garraway, co-founder of interactive video company Wirewax, said in an interview this month. “The dramatic thing is when you turn video into a two-way conversation. Your audience is touching and interacting inside the experience and making things happen as a result.” The personalized horror flick or tailored rom-com? They would hinge on interactive video layers that use emotional analysis based on your phone’s front-facing camera to adjust what you’re watching in real time. You may think it’s far-fetched, but one of the key traits of 5G is an ultra-responsive connection with virtually no lag, meaning the network and systems would be fast enough to react to your physical responses.

Before you cast a skeptical eye at 5G, consider how the last explosion of mobile connectivity, from 3G to 4G LTE, changed how we consumed video. Being able to watch — and in YouTube’s case, upload — video on a mobile device reimagined how we watch TV and the types of programming that are big business. A decade ago, when Netflix was about two years into its transition to streaming from DVD mailings, its annual revenue was $1.4 billion. This year it’s on track for more than 10 times that ($15.806 billion).

5G’s mobility can bring video experiences to new locations. Spare gives the example straight out of Minority Report, of entering a Gap retail store and being greeted by name. But taken further, the store could develop a three-dimensional video concierge for your phone — a pseudo-hologram that helps you find what you’re looking for. With 5G’s ability to make virtual and augmented reality more accessible, you could get a snapshot of what an outfit might look like on you without having to try it on.

Where things get crazy — and creepy — is imagining how 5G enables video to react to your involuntary cues and all the data you unconsciously provide. A show could mimic the weather or time of day to more closely match the atmosphere in real life.

For all the eye-popping possibilities, 5G unleashes a tangle of privacy questions. 5G could leverage every piece of visual information a phone can see on cameras front and back in real time. This level of visual imagery collection could pave the way for video interaction to happen completely automatically.

It’s also a potential privacy nightmare. But the lure of billions of dollars has already encouraged companies to make privacy compromises.

And that may make it feel like your personalized horror show is already here.

Thousands Of Swedes Are Inserting Microchips Under Their Skin

In Sweden, a country rich with technological advancement, thousands have had microchips inserted into their hands.

The chips are designed to speed up users’ daily routines and make their lives more convenient — accessing their homes, offices and gyms is as easy as swiping their hands against digital readers.

More than 4,000 Swedes have adopted the technology, with one company, Biohax International, dominating the market. The chipping firm was started five years ago by Jowan Osterlund, a former professional body piercer.

Many early adopters come from Stockholm’s thriving startup scene. Erik Frisk, a 30-year-old Web developer and designer, says he was really curious about the technology as soon as he heard about it and decided to get his own chip in 2014.

Sweden’s largest train company has started allowing commuters to use chips instead of tickets, and there’s talk that the chips could soon be used to make payments in shops and restaurants.

Swedes are used to sharing personal information, with many online purchases and administrative bodies requiring their social security numbers. Mobile phone numbers are widely available in online search databases, and people can easily look up each other’s salaries by calling the tax authority.

Google Is Teaching Children How to Act Online. Is It the Best Role Model?

Google is on a mission to teach children how to be safe online. That is the message behind “Be Internet Awesome,” a so-called digital-citizenship education program that the technology giant developed for schools.

The lessons include a cartoon game branded with Google’s logo and blue, red, yellow and green color palette. The game is meant to help students from third grade through sixth guard against schemers, hackers and other bad actors.

But critics say the company’s recent woes — including revelations that it was developing a censored version of its search engine for the Chinese market and had tracked the whereabouts of users who had explicitly turned off their location history — should disqualify Google from promoting itself in schools as a model of proper digital conduct.

Among other things, these critics argue, the company’s lessons give children the mistaken impression that the main threat they face online is from malicious hackers and bullies, glossing over the privacy concerns that arise when tech giants like Google itself collect users’ personal information and track their actions online.

American corporate giants are no strangers to the country’s schools.

In the 1970s, General Motors circulated a free booklet in public schools that featured cartoon characters like Harry Hydrocarbon, who played down concerns about the health risks of industrial pollution and suggested that air pollution would soon not be a problem, according to a 1979 report, “Hucksters in the Classroom: A Review of Industry Propaganda in Schools.”

In the 1990s, Procter & Gamble promoted its own curriculum, “Decision: Earth,” in schools. Among other things, it instructed children that synthetic diapers were no more harmful for the environment than cloth diapers.

Around the same time, Campbell Soup sponsored a classroom kit called the “Prego Thickness Experiment.” According to a 1997 article in The New York Times, “Corporate Classrooms and Commercialism,” the kit was supposed to teach children the scientific method — by having them “prove” that Prego pasta sauce was thicker than rival Ragu.

Critics see a similar self-serving agenda with “Be Internet Awesome,” which presents malicious third parties as the primary online threat to children, while failing to teach them how to navigate corporate data-mining practices.

Silicon Valley’s dirty secret: Using a shadow workforce of contract employees to drive profits

As the gig economy grows, the ratio of contract workers to regular employees in corporate America is shifting. Google, Facebook, Amazon, Uber and other Silicon Valley tech titans now employ thousands of contract workers to do a host of functions — anything from sales and writing code to managing teams and testing products. This year at Google, contract workers outnumbered direct employees for the first time in the company’s 20-year history.

It’s not only in Silicon Valley. The trend is on the rise as public companies look for ways to trim HR costs or hire in-demand skills in a tight labor market. The U.S. jobless rate dropped to 3.7 percent in September, the lowest since 1969, down from 3.9 percent in August, according to the Bureau of Labor Statistics.

Some 57.3 million Americans, or 36 percent of the workforce, are now freelancing, according to a 2017 report by Upwork. In San Mateo and Santa Clara counties alone, there are an estimated 39,000 workers who are contracted to tech companies, according to one estimate by University of California Santa Cruz researchers.

Spokespersons at Facebook and Alphabet declined to disclose the number of contract workers they employ. A spokesperson at Alphabet cited two main reasons for hiring contract or temporary workers. One reason is when the company doesn’t have or want to build out expertise in a particular area such as doctors, food service, customer support or shuttle bus drivers. Another reason is a need for temporary workers when there is a sudden spike in workload or to cover for an employee who is on leave.

Now Apps Can Track You Even After You Uninstall Them

If it seems as though the app you deleted last week is suddenly popping up everywhere, it may not be mere coincidence. Companies that cater to app makers have found ways to game both iOS and Android, enabling them to figure out which users have uninstalled a given piece of software lately—and making it easy to pelt the departed with ads aimed at winning them back.

Adjust, AppsFlyer, MoEngage, Localytics, and CleverTap are among the companies that offer uninstall trackers, usually as part of a broader set of developer tools. Their customers include T-Mobile US, Spotify Technology, and Yelp. (And Bloomberg Businessweek parent Bloomberg LP, which uses Localytics.) Critics say they’re a fresh reason to reassess online privacy rights and limit what companies can do with user data.

Uninstall tracking exploits a core element of Apple Inc.’s and Google’s mobile operating systems: push notifications. Developers have always been able to use so-called silent push notifications to ping installed apps at regular intervals without alerting the user — to refresh an inbox or social media feed while the app is running in the background, for example. But if the app doesn’t ping the developer back, it is logged as uninstalled. The uninstall-tracking tools then add that change to the file associated with the given mobile device’s unique advertising ID, details that make it easy to identify just who’s holding the phone and to advertise the app to them wherever they go.
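As a rough illustration of the mechanism described above (this is not any vendor’s actual code; the class, method names, and timing are all invented), a tracking server could record silent-push acknowledgements per advertising ID and treat prolonged silence as an uninstall, feeding the ID into a “win-back” ad audience:

```python
# Hedged sketch of uninstall detection via silent pushes. A real system
# would send silent notifications through APNs/FCM; here the push is a stub
# and we only model the server-side bookkeeping.
import time

class UninstallTracker:
    def __init__(self):
        self.last_ack = {}            # advertising_id -> timestamp of last ack
        self.winback_audience = set() # IDs presumed to have uninstalled

    def send_silent_push(self, advertising_id):
        # Stub: in production this would be a silent push to the device,
        # which an installed app answers by calling record_ack().
        pass

    def record_ack(self, advertising_id, now=None):
        """Called when a device's app responds to a silent push."""
        self.last_ack[advertising_id] = time.time() if now is None else now

    def sweep(self, max_silence, now=None):
        """Mark devices silent for longer than max_silence as uninstalled."""
        now = time.time() if now is None else now
        for ad_id, last_seen in self.last_ack.items():
            if now - last_seen > max_silence:
                self.winback_audience.add(ad_id)
        return self.winback_audience

# Usage: device "a" went quiet for over a day, device "b" answered recently.
tracker = UninstallTracker()
tracker.record_ack("a", now=0)
tracker.record_ack("b", now=100_000)
print(tracker.sweep(max_silence=86_400, now=100_001))  # only "a" is flagged
```

The same bookkeeping also supports the benign use the article mentions, measuring aggregate uninstall rates to find bugs; the privacy concern arises when the per-device advertising ID is then used to retarget individual ex-users with ads.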

At its best, uninstall tracking can be used to fix bugs or otherwise refine apps without having to bother users with surveys or more intrusive tools. But the ability to abuse the system beyond its original intent exemplifies the bind that accompanies the modern internet, says Gillula. To participate, users must typically agree to share their data freely, probably forever, not knowing exactly how it may be used down the road. “As an app developer, I would expect to be able to know how many people have uninstalled an app,” he says. “I would not say that, as an app developer, you have a right to know exactly who installed and uninstalled your app.”