Archives 2018

Facebook Filed A Patent To Predict Your Household’s Demographics Based On Family Photos

Facebook has submitted a patent application for technology that would predict who your family and other household members are, based on images and captions posted to Facebook, as well as your device information, like shared IP addresses. The application, titled “Predicting household demographics based on image data,” was originally filed May 10, 2017, and made public today.

The system Facebook proposes in its patent application would use facial recognition and learning models trained to understand text to help Facebook better understand whom you live with and interact with most. The technology described in the patent looks for clues in your profile pictures on Facebook and Instagram, as well as photos of you that you or your friends post.

It would note the people identified in a photo, and how frequently the people are included in your pictures. Then, it would assess information from comments on the photos, captions, or tags (#family, #mom, #kids) — anything that indicates whether someone is a husband, daughter, cousin, etc. — to predict what your family/household actually looks like. According to the patent application, Facebook’s prediction models would also analyze “messaging history, past tagging history, [and] web browsing history” to see if multiple people share IP addresses (a unique identifier for every internet network).
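The mechanics the patent describes, counting who co-appears in a user's photos and mining captions or tags for kinship terms, can be illustrated with a toy sketch. This is an assumption-laden illustration, not Facebook's actual model; every name, tag set, and scoring rule below is hypothetical:

```python
from collections import Counter

# Hypothetical kinship cues a model of this kind might look for in tags/captions.
KINSHIP_TAGS = {"#family", "#mom", "#dad", "#kids", "#sister", "#brother"}

def household_candidates(photos, min_appearances=3):
    """Rank people by how often they co-appear in a user's photos,
    boosting anyone whose photos carry kinship-style tags."""
    appearances = Counter()
    kinship_hits = Counter()
    for photo in photos:
        has_kin_tag = bool(KINSHIP_TAGS & set(photo["tags"]))
        for person in photo["people"]:
            appearances[person] += 1
            if has_kin_tag:
                kinship_hits[person] += 1
    # Toy score: raw frequency plus a bonus for kinship-tagged photos.
    scores = {p: appearances[p] + 2 * kinship_hits[p] for p in appearances}
    return [p for p, _ in sorted(scores.items(), key=lambda kv: -kv[1])
            if appearances[p] >= min_appearances]

photos = [
    {"people": ["alice"], "tags": ["#family", "#mom"]},
    {"people": ["alice", "bob"], "tags": ["#weekend"]},
    {"people": ["alice"], "tags": ["#kids"]},
    {"people": ["carol"], "tags": ["#conference"]},
]
print(household_candidates(photos, min_appearances=2))  # ['alice']
```

The real system would layer facial recognition, messaging history, and shared IP addresses on top of this kind of co-occurrence signal; the sketch only shows the counting step.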

Dutch Government Report Says Microsoft Office Telemetry Collection Breaks EU GDPR Laws

Microsoft broke Euro privacy rules by carrying out the “large scale and covert” gathering of private data through its Office apps, according to a report commissioned by the Dutch government.

It was found that Microsoft was collecting telemetry and other content from its Office applications, including email titles and sentences where translation or spellchecker was used, and secretly storing the data on systems in the United States.

Those actions break Europe’s new GDPR privacy safeguards, it is claimed, and may put Microsoft on the hook for potentially tens of millions of dollars in fines. The Dutch authorities are working with the corporation to fix the situation, and are using the threat of a fine as a stick to make it happen.

The investigation was prompted by the fact that Microsoft doesn't publicly reveal what information it gathers on users, and provides no option to turn off the diagnostic and telemetry data its Office software sends back to the company to monitor how well the software is functioning and to identify issues.

Algorithms viewed as ‘unfair’ by consumers

The US-based Pew Research Center has found the American public is growing increasingly distrustful of the use of computer algorithms in a variety of sectors, including finance, media and the justice system.

A report released over the weekend found that a broad section of those surveyed feel that computer programs will always reflect some level of human bias, that they might violate privacy, fail to capture the nuance of human complexity or simply be unfair.

Scope creep with Australia metadata retention

Telecommunications industry group Communications Alliance has revealed details of dozens of state and federal departments and agencies it claims are accessing so-called communications ‘metadata’.

The 2015 legislation that introduced the data retention regime authorised a list of “criminal law-enforcement agencies” to obtain warrant-free access to metadata. Those agencies included federal, state and territory police agencies, a number of anti-corruption bodies, Border Force, the Australian Securities and Investments Commission, and the Australian Competition and Consumer Commission.

However, last month at the hearing of an inquiry into the government’s bill aimed at enhancing police access to encrypted communications services, Communications Alliance CEO John Stanton said that a significantly larger number of organisations were accessing information kept by telcos to meet their data retention obligations.

In addition to police agencies and other organisations listed in the data retention legislation, the list includes Centrelink, the Australian Taxation Office, Australia Post’s Corporate Security Group, Workplace Health and Safety, Work Safe Victoria, the Taxi Services Commission and a number of local councils.

Australia’s near-real-time facial recognition system, chilling effects

Civil rights groups have warned a vast, powerful system allowing the near real-time matching of citizens’ facial images risks a “profound chilling effect” on protest and dissent.

The technology – known in shorthand as “the capability” – collects and pools facial imagery from various state and federal government sources, including driver’s licences, passports and visas.

The biometric information can then rapidly – almost in real time – be compared with other sources, such as CCTV footage, to match identities.

The system, chiefly controlled by the federal Department of Home Affairs, is designed to give intelligence and security agencies a powerful tool to deter identity crime, and quickly identify terror and crime suspects.

But it has prompted serious concern among academics, human rights groups and privacy experts. The system sweeps up and processes citizens’ sensitive biometric information regardless of whether they have committed or are suspected of an offence.

Chinese ‘Gait Recognition’ Tech IDs People By How They Walk; Police Have Started Using It on Streets of Beijing and Shanghai

Already used by police on the streets of Beijing and Shanghai, “gait recognition” is part of a push across China to develop artificial-intelligence and data-driven surveillance that is raising concern about how far the technology will go. Huang Yongzhen, the CEO of Watrix, said that its system can identify people from up to 50 meters (165 feet) away, even with their back turned or face covered. This can fill a gap in facial recognition, which needs close-up, high-resolution images of a person’s face to work. “You don’t need people’s cooperation for us to be able to recognize their identity,” Huang said in an interview in his Beijing office. “Gait analysis can’t be fooled by simply limping, walking with splayed feet or hunching over, because we’re analyzing all the features of an entire body.”

Blockchain-based elections would be a disaster for democracy

If you talk to experts on election security they’ll tell you that we’re nowhere close to being ready for online voting. “Mobile voting is a horrific idea,” said election security expert Joe Hall when I asked him about a West Virginia experiment with blockchain-based mobile voting back in August.

But on Tuesday, The New York Times published an opinion piece claiming the opposite.

“Building a workable, scalable, and inclusive online voting system is now possible, thanks to blockchain technologies,” writes Alex Tapscott, whom the Times describes as co-founder of the Blockchain Research Institute.

Tapscott is wrong—and dangerously so. Online voting would be a huge threat to the integrity of our elections—and to public faith in election outcomes.

Tapscott focuses on the idea that blockchain technology would allow people to vote anonymously while still being able to verify that their vote was included in the final total. Even assuming this is mathematically possible—and I think it probably is—this idea ignores the many, many ways that foreign governments could compromise an online vote without breaking the core cryptographic algorithms.

For example, foreign governments could hack into the computer systems that governments use to generate and distribute cryptographic credentials to voters. They could bribe election officials to supply them with copies of voters’ credentials. They could hack into the PCs or smartphones voters use to cast their votes. They could send voters phishing emails to trick them into revealing their voting credentials—or simply trick them into thinking they’ve cast a vote when they haven’t.

Energy cost of ‘mining’ bitcoin more than twice that of copper or gold

The amount of energy required to “mine” one dollar’s worth of bitcoin is more than twice that required to mine the same value of copper, gold or platinum, according to a new paper, suggesting that the virtual work that underpins bitcoin, ethereum and similar projects is more similar to real mining than anyone intended.

One dollar’s worth of bitcoin takes about 17 megajoules of energy to mine, according to researchers from the Oak Ridge Institute in Cincinnati, Ohio, compared with four, five and seven megajoules respectively for copper, gold and platinum.

Other cryptocurrencies also fare poorly in comparison, the researchers write in the journal Nature Sustainability, ascribing a cost-per-dollar of 7MJ for ethereum and 14MJ for the privacy-focused cryptocurrency monero. But all the cryptocurrencies examined come off well compared with aluminium, which takes an astonishing 122MJ to mine one dollar’s worth of ore.
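Taking the per-dollar figures quoted above at face value, the headline claim (bitcoin costs more than twice as much energy per dollar as copper, gold or platinum) is a one-line calculation. A back-of-envelope check using the reported numbers, not the paper's methodology:

```python
# Energy cost (megajoules) to mine one dollar's worth of each asset,
# as quoted from the Nature Sustainability paper.
mj_per_dollar = {
    "bitcoin": 17, "ethereum": 7, "monero": 14,
    "copper": 4, "gold": 5, "platinum": 7, "aluminium": 122,
}

# How many times more energy bitcoin needs per dollar than each metal.
ratios = {metal: mj_per_dollar["bitcoin"] / mj_per_dollar[metal]
          for metal in ("copper", "gold", "platinum")}
print(ratios)  # {'copper': 4.25, 'gold': 3.4, 'platinum': 2.428...}
```

Each ratio exceeds 2, consistent with the "more than twice" claim; aluminium, at 122 MJ, reverses the comparison entirely.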

Facebook Allowed Advertisers to Target Users Interested in “White Genocide”—Even in Wake of Pittsburgh Massacre

Apparently fueled by anti-Semitism and the bogus narrative that outside forces are scheming to exterminate the white race, Robert Bowers murdered 11 Jewish congregants as they gathered inside their Pittsburgh synagogue, federal prosecutors allege. But despite long-running international efforts to debunk the idea of a “white genocide,” Facebook was still selling advertisers the ability to market to those with an interest in that myth just days after the bloodshed.

A simple search of Facebook pages also makes plain that there are tens of thousands of users with a very earnest interest in “white genocide,” shown through the long list of groups with names like “Stop White South African Genocide,” “White Genocide Watch,” and “The last days of the white man.” Images with captions like “Don’t Be A Race Traitor” and “STOP WHITE GENOCIDE IN SOUTH AFRICA” are freely shared in such groups, providing a natural target for anyone who might want to pay to promote deliberately divisive and incendiary hate-based content.

Only 22% of Americans Now Trust Facebook’s Handling of Personal Info

Facebook is the least trustworthy of all major tech companies when it comes to safeguarding user data, according to a new national poll conducted for Fortune, highlighting the major challenges the company faces following a series of recent privacy blunders. Only 22% of Americans said that they trust Facebook with their personal information, far less than Amazon (49%), Google (41%), Microsoft (40%), and Apple (39%).

In question after question, respondents ranked the company last in terms of leadership, ethics, trust, and image… Public mistrust extended to Zuckerberg, Facebook’s public face during its privacy crisis, who once said that Facebook has “a responsibility to protect your information. If we can’t, we don’t deserve it.” The company subsequently fell victim to a hack but continued operating as usual, including debuting a video-conferencing device intended to be used in people’s living rooms or kitchens, further extending Facebook’s reach beyond personal computers and smartphones. Only 59% of respondents said they were “at least somewhat confident” in Zuckerberg’s leadership in the ethical use of data and privacy information, ranking him last behind four other tech CEOs…

As for Facebook, the social networking giant may have a difficult time regaining public trust because of its repeated problems. Consumers are more likely to forgive a company if they believe a problem was an aberration rather than a systemic failure by its leadership, Harris Poll CEO John Gerzema said.

The article concludes that “For now, the public isn’t in a forgiving mood when it comes to Facebook and Zuckerberg.”

What Your Phone is Telling Wall Street

Your phone knows where you shop, where you work and where you sleep. Hedge funds are very interested in such data, so they are buying it.

When Tesla Chief Executive Elon Musk said the car maker would work around the clock to boost production of its Model 3 sedan, the number crunchers at Thasos Group decided to watch. They circled Tesla’s 370 acres in Fremont, Calif., on an online map, creating a digital corral to isolate smartphone location signals that emanated from within it. Thasos, which leases databases of trillions of geographic coordinates collected by smartphone apps, set its computers to find the pings created at Tesla’s factory, then shared the data with its hedge-fund clients, showing the overnight shift swelled 30% from June to October.

Last month, many on Wall Street were surprised when Tesla disclosed a rare quarterly profit, the result of Model 3 production that had nearly doubled in three months. Shares shot up 9.1% the next day. Thasos is at the vanguard of companies trying to help traders get ahead of stock moves like that using so-called alternative data. Such suppliers might examine mine slag heaps from outer space, analyze credit-card spending data or sort through construction permits. Thasos’s specialty is the data spewing out of your smartphone.

Thasos gets data from about 1,000 apps, many of which need to know a phone’s location to be effective, like those providing weather forecasts, driving directions or the whereabouts of the nearest ATM. Smartphone users, wittingly or not, share their location when they use such apps. Before Thasos gets the data, suppliers scrub it of personally identifiable information, Mr. Skibiski said. It is just time-stamped strings of longitude and latitude. But with more than 100 million phones providing such coordinates, Thasos says it can paint detailed pictures of the ebb and flow of people, and thus their money.
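The "digital corral" described above reduces to a geofencing test: draw a boundary around a site, then count the time-stamped pings that fall inside it. A minimal sketch, using a simple bounding box rather than an arbitrary polygon; the coordinates and pings below are invented, not Tesla's actual footprint or Thasos's data:

```python
def in_bounding_box(lat, lon, box):
    """Crude geofence: is a coordinate inside a lat/lon bounding box?
    Production systems use arbitrary polygons, but the idea is the same."""
    (lat_min, lat_max), (lon_min, lon_max) = box
    return lat_min <= lat <= lat_max and lon_min <= lon <= lon_max

# Hypothetical fence drawn roughly around an industrial site.
FACTORY_BOX = ((37.490, 37.500), (-121.950, -121.940))

# Anonymized, time-stamped pings: (hour_of_day, latitude, longitude).
pings = [
    (2, 37.4951, -121.9452),   # overnight, inside the fence
    (3, 37.4930, -121.9480),   # overnight, inside
    (14, 37.4960, -121.9445),  # day shift, inside
    (2, 37.8044, -122.2712),   # overnight, but miles away
]

# Count overnight-shift pings inside the fence (hours before 6am).
overnight = sum(1 for h, lat, lon in pings
                if h < 6 and in_bounding_box(lat, lon, FACTORY_BOX))
print(overnight)  # 2
```

Comparing such counts month over month is what lets a data vendor claim, for instance, that an overnight shift "swelled 30%", without ever knowing who any individual ping belongs to.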

When Tech Knows You Better Than You Know Yourself

Algorithms are kind of running where 2 billion people spend their time. Seventy percent of what people watch on YouTube is driven by recommendations from the algorithm. People think that what you’re watching on YouTube is a choice. People are sitting there, they sit there, they think, and then they choose. But that’s not true. Seventy percent of what people are watching is the recommended videos on the right hand side, which means 70 percent of 1.9 billion users — that’s more than the number of followers of Islam, about the number of followers of Christianity — of what they’re looking at on YouTube for 60 minutes a day, that’s the average time people spend on YouTube. So you got 60 minutes, and 70 percent is populated by a computer. The machine is out of control.

iGen

Jean Twenge (2018)

“Born in the mid-1990s up to the mid-2000s, iGen is the first generation to spend their entire adolescence in the age of the smartphone. With social media and texting replacing other activities, iGen spends less time with their friends in person, perhaps contributing to their unprecedented levels of anxiety, depression, and loneliness.

But technology is not the only thing that makes iGen distinct from every generation before them; they are also different in how they spend their time, how they behave, and in their attitudes toward religion, sexuality, and politics. They socialise in completely new ways, reject once sacred social taboos, and want different things from their lives and careers. More than previous generations, they are obsessed with safety, focused on tolerance, and have no patience for inequality.”

Pentagon Wants to Predict Anti-Trump Protests Using Social Media Surveillance

A series of research projects, patent filings, and policy changes indicate that the Pentagon wants to use social media surveillance to quell domestic insurrection and rebellion.

The social media posts of American citizens who don’t like President Donald Trump are the focus of the latest US military-funded research. The research, funded by the US Army and co-authored by a researcher based at the West Point Military Academy, is part of a wider effort by the Trump administration to consolidate the US military’s role and influence on domestic intelligence.

The vast scale of this effort is reflected in a number of government social media surveillance patents granted this year, which relate to a spy program that the Trump administration outsourced to a private company last year. Experts interviewed by Motherboard say that the Pentagon’s new technology research may have played a role in amendments this April to the Joint Chiefs of Staff homeland defense doctrine, which widen the Pentagon’s role in providing intelligence for domestic “emergencies,” including an “insurrection.”

It’s no secret that the Pentagon has funded Big Data research into how social media surveillance can help predict large-scale population behaviours, specifically the outbreak of conflict, terrorism, and civil unrest.

Much of this research focuses on foreign theatres like the Middle East and North Africa — where the 2011 Arab Spring kicked off an arc of protest that swept across the region and toppled governments.

Since then, the Pentagon has spent millions of dollars finding patterns in posts across platforms like Facebook, Twitter, Instagram, Tumblr, and beyond to enable the prediction of major events.

But the Pentagon isn’t just interested in anticipating surprises abroad. The research also appears to be intended for use in the US homeland.

Datasets for the research were collected using the Apollo Social Sensing Tool, a real-time event tracking software that collects and analyses millions of social media posts.

The tool was originally developed under the Obama administration back in 2011 by the US Army Research Laboratory and US Defense Threat Reduction Agency, in partnership with Rensselaer Polytechnic Institute, the University of Illinois, IBM, and Caterva (a social marketing company that in 2013 was folded into a subsidiary of giant US government IT contractor, CSC). Past papers associated with the project show that the tool has been largely tested in foreign theatres like Haiti, Egypt, and Syria.

But the use of the Apollo tool to focus on protests in the US homeland has occurred under the Trump administration. The ‘election’ dataset compiled using Apollo for the 2018 US Army-funded study comprises 2.5 million tweets sent between October 26, 2016, and December 20, 2016, containing the words “Trump”, “Clinton,” and “election.”

Tweets were geolocated to focus on “locations where protests occurred following the election” based on user profiles. Locations were then triangulated against protest data from “online news outlets across the country.”

The millions of tweets were used to make sense of the “frequencies of the protests in 39 cities” using 18 different ways of measuring the “size, structure and geography” of a network, along with two ways of measuring how that network leads a social group to become “mobilized,” or take action.

In short, this means that “the social network can be a predictor of mobilization, which in turn is a predictor of the protest.” This pivotal finding means that extensive real-time monitoring of American citizens’ social media activity can be used to predict future protests.
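The dataset-construction step described above (keep only tweets in the study window that mention the chosen keywords, then attribute them to protest cities) can be sketched as a simple filter. The tweet records and field names here are hypothetical, not the study's actual data format:

```python
from datetime import date

# Filter parameters from the study as described: keywords and date window.
KEYWORDS = {"trump", "clinton", "election"}
WINDOW = (date(2016, 10, 26), date(2016, 12, 20))

def matches(tweet):
    """Keep a tweet if it falls inside the study window and mentions a keyword."""
    in_window = WINDOW[0] <= tweet["date"] <= WINDOW[1]
    has_keyword = any(k in tweet["text"].lower() for k in KEYWORDS)
    return in_window and has_keyword

tweets = [
    {"date": date(2016, 11, 9), "text": "Election protest downtown", "city": "Portland"},
    {"date": date(2016, 11, 9), "text": "Great game last night", "city": "Portland"},
    {"date": date(2017, 1, 21), "text": "Trump inauguration crowds", "city": "DC"},
]
kept = [t for t in tweets if matches(t)]
print([t["city"] for t in kept])  # ['Portland']
```

The study's harder step, computing 18 network-structure measures over the filtered tweets and relating them to mobilization, sits on top of exactly this kind of keyword-and-window selection.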

What Children Want to Know About Computers

When visiting a series of eight primary school classrooms recently, I talked to children (aged 5 to 12) about how computers work. They drew pictures of what they thought is inside a computer, and then we discussed the drawings as a class. Many of the children knew the names of the components within a computer: a chip, memory, a disc, and they were often insistent that there should be a fan in there. They knew that there would be wires inside, and that it would need a battery to make it work. The child who created drawing 1 has made a nice job of piecing together a possible design from what they knew about computers – can you spot what is missing though?

Drawing 1.

Drawing 2.

The artist of drawing 2 knows there is a chip inside (made by HP in this case!) and to their credit they know there is code too. Notice that the code is not physically located on the memory or the chip but somewhere in the wires. In general there was some puzzlement about how code related to the computer, as exemplified by the artist of drawing 3 who confessed “I know a computer is full of code and all devices. I am not sure what it looked like so I just scribbled.”

Drawing 3. 

Often, the children spent a while thinking about what is outside the computer and how information might get inside. It was quite common to see pictures in which the artist had folded the page to show this distinction but it was often a mystery how pressing a key or touching the screen might make something happen in the computer. Children who had spent time tinkering with computers at home had an advantage here: “I broke my keyboard once and I saw what was inside. It would send a signal from key to computer to the monitor”.

What the pictures, and subsequent classroom discussions told me is that the children know names of components within a computer, and possibly some isolated facts about them. None of the pictures showed accurately how the components work together to perform computation, although the children were ready and willing to reason about this with their classmates. Although some of the children had programmed in the visual programming language Scratch, none of them knew how the commands they wrote in Scratch would be executed in the hardware inside a computer. One boy, who had been learning about variables in Scratch the previous day wanted to know whether if he looked in his computer he would really see apps with boxes full of variables in them. I love that question because it reveals the mysterious boundary between intangible, invisible information and the small lump of silicon which processes it.

To be clear, I am not criticizing the children, who were curious, interested and made perfectly reasonable inferences based on the facts they picked up in their everyday lives. But I think that computer science educators can do better here. Our discipline is built upon the remarkable fact that we can write instructions in a representation which makes sense to humans and then automatically translate them into an equivalent representation which can be followed by a machine dumbly switching electrical pulses on and off. Children are not going to be able to figure that out for themselves by dissecting old computers or by making the Scratch cat dance. We need to get better at explicitly explaining this in interesting ways.

Children are currently piecing together their everyday experiences with technology with facts that adults tell them to try to make sense of how computers work. This can lead to some confusion, particularly if the adults in their lives are also unsure. One child thought, for example, that if you paid more money, then it would make Wi-Fi stronger. Others were curious about how Wi-Fi works on a train, and whether you really need to stop using your phone on a plane. A student advised the class that if we needed to save space on our phones, then we should delete videos from YouTube. The children, like most Windows users, wanted to know why their computers “freeze”, speculating that it could be because the chip is asleep or that too many people are using Wi-Fi. There was also a sense of wonderment and curiosity. A young boy was fascinated when he read about super computers and wanted to know more: do super computers have really big chips in them? A class of eleven-year-olds gravely debated whether people would be more or less clever if the computer had never been invented. These are the sorts of questions about computers which children want to explore. It’s our job as computer scientists, and as educators, to help them.

Suspicious packages spotlight vast mail surveillance system, United States

The U.S. Postal Service regularly photographs the front and back of every piece of U.S. mail, or about 150 billion parcels, envelopes, and postcards every year. A longstanding practice known as the “mail cover” program enables law enforcement to obtain address information and images of the outsides of mail as part of an investigation without the need for a warrant through the Postal Inspection Service, the U.S. Postal Service’s policing arm.

As part of the mail cover program, mail is routinely digitally photographed as part of the sorting process and even available for recipients to digitally preview in some areas. Apart from threats like bombs, the department says its main focus is on mail theft, fraud, and narcotics cases.

Because a mail cover involves reading only information on the outside of the envelope or package, courts have not ruled it a violation of the Fourth Amendment. But that hasn’t stilled concerns about privacy and abuse.

For decades, the relatively obscure program has come under criticism for its lack of protections, for allowing data to be shared in broader cases than postal regulations allow, and for operating largely outside of public view. Critics have also warned that extensive surveillance of someone’s mail, especially combined with other surveillance, could create privacy violations.

After an audit, the Postal Service inspector general determined in 2014 that the Inspection Service did not have “sufficient controls” in place to ensure that its employees followed the agency’s policies in handling national security mail covers.

“Insufficient controls over the mail covers program could hinder the Postal Inspection Service’s ability to conduct effective investigations, lead to public concerns over privacy of mail, and harm the Postal Service’s brand,” it warned in a report.

Mail covers drew fire in the 1970s as part of government surveillance of groups deemed subversive, after a 15-year-old girl was apparently put under surveillance for writing a letter to a radical group. She had reportedly actually intended to contact a similarly named organization as part of a homework assignment.

With 5G, you won’t just be watching video. It’ll be watching you, too

What happens when movies can direct themselves? Remember the last time you felt terrified during a horror movie? Take that moment, and all the suspense leading up to it, and imagine it individually calibrated for you. It’s a terror plot morphing in real time, adjusting the story to your level of attention to lull you into a comfort zone before unleashing a personally timed jumpscare.

Or maybe being scared witless isn’t your idea of fun. Think of a rom-com that stops itself from going off the rails when it sees you rolling your eyes. Or maybe it tweaks the eye color of that character finally finding true love so it’s closer to your own, a personalized subtlety to make the love-struck protagonist more relatable.

You can thank (or curse) 5G for that.

When most people think of 5G, they’re envisioning an ultra-fast, high-bandwidth connection that lets you download seasons of your favorite shows in minutes. But 5G’s possibilities go way beyond that, potentially reinventing how we watch video, and opening up a mess of privacy uncertainties.

“Right now you make a video much the same way you did for TV,” Dan Garraway, co-founder of interactive video company Wirewax, said in an interview this month. “The dramatic thing is when you turn video into a two-way conversation. Your audience is touching and interacting inside the experience and making things happen as a result.” The personalized horror flick or tailored rom-com? They would hinge on interactive video layers that use emotional analysis based on your phone’s front-facing camera to adjust what you’re watching in real time. You may think it’s far-fetched, but one of the key traits of 5G is an ultra-responsive connection with virtually no lag, meaning the network and systems would be fast enough to react to your physical responses.

Before you cast a skeptical eye at 5G, consider how the last explosion of mobile connectivity, from 3G to 4G LTE, changed how we consumed video. Being able to watch — and in YouTube’s case, upload — video on a mobile device reimagined how we watch TV and the types of programming that are big business. A decade ago, when Netflix was about two years into its transition to streaming from DVD mailings, its annual revenue was $1.4 billion. This year it’s on track for more than 10 times that ($15.806 billion).

5G’s mobility can bring video experiences to new locations. Spare gives the example straight out of Minority Report, of entering a Gap retail store and being greeted by name. But taken further, the store could develop a three-dimensional video concierge for your phone — a pseudo-hologram that helps you find what you’re looking for. With 5G’s ability to make virtual and augmented reality more accessible, you could get a snapshot of what an outfit might look like on you without having to try it on.

Where things get crazy — and creepy — is imagining how 5G enables video to react to your involuntary cues and all the data you unconsciously provide. A show could mimic the weather or time of day to more closely match the atmosphere in real life.

For all the eye-popping possibilities, 5G unleashes a tangle of privacy questions. 5G could leverage every piece of visual information a phone can see on cameras front and back in real time. This level of visual imagery collection could pave the way for video interaction to happen completely automatically.

It’s also a potential privacy nightmare. But the lure of billions of dollars has already encouraged companies to make privacy compromises.

And that may make it feel like your personalized horror show is already here.

Thousands Of Swedes Are Inserting Microchips Under Their Skin

In Sweden, a country rich with technological advancement, thousands have had microchips inserted into their hands.

The chips are designed to speed up users’ daily routines and make their lives more convenient — accessing their homes, offices and gyms is as easy as swiping their hands against digital readers.

More than 4,000 Swedes have adopted the technology, with one company, Biohax International, dominating the market. The chipping firm was started five years ago by Jowan Osterlund, a former professional body piercer.

Many early adopters come from Stockholm’s thriving startup scene. Erik Frisk, a 30-year-old Web developer and designer, says he was really curious about the technology as soon as he heard about it and decided to get his own chip in 2014.

Sweden’s largest train company has started allowing commuters to use chips instead of tickets, and there’s talk that the chips could soon be used to make payments in shops and restaurants.

Swedes are used to sharing personal information, with many online purchases and administrative bodies requiring their social security numbers. Mobile phone numbers are widely available in online search databases, and people can easily look up each other’s salaries by calling the tax authority.

Google Is Teaching Children How to Act Online. Is It the Best Role Model?

Google is on a mission to teach children how to be safe online. That is the message behind “Be Internet Awesome,” a so-called digital-citizenship education program that the technology giant developed for schools.

The lessons include a cartoon game branded with Google’s logo and blue, red, yellow and green color palette. The game is meant to help students from third grade through sixth guard against schemers, hackers and other bad actors.

But critics say the company’s recent woes — including revelations that it was developing a censored version of its search engine for the Chinese market and had tracked the whereabouts of users who had explicitly turned off their location history — should disqualify Google from promoting itself in schools as a model of proper digital conduct.

Among other things, these critics argue, the company’s lessons give children the mistaken impression that the main threat they face online is from malicious hackers and bullies, glossing over the privacy concerns that arise when tech giants like Google itself collect users’ personal information and track their actions online.

American corporate giants are no strangers to the country’s schools.

In the 1970s, General Motors circulated a free booklet in public schools that featured cartoon characters like Harry Hydrocarbon, who played down concerns about the health risks of industrial pollution and suggested that air pollution would soon not be a problem, according to a 1979 report, “Hucksters in the Classroom: A Review of Industry Propaganda in Schools.”

In the 1990s, Procter & Gamble promoted its own curriculum, “Decision: Earth,” in schools. Among other things, it instructed children that synthetic diapers were no more harmful for the environment than cloth diapers.

Around the same time, Campbell Soup sponsored a classroom kit called the “Prego Thickness Experiment.” According to a 1997 article in The New York Times, “Corporate Classrooms and Commercialism,” the kit was supposed to teach children the scientific method — by having them “prove” that Prego pasta sauce was thicker than rival Ragu.

Critics see a similar self-serving agenda with “Be Internet Awesome,” which presents malicious third parties as the primary online threat to children, while failing to teach them how to navigate corporate data-mining practices.

Silicon Valley’s dirty secret: Using a shadow workforce of contract employees to drive profits

As the gig economy grows, the ratio of contract workers to regular employees in corporate America is shifting. Google, Facebook, Amazon, Uber and other Silicon Valley tech titans now employ thousands of contract workers to do a host of functions — anything from sales and writing code to managing teams and testing products. This year at Google, contract workers outnumbered direct employees for the first time in the company’s 20-year history.

It’s not only in Silicon Valley. The trend is on the rise as public companies look for ways to trim HR costs or hire in-demand skills in a tight labor market. The U.S. jobless rate dropped to 3.7 percent in September, the lowest since 1969, down from 3.9 percent in August, according to the Bureau of Labor Statistics.

Some 57.3 million Americans, or 36 percent of the workforce, are now freelancing, according to a 2017 report by Upwork. In San Mateo and Santa Clara counties alone, there are an estimated 39,000 workers who are contracted to tech companies, according to one estimate by University of California Santa Cruz researchers.

Spokespersons at Facebook and Alphabet declined to disclose the number of contract workers they employ. A spokesperson at Alphabet cited two main reasons for hiring contract or temporary workers. One reason is when the company doesn’t have or want to build out expertise in a particular area such as doctors, food service, customer support or shuttle bus drivers. Another reason is a need for temporary workers when there is a sudden spike in workload or to cover for an employee who is on leave.