Archives May 2018

Google plan for data-driven ‘smart city’ sparks privacy, democracy concerns

In the Canadian city of Toronto, officials are negotiating a project that would hand a section of the city’s waterfront to the US tech giant Google. Under the arrangement, Google affiliate Sidewalk Labs will build and run a high-tech “radical mixed-use” site called Quayside. This “smart city” plan involves creating a neighbourhood “from the internet up”, powered and served by data, with sensors monitoring everything from air quality to pedestrian traffic, even the flushing of toilets. Amenities like garbage disposal and goods delivery are to be coordinated and driven by AI and robotics.

The proposed parcel of land isn’t huge, but it’s not insubstantial either – it covers about half a square kilometre, and there are already suggestions it could be extended.

For Eric Schmidt, executive chairman of Alphabet — the parent company of both Google and Sidewalk Labs — it’s the culmination of a long-held ambition.

“Give us a city and put us in charge,” he once famously declared.

Following the Facebook/Cambridge Analytica scandal, some, like Dr Jathan Sadowski at the University of Sydney, worry about the implications of putting a private tech company in charge of both urban development and urban life.

“What’s in it for them? It’s data,” he says. “It allows them to get really massive amounts of granular data about urban life and urban environments.”

“You’ll have a city that’s based on, or built around, proprietary platforms, data harvesting, corporate control.”

Screen watching at all-time high

With Netflix and Amazon Prime, Facebook Video and YouTube, it’s tempting to imagine that the tech industry destroyed TV. The world is more than 25 years into the web era, after all, more than half of American households have had home Internet for 15 years, and the current smartphone paradigm began more than a decade ago. But no. Americans still watch an absolutely astounding amount of traditional television.

In fact, television viewing didn’t peak until 2009-2010, when the average American household watched 8 hours and 55 minutes of TV per day. And the ’00s saw the greatest growth in TV viewing time of any decade since Nielsen began keeping track in 1949-1950: Americans watched 1 hour and 23 minutes more television at the end of the decade than at the beginning. Run the numbers and you’ll find that 32 percent of the increase in viewing time from the birth of television to its peak occurred in the first years of the 21st century.
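A quick sketch, using only the figures quoted above, recovers the implied 1949-50 baseline:

```python
# Back-of-the-envelope check of the viewing-time arithmetic above.
# All inputs come from the figures quoted in this piece.

peak = 8 * 60 + 55           # 2009-10 peak: 8h55m per household per day (minutes)
decade_growth = 1 * 60 + 23  # growth across the '00s: 1h23m (minutes)

# The '00s account for 32% of the total increase from TV's birth to its peak.
total_increase = decade_growth / 0.32      # ~259 minutes
implied_baseline = peak - total_increase   # ~276 minutes

print(f"Implied 1949-50 viewing time: {implied_baseline / 60:.1f} hours/day")
# -> Implied 1949-50 viewing time: 4.6 hours/day
```

In other words, the average household was already watching about four and a half hours a day when Nielsen started counting, and television roughly doubled that over sixty years.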

Over the last 8 years, all the new, non-TV things — Facebook, phones, YouTube, Netflix — have only cut about an hour per day from the dizzying amount of TV that the average household watches. Americans are still watching more than 7 hours and 50 minutes per household per day.

Can An Individual Still Resist The Spread of Technology?

When cellphones first appeared, they gave people one more means of communication, which they could accept or reject. But before long, most of us began to feel naked and panicky anytime we left home without one. To do without a cellphone — and soon, if not already, a smartphone — means estranging oneself from normal society. We went from “you can have a portable communication device” to “you must have a portable communication device” practically overnight… Today most people are expected to be instantly reachable at all times. These devices have gone from servants to masters…

Few of us would be willing to give up modern shelter, food, clothing, medicine, entertainment or transportation. Most of us would say the trade-offs are more than worth it. But they happen whether they are worth it or not, and the individual has little power to resist. Technological innovation is a one-way street. Once you enter it, you are obligated to proceed, even if it leads someplace you would not have chosen to go.

The column argues “the iPhone X proves the Unabomber was right,” citing this passage from the 1996 manifesto of the anti-technology terrorist. “Once a technical innovation has been introduced, people usually become dependent on it, so that they can never again do without it, unless it is replaced by some still more advanced innovation. Not only do people become dependent as individuals on a new item of technology, but, even more, the system as a whole becomes dependent on it.”

Targeted advertising hits emergency rooms

Patients sitting in emergency rooms, at chiropractors’ offices and at pain clinics in the Philadelphia area may start noticing on their phones the kind of messages typically seen on highway billboards and public transit: personal injury law firms looking for business by casting mobile online ads at patients.

The potentially creepy part? They’re being fed the ad only because somebody knows they are in an emergency room.

The technology behind the ads, known as geofencing, or placing a digital perimeter around a specific location, has been deployed by retailers for years to offer coupons and special offers to customers as they shop. Bringing it into health care spaces, however, is raising alarm among privacy experts.
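Mechanically, a geofence is nothing exotic: a point-in-radius test against the coordinates a device reports. A minimal sketch, with the venue location and radius invented for illustration (real ad platforms run equivalent logic inside their bidding systems):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    r = 6_371_000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def in_geofence(device, fence_centre, radius_m):
    """True if a device's reported position falls inside the fence."""
    return haversine_m(*device, *fence_centre) <= radius_m

# Hypothetical fence around an emergency department (coordinates invented).
er_entrance = (39.9496, -75.1580)
phone = (39.9497, -75.1582)
print(in_geofence(phone, er_entrance, radius_m=100))  # True -> serve the ad
```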

High School in China Installs Facial Recognition Cameras to Monitor Students’ Attentiveness

A high school in Hangzhou City, Zhejiang Province, on the eastern coast of China, has employed facial recognition technology to monitor students’ attentiveness in class.

At Hangzhou Number 11 High School, three cameras at the front of the classroom scan students’ faces every 30 seconds, analyzing their facial expressions to detect their mood, according to a May 16 report in the state-run newspaper The Paper.

The different moods—surprised, sad, antipathy, angry, happy, afraid, neutral—are recorded and averaged during each class.

A display screen, visible only to the teacher, shows the data in real time; a certain value is taken to indicate that a student is not paying enough attention.
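The report doesn’t say how that value is computed. Purely as an illustration of how such a score could work, here is one plausible scheme; the weights and threshold are invented, not the school’s:

```python
# Hypothetical reconstruction -- the school's actual scoring is not public.
MOOD_WEIGHTS = {
    "happy": 1.0, "neutral": 1.0, "surprised": 0.8,
    "sad": 0.4, "afraid": 0.4, "angry": 0.2, "antipathy": 0.2,
}
ATTENTION_THRESHOLD = 0.6  # assumed cut-off

def attention_score(scans):
    """Average the weights of the expressions detected every 30 seconds."""
    return sum(MOOD_WEIGHTS[m] for m in scans) / len(scans)

scans = ["neutral", "neutral", "happy", "sad", "antipathy", "neutral"]
score = attention_score(scans)
print(f"score={score:.2f}:", "flagged" if score < ATTENTION_THRESHOLD else "ok")
```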

A video shot by Zhejiang Daily Press revealed that the system — dubbed the “smart classroom behavior management system” by the school — also analyzes students’ actions, categorized as reading, listening, writing, standing up, raising hands, and leaning on the desk.

An electronic screen also displays a list of student names deemed “not paying attention.”

The school began using the technology at the end of March, vice principal Zhang Guanchao told The Paper. Zhang added that students felt like they were being monitored when the system was first put in place, but have since gotten used to it.

How the “Math Men” Overthrew the “Mad Men”

Once, Mad Men ruled advertising. They’ve now been eclipsed by Math Men — the engineers and data scientists whose province is machines, algorithms, pureed data, and artificial intelligence. Yet Math Men are beleaguered, as Mark Zuckerberg demonstrated when he humbled himself before Congress, in April. Math Men’s adoration of data — coupled with their truculence and an arrogant conviction that their ‘science’ is nearly flawless — has aroused government anger, much as Microsoft did two decades ago.

The power of Math Men is awesome. Google and Facebook each have a market value exceeding the combined value of the six largest advertising and marketing holding companies. Together, they claim six out of every ten dollars spent on digital advertising, and nine out of ten new digital ad dollars. They have become increasingly dominant in a global advertising and marketing business estimated to be worth up to two trillion dollars annually. Facebook alone generates more ad dollars than all of America’s newspapers, and Google has twice the ad revenues of Facebook.

Bitcoin driving huge electricity demand, environmental impact

In a normal year, demand for electric power in Chelan County grows by perhaps 4 megawatts — enough for around 2,250 homes — as new residents arrive and as businesses start or expand. But since January 2017, as Bitcoin enthusiasts bid up the price of the currency, eager miners have requested a staggering 210 megawatts for mines they want to build in Chelan County. That’s nearly as much as the county and its 73,000 residents were already using. And because it is a public utility, the staff of the county’s public utility district (PUD) is obligated to consider every request.

The scale of some new requests is mind-boggling. Until recently, the largest mines in Chelan County used five megawatts or less. In the past six months, by contrast, miners have requested loads of 50 megawatts and, in several cases, 100 megawatts. By comparison, a fruit warehouse uses around 2.5 megawatts.
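The article’s own ratios make the scale easy to check:

```python
# Scale check using only the figures quoted above.
homes_per_mw = 2_250 / 4   # the county's ratio: 4 MW serves ~2,250 homes
requested_mw = 210         # mining load requested since January 2017

print(f"{requested_mw} MW is roughly {requested_mw * homes_per_mw:,.0f} homes' worth")
# -> 210 MW is roughly 118,125 homes' worth, in a county of 73,000 residents

# A single 100 MW request versus a typical industrial load:
print(f"One 100 MW mine draws as much as {100 / 2.5:.0f} fruit warehouses")
# -> One 100 MW mine draws as much as 40 fruit warehouses
```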

Facebook accused of conducting mass surveillance through its apps

Facebook used its apps to gather information about users and their friends, including some who had not signed up to the social network, reading their text messages, tracking their locations and accessing photos on their phones, a court case in California alleges.

The claims of what would amount to mass surveillance are part of a lawsuit brought against the company by the former startup Six4Three, set out in legal documents filed at the superior court in San Mateo as part of a case that has been ongoing for more than two years.

It alleges that Facebook used a range of methods, some adapted to the different phones that users carried, to collect information it could use for commercial purposes.

“Facebook continued to explore and implement ways to track users’ location, to track and read their texts, to access and record their microphones on their phones, to track and monitor their usage of competitive apps on their phones, and to track and monitor their calls,” one court document says.

But all details about the mass surveillance scheme have been redacted at Facebook’s request in Six4Three’s most recent filings. Facebook claims these are confidential business matters.

Other alleged projects included one to remotely activate Bluetooth, allowing the company to pinpoint a user’s location without them explicitly agreeing to it. Another involved the development of privacy settings with an early end date that was not flagged to users, letting them expire without notice, the court documents claim.

Facebook recently admitted that it had collected call and text message data from users, but said it did so only with prior consent. However, the Guardian has reported that it logged some messages without explicitly notifying users. The company could not see text messages for iPhone users, but could access any photos taken on a phone or stored in the built-in “camera roll” archive, the court case alleged. Facebook has not disclosed how those photos were analysed.

The complainant alleges that Facebook has not fully disclosed the manner in which it pre-processes photos on the iOS camera roll, meaning that if a user has any Facebook app installed on their iPhone, Facebook accesses and analyses the photos the user takes or stores on the device.

New York high school will use CCTV and facial recognition to enforce discipline

Next year, high schools in Lockport, New York, will use the “Aegis” CCTV and facial recognition system to track and record the interactions of students suspected of code-of-conduct violations, keeping a ledger of who speaks to whom, where, and for how long.

The record will be used to assemble evidence against students and to identify possible accomplices.

Lockport Superintendent Michelle T. Bradley justified the decision by noting, “We always have to be on our guard. We can’t let our guard down.”

Lockport will be the first school district in the world to subject its students to this kind of surveillance. The program will cost $1.4m in state money. The technology supplier is SN Technologies of Gananoque, Ont., one of the companies in the vicinity of Kingston, Ontario, home to the majority of the province’s detention centers.

The Lockport district says that the system will make students safer by alerting officials if someone on a sex-offender registry or terrorist watchlist enters the property. Yet none of America’s school shootings or high-profile serial sex-abuse scandals were carried out by wanted terrorists or people on the sex-offender registry.

Deployed law-enforcement facial recognition systems have failure rates of 98%. The vendor responsible for Aegis would not disclose how they improved on the state of the art, but insisted that their product worked “99.97% of the time.” The spokesperson would not disclose any of the workings of the system, seemingly believing that doing so was antithetical to security.

London cops are using an unregulated, 98% inaccurate facial recognition tech

The London Metropolitan Police use a facial recognition system whose alerts have a 98% false positive rate; people falsely identified by the system are stopped, questioned and treated with suspicion.
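A vendor-quoted per-face accuracy and a 98% false alert rate can both be true at once, because genuine matches are rare in any crowd. A toy base-rate calculation, with every input assumed for illustration rather than taken from the Met:

```python
# Why an accurate-sounding matcher still produces overwhelmingly false alerts.
# All numbers below are assumptions for illustration, not the Met's figures.

crowd = 100_000      # faces scanned at an event
targets = 10         # genuine watchlist matches actually present
hit_rate = 0.90      # chance a real target triggers an alert (assumed)
false_alarm = 0.002  # chance an innocent face triggers an alert (assumed)

true_alerts = targets * hit_rate                 # ~9
false_alerts = (crowd - targets) * false_alarm   # ~200

share_wrong = false_alerts / (true_alerts + false_alerts)
print(f"Share of alerts that are wrong: {share_wrong:.0%}")
# -> Share of alerts that are wrong: 96%
```

A per-face false-alarm rate of just 0.2% swamps the handful of true hits, so nearly every person stopped is innocent.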

The UK has a biometrics commissioner, Professor Paul Wiles, who laments the lack of any regulation of this technology, calling regulation “urgently needed”. These rules are long promised and badly overdue, and the Home Office admits they are likely to be delayed beyond their revised June publication date.

The Met say that they don’t “arrest” people who are erroneously identified by the system. Rather, they “detain” them by refusing to allow them to leave and subjecting them to searches, etc.

Incredibly, the Met’s system is even worse than the South Wales Police’s facial recognition system, which has a comparatively impressive 92% failure rate.

Researchers Create First Flying Wireless Robotic Insect

You might remember RoboBee, an insect-sized robot that flies by flapping its wings. Unfortunately, though, it has to be hard-wired to a power source. Well, one of RoboBee’s creators has now helped develop RoboFly, which flies without a tether.

Slightly heavier than a toothpick, RoboFly was designed by a team at the University of Washington — one member of that team, assistant professor Sawyer Fuller, was also part of the Harvard University team that first created RoboBee. That flying robot receives its power via a wire attached to an external power source, as an onboard battery would simply be too heavy to allow the tiny craft to fly.

Instead of a wire or a battery, RoboFly is powered by a laser. That laser shines on a photovoltaic cell, which is mounted on top of the robot. On its own, that cell converts the laser light to just seven volts of electricity, so a built-in circuit boosts that to the 240 volts needed to flap the wings. That circuit also contains a microcontroller, which tells the robot when and how to flap its wings — on RoboBee, that sort of “thinking” is handled via a tether-linked external controller.
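For the electronics-minded: the article doesn’t specify the boost topology, and a 34x step-up would more plausibly involve a transformer or multiplier stage than a plain boost converter, but the ideal-boost equation gives a feel for how aggressive that conversion is:

```python
# Rough feel for the 7 V -> 240 V conversion, assuming an ideal,
# continuous-conduction boost converter (an assumption, not the
# published design): V_out = V_in / (1 - D).
v_in, v_out = 7.0, 240.0

duty_cycle = 1 - v_in / v_out
print(f"Voltage gain: {v_out / v_in:.0f}x")      # -> 34x
print(f"Required duty cycle: {duty_cycle:.1%}")  # -> 97.1%
```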

A quarter of Americans spend all day inside

About 25 percent of Americans hardly ever venture outside, unaware or unconcerned about breathing only stale indoor air, a report says.

In an age when nearly everything can be found (and delivered) online — including food, entertainment and relationships — it’s hardly surprising to discover an “indoor generation.”

According to the federal Bureau of Labor Statistics, a nine-hour workday is the average for American wage-earners. When they return home on a typical day, 85 percent of women and 67 percent of men spend time doing work around the house.

Leisure time has become synonymous with television viewing, according to the federal data. Many Americans spend nearly three hours a day in front of the tube, and teenagers spend more than half of their leisure time with screens.

… Researchers surveyed 16,000 people from 14 countries in Europe and North America about their knowledge and perceptions of indoor/outdoor air quality and the amount of time they spend inside.

Among the Americans surveyed, one-quarter said they spend 21 to 24 hours a day inside, 20 percent said they spend 19 to 20 hours inside, and 21 percent said they spend 15 to 18 hours inside.

Thirty-four percent said they spend zero to 14 hours inside.

Other countries with similar results to the U.S. were Britain and Canada, with 23 percent and 26 percent of their respondents respectively saying they spend 21 to 24 hours indoors.

Countries with the highest percentage of people who spend the least amount of time inside were Italy (57 percent), the Czech Republic (57 percent) and the Netherlands (51 percent). This group said they spend zero to 14 hours indoors.

Why the Facebook ‘scandal’ impacts you more than you think

It’s not just the data you choose to share.

By now we all know the story: Facebook allowed apps on its social media platform that enabled a shady outfit called Cambridge Analytica to scrape the profiles of 87 million users, in order to serve up targeted ads to benefit the Trump election campaign in 2016. More than 300,000 Australian users of Facebook were caught up in the data harvesting.

But serving up ads in a foreign election campaign is not the whole story.  Facebook, and other companies involved in data mining, are invading our privacy and harming us economically and socially, in ways that are only just starting to become clear.

And it’s not just the data you choose to share. The information you post is not the whole story.  It’s only the tip of the iceberg of data that Facebook has collected about you.

Every time you go online you leave a trail of digital breadcrumbs.  Facebook has been busily sweeping up those breadcrumbs, and using them to categorise and profile you.  Facebook obviously knows when you click on a Facebook ‘like’ button; but also, unless a web developer has gone out of their way to find tools to block them (as we have done for our Salinger Privacy blog), Facebook knows every time you simply look at a website that has a Facebook ‘like’ button somewhere on it.
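Mechanically, this is possible because the ‘like’ button is loaded from Facebook’s own servers: every page view fires a request that carries your Facebook cookie along with a Referer header naming the page you are reading. A toy sketch of what aggregating those widget-load logs could look like, with all entries invented for illustration:

```python
from collections import defaultdict

# Each embedded widget load arrives at the platform's servers as a request
# carrying the user's cookie and the Referer of the page being viewed.
# These log entries are invented for illustration.
widget_requests = [
    {"cookie": "user_123", "referer": "https://example-news.com/dealing-with-infidelity"},
    {"cookie": "user_123", "referer": "https://example-health.com/managing-anxiety"},
    {"cookie": "user_123", "referer": "https://example-shop.com/shoes/sale"},
]

profiles = defaultdict(list)
for req in widget_requests:
    profiles[req["cookie"]].append(req["referer"])

# The user never clicked 'like' on any of these pages; loading them was enough.
print(profiles["user_123"])
```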

So if you only post or ‘like’ stories about inspirational mountain climbers and funny cat videos, but also do things online that you don’t share with your family, friends or work colleagues (like looking at stories about abortion or dealing with infidelity, Googling how to manage anxiety or erectile dysfunction, whingeing about your employer in a chatroom, or spending hours reviewing dating profiles, gambling or shopping obsessively for shoes)  — Facebook has you pegged anyway.

Plus, Facebook obtains data from other sources which know about your offline purchases, to build an even richer picture of who you really are.  And of course, Facebook may have access to your address book, your location history, the contents of your private messages, and depending on your brand of phone, possibly even a history of your phone calls and text messages.

All that information is used to draw inferences and assumptions about your preferences, and predict your likely behaviour.  The results are then used to categorise, profile and ultimately target you, in a process usually described as ‘online behavioural advertising’.

It’s not ‘just ads’

The objective of online behavioural advertising is to predict your purchasing interests and drive a purchase decision.  So far, the same as any other advertising.  But online, the implications for us as individuals are much greater.

Facebook’s promise to advertisers is that it can show their ad to exactly who the advertiser wants, and exclude everybody else.

However, by allowing exclusion, the platform also allows discrimination.  Facebook has been caught allowing advertisers to target — and exclude — people on the basis of their ‘ethnic affinity’, amongst other social, demographic, racial and religious characteristics.  So a landlord with an ad for rental housing could prevent people profiled as ‘single mothers’ from ever seeing their ad.  An employer could prevent people identifying as Jewish from seeing a job ad.  A bank could prevent people categorised as African Americans from seeing an ad for a home loan.

Existing patterns of social exclusion, economic inequality and discrimination are further entrenched by micro-targeted advertising, which is hidden from public view and regulatory scrutiny.

Predictive analytics can narrow or alter your life choices

Once we move beyond straight-up advertising and into predictive analytics, the impact on individual autonomy becomes more acute.  Big Data feeds machine learning, which finds patterns in the data, from which new rules (algorithms) are designed.  Algorithms predict how a person will behave, and suggest how they should be treated.
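Stripped of scale, that pipeline is mundane. A minimal sketch with invented features and labels; real systems ingest thousands of signals, but the shape is the same:

```python
# Toy version of 'data in, behavioural prediction out'.
# Features, labels and the at-risk definition are all invented.
from sklearn.linear_model import LogisticRegression

# Each row: [visits_per_day, late_night_sessions, purchases_last_month]
X = [[12, 5, 0], [3, 0, 2], [20, 9, 0], [1, 0, 4], [15, 7, 1], [2, 1, 3]]
y = [1, 0, 1, 0, 1, 0]  # 1 = labelled 'at risk' from past behaviour

model = LogisticRegression().fit(X, y)

new_user = [[14, 6, 0]]
risk = model.predict_proba(new_user)[0][1]
print(f"Predicted risk: {risk:.0%}")  # this score then drives the 'treatment'
```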

Algorithms can lead to price discrimination, like surge pricing based on Uber knowing how much phone battery life you have left.  Or market exclusion, like Woolworths only offering car insurance to customers it has decided are low risk, based on an assessment of the groceries they buy.

Banks have been predicting the risk of a borrower defaulting on a loan for decades, but now algorithms are also used to determine who to hire, predict when a customer is pregnant, and deliver targeted search results to influence how you vote.

Algorithms are also being used to predict which students are at risk of failure, which prisoners are at risk of re-offending, and who is at risk of suicide, and then to launch interventions accordingly. However, even leaving aside the accuracy of those predictions, interventions are not necessarily well-intentioned. It was revealed last year that Australian Facebook executives were touting to advertisers their ability to target psychologically vulnerable teenagers.

Automated decision-making diminishes our autonomy, by narrowing or altering our market and life choices, in ways that are not clear to us.  People already in a position of economic or social disadvantage face the additional challenge of trying to disprove or beat an invisible algorithm.

In a predictive and pre-emptive world, empathy, forgiveness, rehabilitation, redemption, individual dignity, autonomy and free will are programmed out of our society.

Fiddling with users’ privacy settings on Facebook won’t fix anything.  If we want our lives to be ruled by human values and individual dignity, instead of by machines fed on questionable data, we need robust, enforced and globally effective privacy laws.

A new European privacy law commences later this month. Among its obligations: businesses and governments must offer understandable explanations of how their algorithms work, and allow people to seek human review of automated decision-making. This is a step in the right direction, which Australia, the US and the rest of the world should follow.

Google hasn’t stopped reading your e-mails

If you’re a Gmail user, your messages likely aren’t as private as you’d think. Google reads each and every one (even if you definitely don’t), scanning your painfully long email chains and vacation responders in order to collect more data on you. Google uses the data gleaned from your messages to inform a whole host of other products and services, NBC News reported Thursday.

Though Google announced last July that it would stop using consumer Gmail content for ad personalization, the language permitting it to do so is still included in its current privacy policy, and it without a doubt still scans users’ emails for other purposes. Aaron Stein, a Google spokesperson, told NBC that Google also automatically extracts keyword data from users’ Gmail accounts, which is then fed into machine learning programs and other products within the Google family. Stein told NBC that Google also “may analyze [email] content to customize search results, better detect spam and malware,” a practice the company first announced back in 2012.

“We collect information about the services that you use and how you use them…” says Google’s privacy policy. “This includes information like your usage data and preferences, Gmail messages, G+ profile, photos, videos, browsing history, map searches, docs, or other Google-hosted content. Our automated systems analyze this information as it is sent and received and when it is stored.”

While Google doesn’t sell this information to third parties, it has used it to power its own advertising network and inform search results, among other things. And this is far from a closely guarded secret. The company has included disclosures relating to these practices in its privacy policy since at least 2012: “When you share information with us, for example by creating a Google Account, we can make those services even better – to show you more relevant search results and ads…,” says Google’s March 2012 privacy policy.

Social media copies gambling methods to create psychological cravings

Social media platforms are using the same techniques as gambling firms to create psychological dependencies and ingrain their products in the lives of their users, experts warn.

These methods are so effective they can activate similar mechanisms as cocaine in the brain, create psychological cravings and even invoke “phantom calls and notifications” where users sense the buzz of a smartphone, even when it isn’t really there.

“Facebook, Twitter and other companies use methods similar to the gambling industry to keep users on their sites,” said Natasha Schüll, the author of Addiction by Design, which reported how slot machines and other systems are designed to lock users into a cycle of addiction. “In the online economy, revenue is a function of continuous consumer attention – which is measured in clicks and time spent.”

Whether it’s Snapchat streaks, Facebook photo-scrolling, or playing Candy Crush, Schüll explained, you get drawn into “ludic loops”, or repeated cycles of uncertainty, anticipation and feedback — and the rewards are just enough to keep you going.

“If you disengage, you get peppered with little messages or bonus offers to get your attention and pull you back in,” said Schüll. “We have to start recognising the costs of time spent on social media. It’s not just a game – it affects us financially, physically and emotionally.”

Recreating the slot machine

The pull-to-refresh and infinite-scroll mechanisms on our news feeds are unnervingly similar to a slot machine, said Tristan Harris, a former design ethicist for Google who has been described as the closest thing Silicon Valley has to a conscience.

“You pull a lever and immediately receive either an enticing reward (a match, a prize!) or nothing,” Harris wrote.

We cannot know when we will be rewarded, and more often than not we don’t find anything interesting or gratifying, much like gambling. But that’s precisely what keeps us coming back.

“The rewards are what psychologists refer to as variable reinforcement schedules, and they are the key to social media users repeatedly checking their screens,” said Dr Mark Griffiths, a professor of behavioural addiction and director of Nottingham Trent University’s International Gaming Research Unit.

“Social media sites are chock-a-block with unpredictable rewards. They are trying to grab users’ attentions … to make social media users create a routine and habitually check their screens.”
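A variable reinforcement schedule is trivial to implement, which is partly the point. A toy simulation, with the reward probability invented:

```python
import random

random.seed(42)  # reproducible run

def refresh(p_reward=0.3):
    """Each pull-to-refresh pays off unpredictably, like a slot-machine lever."""
    return random.random() < p_reward

pulls = ["reward" if refresh() else "nothing" for _ in range(12)]
print(pulls)
# An unpredictable mix of hits and misses. It is the unpredictability,
# not the average payout, that keeps people pulling.
```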

Like gambling, which physically alters the brain’s structure and makes people more susceptible to depression and anxiety, social media use has been linked to depression, and its potential to have an adverse psychological impact on users cannot be overlooked or underestimated.

For instance, phone dependency, driven by high social-media usage, can lead us to think our phone is vibrating, or that we have received a message, even when we haven’t.

“Phantom calls and notifications are linked to our psychological craving for such signals,” said Professor Daniel Kruger, an expert in human behaviour, from the University of Michigan. “These social media messages can activate the same brain mechanisms as cocaine [does] and this is just one of the ways to identify those mechanisms because our minds are a physiological product of our brain.”

“There are whole departments trying to design their systems to be as addictive as possible. They want you to be permanently online and by bombarding you with messages and stimuli try to redirect your attention back to their app or webpage.”

Tech insiders have previously said “our minds can be hijacked” and that Silicon Valley is addicting us to our phones, while some have confessed they ban their kids from using social media.

However, the number of monthly active users of Facebook hit 2.13 billion earlier this year, up 14% from a year ago. Despite the furore around its data privacy issues, the social media monolith posted record revenues for the first quarter of 2018, making $11.97bn, up 49% on last year.

A key reason for this is that Facebook has become so entrenched in our lives: we can’t put it down.

Behavioural psychologist Nir Eyal, author of Hooked: How to Build Habit-Forming Products, has conceptualised how people become attached to social media.

“It starts with a trigger, an action, a reward and then an investment, and it’s through successive cycles, through these hooks, that habits are formed. We see them in all sorts of products, certainly in social media and gambling. This is a big part of how habits are changed.”

Once a habit is formed, something previously prompted by an external trigger, like a notification, email, or any sort of ring or ding, is no longer needed, Eyal remarked.

It is replaced or supplemented with an internal trigger, meaning that we form a mental association between wanting to use the product and seeking to serve an emotional need.

“The products are built to be engaging and what’s engaging for some is addictive for others, that’s clear.”