Archives November 2017

With teen mental health deteriorating over five years, screens are a likely culprit

Jean Twenge, Professor of Psychology at San Diego State University, writes:

In just the five years between 2010 and 2015, the number of U.S. teens who felt useless and joyless–classic symptoms of depression–surged 33 percent in large national surveys. Teen suicide attempts increased 23 percent. Even more troubling, the number of 13-to-18-year-olds who committed suicide jumped 31 percent.

In a new paper published in Clinical Psychological Science, my colleagues and I found that the increases in depression, suicide attempts and suicide appeared among teens from every background–more privileged and less privileged, across all races and ethnicities and in every region of the country. All told, our analysis found that the generation of teens I call “iGen” (those born after 1995) is much more likely to experience mental health issues than their millennial predecessors.

Teens now spend much less time interacting with their friends in person. Feeling socially isolated is also one of the major risk factors for suicide. We found that teens who spent more time than average online and less time than average with friends in person were the most likely to be depressed. Since 2012, that’s what has occurred en masse: Teens have spent less time on activities known to benefit mental health (in-person social interaction) and more time on activities that may harm it (time online).

Teens are also sleeping less, and teens who spend more time on their phones are more likely to not be getting enough sleep. Not sleeping enough is a major risk factor for depression, so if smartphones are causing less sleep, that alone could explain why depression and suicide increased so suddenly.

But some vulnerable teens who would otherwise not have had mental health issues may have slipped into depression due to too much screen time, not enough face-to-face social interaction, inadequate sleep or a combination of all three.

It might be argued that it’s too soon to recommend less screen time, given that the research isn’t completely definitive. However, the downside to limiting screen time – say, to two hours a day or less – is minimal. In contrast, the downside to doing nothing – given the possible consequences of depression and suicide – seems, to me, quite high.

It’s not too early to think about limiting screen time; let’s hope it’s not too late.

Over 400 of the World’s Most Popular Websites Record Your Every Keystroke

The idea of websites tracking users isn’t new, but research from Princeton University released last week indicates that online tracking is far more invasive than most users understand.

In the first installment of a series titled “No Boundaries,” three researchers from Princeton’s Center for Information Technology Policy (CITP) explain how third-party scripts that run on many of the world’s most popular websites track your every keystroke and then send that information to a third-party server.

Some highly trafficked sites run software that records every time you click and every word you type. If you go to a website, begin to fill out a form, and then abandon it, every letter you entered is still recorded, according to the researchers’ findings. If you accidentally paste something into a form that was copied to your clipboard, it’s also recorded. These scripts, or bits of code that websites run, are called “session replay” scripts. Session replay scripts are used by companies to gain insight into how their customers are using their sites and to identify confusing webpages. But the scripts don’t just aggregate general statistics: they record, and are capable of playing back, individual browsing sessions.

The scripts don’t run on every page, but are often placed on pages where users input sensitive information, like passwords and medical conditions. Most troubling is that the information session replay scripts collect can’t “reasonably be expected to be kept anonymous,” according to the researchers.
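The data flow the researchers describe can be simulated in a few lines. This is a toy sketch, not any vendor’s actual script: a recorder buffers every input event as it happens, including keystrokes in a form that is never submitted, and later flushes the whole batch to a third-party collector. All names here (`SessionRecorder`, the field names, the sample text) are hypothetical.

```python
import json

class SessionRecorder:
    """Toy model of a session-replay script: buffers every event,
    then ships the batch to a third-party collector."""
    def __init__(self):
        self.events = []

    def on_keystroke(self, field, char):
        # Every character is captured as it is typed. Even if the form
        # is abandoned, the keystrokes are already in the buffer.
        self.events.append({"type": "key", "field": field, "char": char})

    def on_paste(self, field, clipboard_text):
        # Clipboard contents pasted into a field are captured too.
        self.events.append({"type": "paste", "field": field, "text": clipboard_text})

    def flush(self):
        # A real script would POST this payload to a collector endpoint;
        # here we just serialize it to show what leaves the page.
        payload = json.dumps(self.events)
        self.events = []
        return payload

rec = SessionRecorder()
for ch in "jane.doe":  # user starts typing an email, then abandons the form
    rec.on_keystroke("email", ch)
rec.on_paste("notes", "text accidentally left on the clipboard")
payload = rec.flush()
print(payload)
```

The point of the sketch is that nothing in it waits for a “submit” action: capture happens at the event level, which is why abandoned forms and stray pastes end up on a third-party server anyway.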


You may be sick of worrying about online privacy, but ‘surveillance apathy’ is also a problem

Siobhan Lyons, Scholar in Media and Cultural Studies, Macquarie University, writes in The Conversation:

We all seem worried about privacy. Though it’s not only privacy itself we should be concerned about: it’s also our attitudes towards privacy that are important.

When we stop caring about our digital privacy, we witness surveillance apathy.

And it’s something that may be particularly significant for marginalised communities, who feel they hold no power to navigate or negotiate fair use of digital technologies.

In the wake of Edward Snowden’s 2013 NSA leaks, we are more aware of the machinations of online companies such as Facebook and Google. Yet research shows some of us are apathetic when it comes to online surveillance.

Privacy and surveillance

Attitudes to privacy and surveillance in Australia are complex.

According to a major 2017 privacy survey, around 70% of us are more concerned about privacy than we were five years ago.

And yet we still increasingly embrace online activities. A 2017 report on social media conducted by search marketing firm Sensis showed that almost 80% of internet users in Australia now have a social media profile, an increase of around ten points from 2016. The data also showed that Australians are on their accounts more frequently than ever before.

Also, most Australians appear unconcerned about the recently proposed implementation of facial recognition technology. Only around one in three (32% of 1,486) respondents to a Roy Morgan study expressed worry about having their faces stored in a mass database.

A recent ANU poll revealed a similar sentiment, with recent data retention laws supported by two thirds of Australians.

So while we’re aware of the issues with surveillance, we aren’t necessarily doing anything about it, or we’re prepared to make compromises when we perceive our safety is at stake.

Across the world, attitudes to surveillance vary. Around half of Americans polled in 2013 found mass surveillance acceptable. France, Britain and the Philippines appeared more tolerant of mass surveillance compared to Sweden, Spain, and Germany, according to 2015 Amnesty International data.

Apathy and marginalisation

In 2015, philosopher Slavoj Žižek proclaimed that he did not care about surveillance (though admittedly suggesting that “perhaps here I preach arrogance”).

This position cannot be assumed by all members of society. Australian academic Kate Crawford argues the impact of data mining and surveillance is more significant for marginalised communities, including people of different races, genders and socioeconomic backgrounds. American academics Shoshana Magnet and Kelley Gates agree, writing:

[…] new surveillance technologies are regularly tested on marginalised communities that are unable to resist their intrusion.

A 2015 White House report found that big data can be used to perpetuate price discrimination among people of different backgrounds. It showed how data surveillance “could be used to hide more explicit forms of discrimination”.

According to Ira Rubinstein, a senior fellow at New York University’s Information Law Institute, ignorance and cynicism are often behind surveillance apathy. Users are either ignorant of the complex infrastructure of surveillance, or they believe they are simply unable to avoid it.

As the White House report stated, consumers “have very little knowledge” about how data is used in conjunction with differential pricing.

So in contrast to the oppressive panopticon (a circular prison with a central watchtower) envisioned by philosopher Jeremy Bentham, we have what Siva Vaidhyanathan calls the “cryptopticon”. The cryptopticon is “not supposed to be intrusive or obvious. Its scale, its ubiquity, even its very existence, are supposed to go unnoticed”.

But Melanie Taylor, lead artist of the computer game Orwell (which puts players in the role of a surveillance agent), noted that many simply remain indifferent despite heightened awareness:

That’s the really scary part: that Snowden revealed all this, and maybe nobody really cared.

The Facebook trap

Surveillance apathy can be linked to people’s dependence on “the system”. As one of my media students pointed out, no matter how much awareness users have regarding their social media surveillance, invariably people will continue using these platforms. This is because they are convenient, practical, and “we are creatures of habit”.

As University of Melbourne scholar Suelette Dreyfus noted in a Four Corners report on Facebook:

Facebook has very cleverly figured out how to wrap itself around our lives. It’s the family photo album. It’s your messaging to your friends. It’s your daily diary. It’s your contact list.

This dependence, along with the complex algorithms Facebook and Google use to collect data and produce “filter bubbles” or “you loops”, is another issue.

Protecting privacy

While some people are attempting to delete themselves from the network, others have come up with ways to avoid being tracked online.

The search engine DuckDuckGo and the Tor Browser allow users to browse without being tracked. Lightbeam, meanwhile, allows users to see how their information is being tracked by third-party companies. And MIT devised a system called Immersion to show people the metadata of their emails.

Surveillance apathy is more disconcerting than surveillance itself. Our very attitudes about privacy will inform the structure of surveillance itself, so caring about it is paramount.

How Facebook Figures Out Everyone You’ve Ever Met

From Slashdot:

“I deleted Facebook after it recommended as People You May Know a man who was defense counsel on one of my cases. We had only communicated through my work email, which is not connected to my Facebook, which convinced me Facebook was scanning my work email,” an attorney told Gizmodo. Kashmir Hill, a reporter at the news outlet, who recently documented how Facebook figured out a connection between her and a family member she did not know existed, shares several more instances others have reported and explains how Facebook gathers information. She reports:

Behind the Facebook profile you’ve built for yourself is another one, a shadow profile, built from the inboxes and smartphones of other Facebook users. Contact information you’ve never given the network gets associated with your account, making it easier for Facebook to more completely map your social connections. Because shadow-profile connections happen inside Facebook’s algorithmic black box, people can’t see how deep the data-mining of their lives truly is, until an uncanny recommendation pops up. Facebook isn’t scanning the work email of the attorney above. But it likely has her work email address on file, even if she never gave it to Facebook herself. If anyone who has the lawyer’s address in their contacts has chosen to share it with Facebook, the company can link her to anyone else who has it, such as the defense counsel in one of her cases. Facebook will not confirm how it makes specific People You May Know connections, and a Facebook spokesperson suggested that there could be other plausible explanations for most of those examples — “mutual friendships,” or people being “in the same city/network.” The spokesperson did say that of the stories on the list, the lawyer was the likeliest case for a shadow-profile connection. Handing over address books is one of the first steps Facebook asks people to take when they initially sign up, so that they can “Find Friends.”

The problem with all this, Hill writes, is that Facebook doesn’t explicitly disclose the scale at which it uses the contact information it gleans from a user’s address book. Furthermore, most people are not aware that Facebook is using contact information taken from their phones for these purposes.
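The linking mechanism Hill describes can be sketched as a simple inverted index. This is a hypothetical illustration, not Facebook’s actual system: every address book shared during “Find Friends” is merged into a map from contact detail to the people who uploaded it, so two users who never gave the network each other’s details can still be connected through a third party’s contacts. All names and addresses below are invented.

```python
from collections import defaultdict

def build_shadow_links(uploaded_books):
    """uploaded_books maps each uploader to the set of contact details
    (emails, phone numbers) found in their address book. Returns an
    inverted index: contact detail -> set of uploaders who hold it."""
    index = defaultdict(set)
    for uploader, contacts in uploaded_books.items():
        for detail in contacts:
            index[detail].add(uploader)
    return index

def people_you_may_know(index, detail):
    # Everyone whose uploaded address book contains the same email or
    # phone number becomes a candidate connection for its owner --
    # even if the owner never gave that detail to the network.
    return index.get(detail, set())

# The lawyer never gave the network her work email, but two other
# people uploaded address books that contain it.
books = {
    "colleague": {"lawyer@firm.example", "someone@else.example"},
    "defense_counsel": {"lawyer@firm.example"},
}
index = build_shadow_links(books)
print(people_you_may_know(index, "lawyer@firm.example"))
```

The sketch shows why the connection feels uncanny: the link is created entirely from other people’s uploads, inside a data structure the profiled person never sees.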

The Seemingly Pervasive Sinister Side of Algorithmic Screen Time for Children

Writer and artist James Bridle writes in Medium:

“Someone or something or some combination of people and things is using YouTube to systematically frighten, traumatize, and abuse children, automatically and at scale, and it forces me to question my own beliefs about the internet, at every level.

To begin: Kids’ YouTube is definitely and markedly weird. I’ve been aware of its weirdness for some time. Last year, there were a number of articles posted about the Surprise Egg craze. Surprise Eggs videos depict, often at excruciating length, the process of unwrapping Kinder and other egg toys. That’s it, but kids are captivated by them. There are thousands and thousands of these videos and thousands and thousands, if not millions, of children watching them. […] What I find somewhat disturbing about the proliferation of even (relatively) normal kids videos is the impossibility of determining the degree of automation which is at work here; how to parse out the gap between human and machine.”

Sapna Maheshwari also explores in The New York Times:

“Parents and children have flocked to Google-owned YouTube Kids since it was introduced in early 2015. The app’s more than 11 million weekly viewers are drawn in by its seemingly infinite supply of clips, including those from popular shows by Disney and Nickelodeon, and the knowledge that the app is supposed to contain only child-friendly content that has been automatically filtered from the main YouTube site. But the app contains dark corners, too, as videos that are disturbing for children slip past its filters, either by mistake or because bad actors have found ways to fool the YouTube Kids algorithms. In recent months, parents like Ms. Burns have complained that their children have been shown videos with well-known characters in violent or lewd situations and other clips with disturbing imagery, sometimes set to nursery rhymes.”

Very horrible and creepy.