Archives: February 2017

Social Media are “driving Americans insane”

“If you pull out your phone to check Twitter while waiting for the light to change, or read e-mails while brushing your teeth, you might be what the American Psychological Association calls a “constant checker.” And chances are, it’s hurting your mental health.

Last week, the APA released a study finding that Americans were experiencing the first statistically significant stress increase in the survey’s 10-year history. In January, 57 percent of respondents of all political stripes said the U.S. political climate was a very or somewhat significant source of stress, up from 52 percent in August. On Thursday, the APA released the second part of its findings, “Stress in America: Coping With Change,” examining the role technology and social media play in American stress levels.

Social media use has skyrocketed, from 7 percent of American adults in 2005 to 65 percent in 2015. For those aged 18 to 29, the increase is even larger: from 12 percent to a remarkable 90 percent. But while rising social media usage is hardly surprising, the number of people who just can’t tear themselves away is stark: 43 percent of Americans now say they check their e-mails, texts, or social media accounts constantly. And their stress levels show it: on a 10-point scale, constant checkers reported an average stress level of 5.3, versus 4.4 for everyone else.

If the first step toward recovery, however, is admitting there is a problem, Americans are on their way. Some 65 percent of respondents said “unplugging” or taking a “digital detox” is important. But alas, knowing you have a problem is not the same as fixing it: Only 28 percent of those Americans say they take their own advice.

German watchdog tells parents to destroy Wi-Fi-connected doll

“A German government watchdog has ordered parents to “destroy” an internet-connected doll for fear it could be used as a surveillance device. According to a report from BBC News, the German Federal Network Agency said the doll (which contains a microphone and speaker) was equivalent to a “concealed transmitting device” and therefore prohibited under German telecom law.

The doll in question is “My Friend Cayla,” a toy which has already been the target of consumer complaints in the EU and US. In December last year, privacy advocates said the toy recorded kids’ conversations without proper consent, violating the Children’s Online Privacy Protection Act.

Cayla uses a microphone to listen to questions, sending this audio over Wi-Fi to a third-party company (Nuance) that converts it to text. This is then used to search the internet, allowing the doll to answer basic questions, like “What’s a baby kangaroo called?” as well as play games. In addition to privacy concerns over data collection, security researchers found that Cayla can be easily hacked. The doll’s insecure Bluetooth connection can be compromised, letting a third party record audio via the toy, or even speak to children using its voice.
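
Based only on the description above, the doll’s request flow can be sketched in a few lines of Python. This is a hedged reconstruction: the endpoint URLs, payload shapes, and function names below are invented placeholders, not Genesis Toys’ or Nuance’s actual APIs.

```python
import requests  # generic HTTP client; any would do

# Hypothetical endpoints standing in for the services described above.
SPEECH_TO_TEXT_URL = "https://stt.example.com/recognize"  # stand-in for Nuance
SEARCH_URL = "https://search.example.com/answer"          # stand-in for web search

def answer_question(audio_bytes: bytes) -> str:
    # 1. Raw audio of the child's question leaves the home for a third-party
    #    speech-to-text service, the step privacy advocates object to.
    text = requests.post(SPEECH_TO_TEXT_URL, data=audio_bytes).json()["text"]
    # 2. The transcript becomes a search query; the doll speaks the answer.
    return requests.get(SEARCH_URL, params={"q": text}).json()["answer"]
```

The hacking finding fits the same picture: the insecure Bluetooth link sits in front of this pipeline, so a nearby attacker who connects to the doll inherits both its microphone and its speaker.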

Although the FTC has not yet taken any action against Cayla or its manufacturer, Genesis Toys, German data and privacy laws are more stringent than those in America. The legacy of the Stasi, the secret police force that built one of the most invasive mass-surveillance regimes in history in Communist East Germany, has made the country’s legislators vigilant against such infringements.”

How algorithms (secretly) run the world

“When you browse online for a new pair of shoes, pick a movie to stream on Netflix, or apply for a car loan, an algorithm likely has a say in the outcome.

These complex mathematical formulas are playing a growing role in all walks of life: from detecting skin cancers to suggesting new Facebook friends, deciding who gets a job, how police resources are deployed, who gets insurance and at what cost, or who is on a “no fly” list.

Algorithms are being used—experimentally—to write news articles from raw data, while Donald Trump’s presidential campaign was helped by behavioral marketers who used an algorithm to locate the highest concentrations of “persuadable voters.”

But while such automated tools can inject a measure of objectivity into previously subjective decisions, fears are rising over their lack of transparency, and pressure is growing to apply standards of ethics or “accountability.”

Data scientist Cathy O’Neil cautions about “blindly trusting” formulas to determine a fair outcome.

“Algorithms are not inherently fair, because the person who builds the model defines success,” she said.

O’Neil argues that while some algorithms may be helpful, others can be nefarious. In her 2016 book, “Weapons of Math Destruction,” she cites some troubling examples in the United States:

  • In 2010, public schools in Washington DC fired more than 200 teachers, including several well-respected instructors, based on scores from an algorithmic formula that evaluated their performance.
  • A man diagnosed with bipolar disorder was rejected for employment at seven major retailers after a third-party “personality” test deemed him a high risk based on its algorithmic classification.
  • Many jurisdictions are using “predictive policing” to shift resources to likely “hot spots.” O’Neil says that, depending on how data is fed into the system, this could lead to the discovery of more minor crimes and a “feedback loop” that stigmatizes poor communities (see the sketch after this list).
  • Some courts rely on computer-generated risk scores to determine jail sentences and parole, which may discriminate against minorities by taking into account “risk” factors such as their neighborhoods and friend or family links to crime.
  • In the world of finance, brokers “scrape” data from online and other sources in new ways to make decisions on credit or insurance. This too often amplifies prejudice against the disadvantaged, O’Neil argues.
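
The predictive-policing feedback loop above is easy to make concrete. Below is a minimal sketch with two invented districts that offend at identical true rates; every number is made up for illustration, and real predictive-policing systems are of course more elaborate. If patrols follow recorded crime, and patrol presence drives what gets recorded, a small initial skew never corrects, and the gap in recorded crime widens every year.

```python
# Toy model of the "feedback loop": patrols follow *recorded* crime,
# and recorded crime follows patrols. All figures are invented.
TRUE_RATE = {"district_a": 10, "district_b": 10}  # identical by construction
recorded = {"district_a": 12, "district_b": 8}    # small historical skew
PATROLS = 10

for year in range(1, 6):
    total = sum(recorded.values())
    for d in recorded:
        # Allocate patrols in proportion to past recorded crime...
        patrols = PATROLS * recorded[d] / total
        # ...and record new offenses in proportion to patrol presence.
        recorded[d] += TRUE_RATE[d] * patrols / PATROLS
    print(f"year {year}: {recorded}")
# district_a pulls further ahead each year (18 vs 12, then 24 vs 16,
# 30 vs 20, ...) even though both districts have the same underlying rate.
```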

Her findings were echoed in a White House report last year warning that algorithmic systems “are not infallible—they rely on the imperfect inputs, logic, probability, and people who design them.”

“Your browsing history alone can give away your identity”

“Researchers at Stanford and Princeton universities have found a way to connect the dots between people’s private online activity and their Twitter accounts—even for people who have never tweeted.

When the team tested the technique on 400 real people who submitted their browsing history, they were able to correctly pick out the volunteers’ Twitter profiles nearly three-quarters of the time.

Here’s how the de-anonymization system works: The researchers figured that a person is more likely to click a link that was shared on social media by a friend—or a friend of a friend—than any other random link on the internet. (Their model controls for the baseline popularity of each website.) With that in mind, and the details of an anonymous person’s browser history in hand, the researchers can compute the probability that any one Twitter user created that browsing history. People’s basic tendency to follow links they come across on Twitter unmasks them—and it usually takes less than a minute.
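
Under stated assumptions, the scoring step can be sketched as a simple likelihood ranking. Everything below is invented for illustration: the link sets, the baseline click probabilities, and the single BOOST factor are hypothetical stand-ins for the researchers’ more careful model of how much a feed raises the chance of a click.

```python
import math

# Baseline probability that any random browsing history contains each link
# (the "controls for the baseline popularity" step). Invented values.
baseline = {"nytimes.com/a": 0.05, "tinyblog.net/b": 0.001}

# Links each candidate Twitter user's feed exposed them to (hypothetical).
feeds = {
    "@alice": {"nytimes.com/a", "tinyblog.net/b"},
    "@bob":   {"nytimes.com/a"},
}

BOOST = 20  # assumption: a link in your feed is 20x likelier to be clicked

def log_likelihood(history, feed):
    """Log-probability that this feed's owner produced the browsing history."""
    score = 0.0
    for link in history:
        p = baseline[link] * (BOOST if link in feed else 1)
        score += math.log(min(p, 1.0))  # cap at certainty
    return score

history = {"nytimes.com/a", "tinyblog.net/b"}  # the anonymous browsing trace
print(max(feeds, key=lambda u: log_likelihood(history, feeds[u])))  # @alice
```

The popular link contributes almost nothing; it is the rare link, present in only one candidate’s feed, that does the unmasking. That is also why merely following people is enough to be identified, as the researchers note below.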

“You can even be de-anonymized if you just browse and follow people, without actually sharing anything.”