Archives 20 September 2018

We hold people with power to account. Why not algorithms?

Robert Jones was driving home through the pretty town of Todmorden, in West Yorkshire, when he noticed the fuel light flashing on the dashboard of his car. He had just a few miles to find a petrol station, which was cutting things rather fine, but thankfully his GPS seemed to have found a short cut – sending him on a narrow winding path up the side of the valley.

Robert followed the machine’s instructions, but as he drove, the road got steeper and narrower. After a couple of miles, it turned into a dirt track, but Robert wasn’t fazed. After all, he thought, he had “no reason not to trust the satnav”.

Just a short while later, anyone who happened to be looking up from the valley below would have seen the nose of Robert’s BMW appearing over the brink of the cliff above, saved from the 100ft drop only by the flimsy wooden fence at the edge he had just crashed into. “It kept insisting the path was a road,” he told the Halifax Courier after the incident. “So I just trusted it. You don’t expect to be taken nearly over a cliff.”

I can imagine Robert was left red-faced by his blunder, but in a way, I think he’s in good company. When it comes to placing blind faith in a piece of software, his mistake was one we’re almost all capable of making. In our urge to automate, in our eagerness to adopt the latest innovations, we appear to have developed a habit of unthinkingly handing over power to machines.

All around us, algorithms provide a kind of convenient source of authority: an easy way to delegate responsibility, a short cut we take without thinking. Who is really going to click through to the second page of Google results every time and think critically about the information that has been served up? Or go to every airline to check if a comparison site is listing the cheapest deals? Or get out a ruler and a road map to confirm that their GPS is offering the shortest route?

But already in our hospitals, our schools, our shops, our courtrooms and our police stations, artificial intelligence is silently working behind the scenes, feeding on our data and making decisions on our behalf. Sure, this technology has the capacity for enormous social good – it can help us diagnose breast cancer, catch serial killers, avoid plane crashes and, as the health secretary, Matt Hancock, has proposed, potentially save lives using NHS data and genomics. Unless we know when to trust our own instincts over the output of a piece of software, however, it also brings the potential for disruption, injustice and unfairness.

If we permit flawed machines to make life-changing decisions on our behalf – by allowing them to pinpoint a murder suspect, to diagnose a condition or take over the wheel of a car – we have to think carefully about what happens when things go wrong.

Back in 2012, a group of 16 Idaho residents with disabilities received some unexpected bad news. The Department of Health and Welfare had just invested in a “budget tool” – a swish piece of software, built by a private company, that automatically calculated their entitlement to state support. It had declared that their care budgets should be slashed by several thousand dollars each, a decision that would put them at serious risk of being institutionalised.

The problem was that the budget tool’s logic didn’t seem to make much sense. While this particular group of people had deep cuts to their allowance, others in a similar position actually had their benefits increased by the machine. As far as anyone could tell from the outside, the computer was essentially plucking numbers out of thin air.

From the inside, this wasn’t far from the truth. It would take four years and a class-action lawsuit to get to the bottom of what was going on, but when the budget tool was eventually handed over for scrutiny, a hint of what we all have to fear from the unrestricted power of machine decision-making was revealed.

The algorithm was junk. The data was riddled with errors. The calculations were so bad that the court would eventually rule its determinations unconstitutional. It had, effectively, been awarding benefits at random. And yet, when dressed up as a slick and glossy new computer program, the algorithm brought with it an air of authority that was difficult to argue against.

In the days before proper regulation of medicines, you used to be able to bottle up any coloured liquid and make a small fortune selling it as a cure-all. Today, in the still largely unfettered world of AI and algorithms, we’re seeing people make bold, unsubstantiated and sometimes irresponsible claims about their inventions.

There’s only one way to prevent this. I think it’s time we started treating machines as we would any other source of power. I would like to propose a system of regulation for algorithms, and perhaps a good place to start would be with Tony Benn’s five simple questions, designed for powerful people, but equally applicable to modern AI:

“What power have you got?
“Where did you get it from?
“In whose interests do you use it?
“To whom are you accountable?
“How do we get rid of you?”

Because, ultimately, we can’t just think of algorithms in isolation. We have to think of the failings of the people who design them – and the danger to those they are supposedly designed to serve.

A Look at the Dark Side of the Lives of Some Prominent YouTubers, Who Are Increasingly Saying They’re Stressed, Depressed, Lonely, and Exhausted

Many YouTubers are finding themselves stressed, lonely and exhausted. For years, YouTubers have believed that they are loved most by their audience when they project a chirpy, grateful image. But what happens when the mask slips? This year there has been a wave of videos by prominent YouTubers talking about their burnout, chronic fatigue and depression. “This is all I ever wanted,” said Elle Mills, a 20-year-old Filipino-Canadian YouTuber in a (monetised) video entitled Burnt Out At 19, posted in May. “And why the fuck am I so un-fucking-happy? It doesn’t make any sense. You know what I mean? Because, like, this is literally my fucking dream. And I’m fucking so un-fucking-happy.”

… The anxieties are tied up with the relentless nature of their work. Tyler Blevins, AKA Ninja, makes an estimated $500,000 every month via live broadcasts of him playing the video game Fortnite on Twitch, a service for livestreaming video games that is owned by Amazon. Most of Blevins’ revenue comes from Twitch subscribers or viewers who provide one-off donations (often in the hope that he will thank them by name “on air”). Blevins recently took to Twitter to complain that he didn’t feel he could stop streaming. “Wanna know the struggles of streaming over other jobs?” he wrote, perhaps ill-advisedly for someone with such a stratospheric income. “I left for less than 48 hours and lost 40,000 subscribers on Twitch. I’ll be back today… grinding again.” There was little sympathy on Twitter for the millionaire. But the pressure he described is felt at every level of success, from the titans of the content landscape all the way down to people with channels of just a few thousand subscribers, all of whom feel they must be constantly creating, always available and responding to their fans.

At the end of the month he was pale, gaunt and tired in a way that, he recalls, seemed “impervious to rest”. His work, he noticed, had become increasingly rushed and harsh in tone. Yet the angry, provocative quality of his videos seemed only to be making them more popular. “Divisive content is the king of online media today, and YouTube heavily boosts anything that riles people up,” he says. “It’s one of the most toxic things: the point at which you’re breaking down is the point at which the algorithm loves you the most.”

“Constant releases build audience loyalty,” says Austin Hourigan, who runs ShoddyCast, a YouTube channel with 1.2 million subscribers. “The more loyalty you build, the more likely your viewers are to come back, which gives you the closest thing to a financial safety net in what is otherwise a capricious space.” When a YouTuber passes the 1 million-subscriber mark, they are presented with a gold plaque to mark the event. Many of these plaques can be seen on shelves and walls in the background of presenters’ rooms. In this way, the size of viewership and quantity of uploads become the main markers of value.

Teens Would Rather Text Their Friends Than Talk To Them In Person, Poll Shows

A new poll of 1,141 teenagers shows that they would rather text their friends than talk to them in person. The findings come from Common Sense Media’s 2018 Social Media, Social Life survey. Fortune reports:
Only 15% of teens said Facebook was their main social media site, down from 68% in 2012. Snapchat is now the main site for 41% of teenagers, followed by Instagram at 22%. In addition, this year’s survey saw texting (35%) surpass in-person (32%) as teens’ favorite way to communicate with friends. In 2012, 49% preferred to communicate in person, versus 33% who preferred texting.

[M]ore teens said that social media had a positive effect on their levels of loneliness, depression, and anxiety than those who said it had a negative one, but it seems to have the opposite effect on teens who score low on the authors’ social-emotional well-being scale. Of those, 70% said they sometimes feel left out when using social media, 43% feel bad if no one likes or comments on their posts, and 35% said they had been cyberbullied. They were also more likely to say that social media was “extremely” or “very” important, compared to their peers who score high on the scale.

GCHQ mass surveillance violated human rights, court rules

GCHQ’s methods in carrying out bulk interception of online communications violated privacy and failed to provide sufficient surveillance safeguards, the European court of human rights (ECHR) has ruled in a test case judgment.

But the court found that GCHQ’s regime for sharing sensitive digital intelligence with foreign governments was not illegal.

It is the first major challenge to the legality of UK intelligence agencies intercepting private communications in bulk, following Edward Snowden’s whistleblowing revelations. The long-awaited ruling is one of the most comprehensive assessments by the ECHR of the legality of the interception operations run by UK intelligence agencies.

The case was brought by a coalition of 14 human rights groups, privacy organisations and journalists, including Amnesty International, Liberty, Privacy International and Big Brother Watch. In a statement published on Amnesty’s website, Lucy Claridge, Amnesty International’s Strategic Litigation Director, said today’s ruling “represents a significant step forward in the protection of privacy and freedom of expression worldwide. It sends a strong message to the UK Government that its use of extensive surveillance powers is abusive and runs against the very principles that it claims to be defending.” She added: “This is particularly important because of the threat that Government surveillance poses to those who work in human rights and investigative journalism, people who often risk their own lives to speak out. Three years ago, this same case forced the UK Government to admit GCHQ had been spying on Amnesty – a clear sign that our work and the people we work alongside had been put at risk.”

The judges considered three aspects of digital surveillance: bulk interception of communications, intelligence sharing and the obtaining of communications data from communications service providers. By a majority of five votes to two, the Strasbourg judges found that GCHQ’s bulk interception regime violated article 8 of the European convention on human rights, which guarantees privacy, because there were insufficient safeguards and the rules governing the selection of “related communications data” were deemed inadequate.