Resources

Amazon scraps secret AI recruiting tool that showed bias against women

An example of how “learning” machines inseparably take in the culture of their architects, à la Lewis Mumford:

“Amazon’s machine-learning specialists uncovered a big problem: their new recruiting engine did not like women. The team had been building computer programs since 2014 to review job applicants’ resumes with the aim of mechanizing the search for top talent, five people familiar with the effort told Reuters. Automation has been key to Amazon’s e-commerce dominance, be it inside warehouses or driving pricing decisions. The company’s experimental hiring tool used artificial intelligence to give job candidates scores ranging from one to five stars — much like shoppers rate products on Amazon, some of the people said. “Everyone wanted this holy grail,” one of the people said. “They literally wanted it to be an engine where I’m going to give you 100 resumes, it will spit out the top five, and we’ll hire those.” But by 2015, the company realized its new system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way. That is because Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.

[…]

Amazon edited the programs to make them neutral to these particular terms. But that was no guarantee that the machines would not devise other ways of sorting candidates that could prove discriminatory, the people said. The Seattle company ultimately disbanded the team by the start of last year because executives lost hope for the project, according to the people, who spoke on condition of anonymity.
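
The failure mode described here is easy to reproduce in miniature. Below is a hypothetical sketch (synthetic data and scikit-learn; it bears no relation to Amazon's actual system or data) of how a model trained on skewed historical hiring outcomes learns to penalise a word that merely correlates with gender:

```python
# Toy demonstration: a classifier trained on historically skewed hiring
# outcomes picks up a gender-proxy term as a negative signal.
# Entirely synthetic data -- not Amazon's system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "java developer chess club captain",
    "python engineer chess club",
    "java developer rowing team",
    "python engineer women's chess club captain",
    "java developer women's coding society",
    "python engineer women's rowing team",
]
hired = [1, 1, 1, 0, 0, 0]  # historical outcomes, reflecting a skewed pool

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The weight on "women" comes out strongly negative even though the word
# says nothing about ability: the model has encoded its training data's bias.
for term, coef in zip(vec.get_feature_names_out(), model.coef_[0]):
    print(f"{term:10s} {coef:+.2f}")
```

Deleting the offending term, as Amazon reportedly tried, is no cure: correlated words let the model rediscover the same signal, which is exactly the “no guarantee” problem the excerpt describes.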

The most striking thing about the WikiLeaks CIA data dump is how little most people cared

“On March 7, the US awoke to a fresh cache of internal CIA documents posted on WikiLeaks. They detail the spy organization’s playbook for cracking digital communications.

Snowden’s NSA revelations sent shockwaves around the world. Despite WikiLeaks’ best efforts at theatrics—distributing an encrypted folder and tweeting the password “SplinterItIntoAThousandPiecesAndScatterItIntoTheWinds”—the Vault 7 leak has elicited little more than a shrug from the media and the public, even if the spooks are seriously worried. Maybe it’s because we already assume the government can listen to everything.”

Data surveillance is all around us, and it’s going to change our behaviour

“Increasing aspects of our lives are now recorded as digital data that are systematically stored, aggregated, analysed, and sold. Despite the promise of big data to improve our lives, all-encompassing data surveillance constitutes a new form of power that poses a risk not only to our privacy, but to our free will.

A more worrying trend is the use of big data to manipulate human behaviour at scale by incentivising “appropriate” activities, and penalising “inappropriate” activities. In recent years, governments in the UK, US, and Australia have been experimenting with attempts to “correct” the behaviour of their citizens through “nudge units”.”

Nudge units: “In ways you don’t detect, [the corporation or government is] subtly influencing your decisions, pushing you towards what it believes are your (or its) best interests, exploiting the biases and tics of the human brain uncovered by research into behavioural psychology. And it is trying this in many different ways on many different people, running constant trials of different unconscious pokes and prods, to work out which is the most effective, which improves the most lives, or saves the most money. Preferably, both.”

“In his new book Inside the Nudge Unit, published this week in Britain, [David] Halpern explains his fascination with behavioural psychology.

“Our brains weren’t made for the day-to-day financial judgments that are the foundation of modern economies: from mortgages, to pensions, to the best buy in a supermarket. Our thinking and decisions are fused with emotion.”

There’s a window of opportunity for governments, Halpern believes: to exploit the gaps between perception, reason, emotion and reality, and push us the “right” way.

He gives me a recent example of the work of BI [the Behavioural Insights Team] – they were looking at police recruitment, and how to get a wider ethnic mix.

Just before applicants did an online recruitment test, in an email sending the link, BI added a line saying “before you do this, take a moment to think about why joining the police is important to you and your community”.

There was no effect on white applicants. But the pass rate for black and minority ethnic applicants moved from 40 to 60 per cent.

“It entirely closes the gap,” Halpern says. “Absolutely amazing. We thought we had good grounds in the [scientific research] literature that such a prompt might make a difference, but the scale of the difference was extraordinary.”

Halpern taught social psychology at Cambridge but spent six years in the Blair government’s strategy unit. An early think piece on behavioural policy-making was leaked to the media and caused a small storm – Blair publicly disowned it and that was that. Halpern returned to academia, but was lured back after similar ideas started propagating through the Obama administration, and Cameron was persuaded to give it a go.

Ministers tend not to like it – once, one snapped, “I didn’t spend a decade in opposition to come into government to run a pilot”, but the technique is rife in the digital commercial world, where companies like Amazon or Google try 20 different versions of a web page.

Governments and public services should do it too, Halpern says. His favourite example is Britain’s organ donor register. They tested eight alternative online messages prompting people to join, including a simple request, different pictures, statistics or conscience-tweaking statements like “if you needed an organ transplant would you have one? If so please help others”.

It’s not obvious which messages work best, even to an expert. The only way to find out is to test them. They were surprised to find that the picture (of a group of people) actually put people off, Halpern says.
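
Message trials like this are just randomised experiments with a proportion as the outcome. A minimal sketch of the arithmetic (made-up numbers, using scipy; nothing here comes from the actual organ-donor trial):

```python
# Compare sign-up rates between two message variants and ask whether the
# difference is bigger than chance. All numbers are invented for illustration.
from scipy.stats import chi2_contingency

# rows: variant A (plain request) vs variant B (conscience-tweaking message)
# cols: signed up, did not sign up
table = [
    [1100, 98900],   # variant A: 1.10% sign-up rate
    [1350, 98650],   # variant B: 1.35% sign-up rate
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p_value:.3g}")
# At this sample size even a 0.25-point difference is decisive, which is
# why "the only way to find out is to test them" works so well for
# governments and for companies trying twenty versions of a page.
```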

In future they want to use demographic data to personalise nudges, Halpern says. On tax reminder notices, they had great success putting the phrase “most people pay their tax on time” at the top. But a stubborn top 5 per cent, with the biggest tax debts, saw this reminder and thought, “Well, I’m not most people”.

This whole approach raises ethical issues. Often you can’t tell people they’re being experimented on – it’s impractical, or ruins the experiment, or both.

“If we’re trying to find the best way of saying ‘don’t drop your litter’ with a sign saying ‘most people don’t drop litter’, are you supposed to have a sign before it saying ‘caution you are about to participate in a trial’?

“Where should we draw the line between effective communication and unacceptable ‘PsyOps’ or propaganda?”

Across the United States, police officers abuse confidential databases

“Police officers across the country misuse confidential law enforcement databases to get information on romantic partners, business associates, neighbors, journalists and others for reasons that have nothing to do with daily police work, an Associated Press investigation has found.
[…] In the most egregious cases, officers have used information to stalk or harass, or have tampered with or sold records they obtained.
[…] Unspecified discipline was imposed in more than 90 instances reviewed by AP. In many other cases, it wasn’t clear from the records if punishment was given at all. The number of violations was surely far higher, since the records provided were spotty at best and many cases go unnoticed.

Among those punished: an Ohio officer who pleaded guilty to stalking an ex-girlfriend and who looked up information on her; a Michigan officer who looked up home addresses of women he found attractive; and two Miami-Dade officers who ran checks on a journalist after he aired unflattering stories about the department.

“It’s personal. It’s your address. It’s all your information, it’s your Social Security number, it’s everything about you,” said Alexis Dekany, the Ohio woman whose ex-boyfriend, a former Akron officer, pleaded guilty last year to stalking her. “And when they use it for ill purposes to commit crimes against you — to stalk you, to follow you, to harass you … it just becomes so dangerous.”

The misuse represents only a tiny fraction of the millions of daily database queries run legitimately during traffic stops, criminal investigations and routine police encounters. But the worst violations profoundly abuse systems that supply vital information on criminal suspects and law-abiding citizens alike. The unauthorized searches demonstrate how even old-fashioned policing tools are ripe for abuse, at a time when privacy concerns about law enforcement have focused mostly on more modern electronic technologies.”

YouTube as a parody of itself?

It never ceases to amaze me just how stupid screen culture is.

But now it’s even parodying itself—in the way only the online spectacle can: by folding back into itself to keep us watching.

The problems and concerns, long since established, are all now just a big joke. Short attention spans. Superficial engagement with information. Advertising masquerading as content. The convergence of extremely powerful corporate empires that influence what we think, feel, and do, in a way never before possible. Distraction from the real world, while the real world burns.

The story of this first short is about the end of the world, and nobody even cares. Could that be any closer to home?

There’s also a short about an “Uber for people,” invoking themes of exploitation, surveillance, and addiction-like enslavement to technological solutions; it parodies the screen culture of today, especially the “apps fix all” mindset.

Can we see this as one thing in terms of another?

Likewise with “Enter the Hive Mind.”

What will you do when you’re asked to put your whole self into the global computer even more completely than now? What is your personal threshold? Will you continue to “breathe life” into the machine?

The Outrage Machine

This short video explores how the online world has become the popular outlet for public rage, briefly illustrating some of the many stories of everyday people who have suddenly become public enemy number one under the most misunderstood of circumstances and the most trivial of narratives. With the web acting like a giant echo chamber, amplifying false stories and feeding on the pent-up aggression of the audience watching the spectacle, The Outrage Machine shows how these systems froth mob mentality into a hideous mess. It’s a good example of where the spectacle goes, and of how its intensity has to keep ratcheting up to maintain the audience’s attention in a culture of dwindling attention spans, distraction, and triviality.

Filmmaker and author Jon Ronson recently wrote a book about this topic, So You’ve Been Publicly Shamed, which is quite good. His TED talk is essentially a 17-minute overview:

And a longer presentation with interview and Q&A from earlier this year:

Robot “escapes” lab in Russia, makes a “dash for freedom.”

For all the anthropomorphising, the elements of this story are way less interesting than the way the story is being reported…

“A robot escaped from a science lab and caused a traffic jam in one Russian city, it’s reported. Scientists at the Promobot laboratories in Perm had been teaching the machine how to move around independently, but it broke free after an engineer forgot to shut a gate, says the local edition of the Argumenty i Fakty newspaper. The robot found its way to a nearby street, covering a distance of about 50m (164ft), before its battery ran out, the daily says.”

QZ reports: “It’s happening: A robot escaped a lab in Russia and made a dash for freedom.

“With every passing day, it feels like the robot uprising is getting a little closer. Robots are being beaten down by their human overlords, even as we teach them to get stronger. Now, they’re starting to break free.”

Parents are worried the Amazon Echo is conditioning their kids to be rude

“I’ve found my kids pushing the virtual assistant further than they would push a human,” says Avi Greengart, a tech analyst and father of five who lives in Teaneck, New Jersey. “[Alexa] never says ‘That was rude’ or ‘I’m tired of you asking me the same question over and over again.'” Perhaps she should, he thinks. “One of the responsibilities of parents is to teach your kids social graces,” says Greengart, “and this is a box you speak to as if it were a person who does not require social graces.”

Alexa, tell me a knock-knock joke.
Alexa, how do you spell forest?
Alexa, what’s 17 times 42?

The syntax is generally simple and straightforward, but it doesn’t exactly reward niceties like “please.” Adding to this, extraneous words can often trip up the speaker’s artificial intelligence. When it comes to chatting with Alexa, it pays to be direct—curt even. “If it’s not natural language, one of the first things you cut away is the little courtesies,” says Dennis Mortensen, who founded a calendar-scheduling startup called x.ai.
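
A toy illustration of why the courtesies get cut (purely hypothetical; nothing like a real assistant's pipeline): a rigid command grammar matches the curt form and silently fails on the polite one.

```python
import re

# Toy command grammar: recognises only the exact curt phrasing.
SPELL = re.compile(r"^alexa, how do you spell (\w+)\?$")

def parse(utterance: str):
    """Return the word to spell, or None if the command isn't recognised."""
    m = SPELL.match(utterance.lower())
    return m.group(1) if m else None

print(parse("Alexa, how do you spell forest?"))          # -> forest
print(parse("Alexa, please, how do you spell forest?"))  # -> None
# The polite version falls outside the pattern, so users learn that
# dropping the niceties is what gets results.
```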

For parents trying to drill good manners into their children, listening to their kids boss Alexa around can be disconcerting.

It’s this combination that worries Hunter Walk, a tech investor in San Francisco. In a blog post, he described the Amazon Echo as “magical” while expressing fears it’s “turning our daughter into a raging asshole.”

Is Facebook eavesdropping on your phone conversations?

“My name is Siri. I really can’t wait until some other app controls your phone.”

Summary: a short article about how culture is transmitted, with an underpinning comment on how ubiquitous technology trumps real-life relationships, even in small ways, such as people’s names.

“I’ve become slow to respond to my name in public spaces for fear I’ll turn and smile at a stranger scowling into their phone. In protest, I’ve never used the feature and forbade my parents from using it on their iPhones.

“OMG, Siri like the iPhone,” should be engraved on my tombstone.

At worst, people air their grievances against Apple to me.”

“RoboCop” deployed to Silicon Valley shopping centre

“At the Stanford shopping center in Palo Alto, California, there is a new sheriff in town – and it’s an egg-shaped robot.

“Everyone likes to take robot selfies,” Stephens said. “People really like to interact with the robot.” He said there have even been two instances where the company found lipstick marks on the robot where people had kissed the graffiti-resistant dome.

The slightly comical Dalek design was intentional…”

Welcome to the age of the chatbot. Soon you’ll be lonelier than ever.

“Very soon – by the end of the year, probably – you won’t need to be on Facebook in order to talk to your friends on Facebook.

Your Facebook avatar will dutifully wish people happy birthday, congratulate them on the new job, accept invitations, and send them jolly texts punctuated by your favourite emojis – all while you’re asleep, or shopping, or undergoing major surgery.

Using IBM’s powerful Watson natural language processing platform, The Chat Bot Club learns to imitate its user. It learns texting styles, favourite phrases, preferred emojis, repeated opinions – and then it learns to respond in kind, across an ever-broadening range of subjects.”
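
The article doesn’t explain how the Watson-based bot models a user, so here is a deliberately crude stand-in: a bigram Markov chain that recycles a user’s own phrasings. It illustrates the “learns your favourite phrases and responds in kind” idea, nothing more.

```python
import random
from collections import defaultdict

# Crude style imitation via a bigram Markov chain -- a stand-in for the
# article's Watson-based bot, which is not described in enough detail to
# reproduce. The message history is invented.
history = [
    "happy birthday mate hope it's a great one",
    "congrats on the new job mate",
    "hope the new job is a great one",
]

chain = defaultdict(list)
for msg in history:
    words = msg.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)

def imitate(start: str, max_len: int = 8) -> str:
    """Generate text by walking the user's own word transitions."""
    out = [start]
    while len(out) < max_len and chain[out[-1]]:
        out.append(random.choice(chain[out[-1]]))
    return " ".join(out)

print(imitate("hope"))  # e.g. "hope it's a great one" -- recycled phrasing
```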

“Humans aren’t perfect, and AI is a bit the same way,” he said. “AI is not significantly smarter than the people who program it. So AI is always going to encounter circumstances that it was not prepared for.”

Silicon Valley tech firms exacerbating income inequality

“Google democratized information, Uber democratized car rides, and Twitter democratized publishing a single sentence. But to the World Bank, the powerful Washington-based organisation that lends money to developing countries, Silicon Valley’s technology firms appear to be exacerbating economic inequality rather than improving it.”

Snowden: ‘Governments Can Reduce Our Dignity To That Of Tagged Animals’

“NSA whistleblower Edward Snowden writes a report in The Guardian explaining why leaking information about wrongdoing is a vital act of resistance. “One of the challenges of being a whistleblower is living with the knowledge that people continue to sit, just as you did, at those desks, in that unit, throughout the agency; who see what you saw and comply in silence, without resistance or complaint,” Snowden writes. “They learn to live not just with untruths but with unnecessary untruths, dangerous untruths, corrosive untruths. It is a double tragedy: what begins as a survival strategy ends with the compromise of the human being it sought to preserve and the diminishing of the democracy meant to justify the sacrifice.” He goes on to explain the importance and significance of leaks, how not all leaks are alike, nor are their makers, and how our connected devices come into play in the post-9/11 period. Snowden writes, “By preying on the modern necessity to stay connected, governments can reduce our dignity to something like that of tagged animals, the primary difference being that we paid for the tags and they are in our pockets.””

Wikipedia Is Basically a Corporate Bureaucracy

This study, which details the “Evolution of Wikipedia’s Norm Network,” could speak analogously to the supposed “democratisation” that technology pundits constantly invoke when idealising the web, not just in regard to Wikipedia but in more general terms about the Screen Culture. Also, mix in a reading of George Orwell’s Animal Farm for good measure.

Emphasis added:

“Wikipedia is a voluntary organization dedicated to the noble goal of decentralized knowledge creation. But as the community has evolved over time, it has wandered further and further from its early egalitarian ideals, according to a new paper published in the journal Future Internet. In fact, such systems usually end up looking a lot like 20th-century bureaucracies. […] This may seem surprising, since there is no policing authority on Wikipedia — no established top-down means of control. The community is self-governing, relying primarily on social pressure to enforce the established core norms, according to co-author Simon DeDeo, a complexity scientist at Indiana University. […] “You start with a decentralized democratic system, but over time you get the emergence of a leadership class with privileged access to information and social networks,” DeDeo explained. “Their interests begin to diverge from the rest of the group. They no longer have the same needs and goals. So not only do they come to gain the most power within the system, but they may use it in ways that conflict with the needs of everybody else.””

How Big Data Creates False Confidence

“The general idea is to find datasets so enormous that they can reveal patterns invisible to conventional inquiry… But there’s a problem: It’s tempting to think that with such an incredible volume of data behind them, studies relying on big data couldn’t be wrong. But the bigness of the data can imbue the results with a false sense of certainty. Many of them are probably bogus — and the reasons why should give us pause about any research that blindly trusts big data.”

For example, Google’s database of scanned books represents 4% of all books ever published, but in this data set, “The Lord of the Rings gets no more influence than, say, Witchcraft Persecutions in Bavaria.” And the name Lanny appears to be one of the most common in early-20th century fiction — solely because Upton Sinclair published 11 different novels about a character named Lanny Budd.
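
The Lanny example is a counting artefact with a simple mitigation: count how many distinct books or authors use a word, rather than how often it appears overall. A quick hypothetical sketch:

```python
from collections import Counter

# Hypothetical mini-corpus of (author, text) pairs. One prolific author,
# echoing Upton Sinclair's eleven Lanny Budd novels, dominates raw counts.
corpus = [
    ("sinclair", "lanny went to paris and lanny spoke"),
    ("sinclair", "lanny returned and lanny listened"),
    ("sinclair", "lanny and lanny again"),
    ("author_b", "john walked and mary spoke"),
    ("author_c", "john and mary met"),
]

raw_counts = Counter()
authors_using = {}
for author, text in corpus:
    for word in text.split():
        raw_counts[word] += 1
        authors_using.setdefault(word, set()).add(author)

# Raw frequency crowns "lanny"; counting distinct authors does not.
print(raw_counts["lanny"], raw_counts["john"])                  # 6 2
print(len(authors_using["lanny"]), len(authors_using["john"]))  # 1 2
```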

The problem seems to be skewed data and misinterpretation. (The article points to the failure of Google Flu Trends, which it turns out “was largely predicting winter”.) The article’s conclusion? “Rather than succumb to ‘big data hubris,’ the rest of us would do well to keep our sceptic hats on — even when someone points to billions of words.”

How the Internet changed the way we read

“UC Literature Professor Jackson Bliss puts into words something many of you have probably experienced: the evolution of the internet and mobile devices has changed how we read. “The truth is that most of us read continuously in a perpetual stream of incestuous words, but instead of reading novels, book reviews, or newspapers like we used to in the ancien régime, we now read text messages, social media, and bite-sized entries about our protean cultural history on Wikipedia.”

Bliss continues, “In the great epistemic galaxy of words, we have become both reading junkies and also professional text skimmers. … Reading has become a relentless exercise in self-validation, which is why we get impatient when writers don’t come out and simply tell us what they’re arguing. … Content—whether thought-provoking, regurgitated, or analytically superficial, impeccably-researched, politically doctrinaire, or grammatically atrocious—now occupies the same cultural space, the same screen space, and the same mental space in the public imagination. After a while, we just stop keeping track of what’s legitimately good because it takes too much energy to separate the crème from the foam.”

How mass surveillance silences minority opinions

“A new study shows that knowledge of government surveillance causes people to self-censor their dissenting opinions online. The research offers a sobering look at the oft-touted “democratizing” effect of social media and Internet access that bolsters minority opinion.

The study, published in Journalism and Mass Communication Quarterly, examined the effects of subtle reminders of mass surveillance on its subjects. The majority of participants reacted by suppressing opinions that they perceived to be in the minority. This research illustrates the silencing effect on dissenting opinions in the wake of widespread knowledge of government surveillance, as revealed by whistleblower Edward Snowden in 2013.

The “spiral of silence” is a well-researched phenomenon in which people suppress unpopular opinions to fit in and avoid social isolation. It has been looked at in the context of social media and the echo-chamber effect, in which we tailor our opinions to fit the online activity of our Facebook and Twitter friends. But this study adds a new layer by explicitly examining how government surveillance affects self-censorship.”
