Resources

Survey: 25% Think AI Could Make Better Policy Decisions Than Politicians

A new survey of Europeans’ attitudes towards technology found that a quarter of people would prefer policy decisions to be made by artificial intelligence instead of politicians. The Center for the Governance of Change at Spain’s IE University polled 2,500 adults in the UK, Spain, Germany, France, Ireland, Italy, and the Netherlands in January. The results reflect an intense anxiety about the changes brought about by advances in tech, with more than half of respondents worried that jobs would be replaced by robots, and 70% saying that unchecked technological innovation could do more harm than good to society. Respondents also expressed concern about digital relationships replacing human contact as more people spend time online. Perhaps most strikingly, a quarter of respondents said they would rather have AI than politicians guiding decisions about how their country is governed.

Around the world, citizens have expressed growing disillusionment with democracy and increasing skepticism that their voice has any impact on political decisions. But algorithmic decision-making is not a problem-free alternative: algorithms can embed the prejudices and biases of their programmers, or be manipulated to achieve specific outcomes, making their results potentially as problematic as decisions made by humans. The study also found that respondents expected governments to reduce the disruption technology might cause in their lives through regulation, limits on automation, and support for people affected by job losses. This “highlights the paradox in which we live,” the authors wrote. “People are disillusioned with governments, yet at the same time ask them to tackle the societal and economic negative effects that emerging technologies might have.”

Algorithms viewed as ‘unfair’ by consumers

The US-based Pew Research Center has found the American public is growing increasingly distrustful of the use of computer algorithms in a variety of sectors, including finance, media and the justice system.

The report, released over the weekend, found that a broad section of those surveyed feel that computer programs will always reflect some level of human bias, that they may violate privacy, fail to capture the nuance of human complexity, or simply be unfair.

Amazon scraps secret AI recruiting tool that showed bias against women

An example of how “learning” machines inseparably absorb the culture of their architects, à la Lewis Mumford:

“Amazon’s machine-learning specialists uncovered a big problem: their new recruiting engine did not like women. The team had been building computer programs since 2014 to review job applicants’ resumes with the aim of mechanizing the search for top talent, five people familiar with the effort told Reuters. Automation has been key to Amazon’s e-commerce dominance, be it inside warehouses or driving pricing decisions. The company’s experimental hiring tool used artificial intelligence to give job candidates scores ranging from one to five stars — much like shoppers rate products on Amazon, some of the people said. “Everyone wanted this holy grail,” one of the people said. “They literally wanted it to be an engine where I’m going to give you 100 resumes, it will spit out the top five, and we’ll hire those.” But by 2015, the company realized its new system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way. That is because Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.

[…]

Amazon edited the programs to make them neutral to these particular terms. But that was no guarantee that the machines would not devise other ways of sorting candidates that could prove discriminatory, the people said. The Seattle company ultimately disbanded the team by the start of last year because executives lost hope for the project, according to the people, who spoke on condition of anonymity.”
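
The failure mode Reuters describes, a model trained on a decade of mostly male hires that learns to penalise proxies for gender, is easy to reproduce in miniature. The sketch below is a hypothetical illustration rather than Amazon's system: a toy resume classifier fit on skewed historical outcomes, whose learned weights show gendered tokens absorbing signal they should not carry. It assumes scikit-learn is available, and every resume and label is invented.

```python
# Hypothetical toy example (not Amazon's system): a classifier fit on
# historically skewed hiring outcomes learns to weight words that merely
# correlate with gender, not with ability. All resumes and labels are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain of men's rugby team, java developer",
    "men's chess club, java developer",
    "hiking club, python developer",
    "women's soccer team captain, python developer",
    "women's coding society organiser, java developer",
]
hired = [1, 1, 1, 0, 0]   # outcomes reflect a male-dominated applicant pool

vectoriser = CountVectorizer()
X = vectoriser.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weights: a token like "women" picks up negative signal
# purely because it correlates with past rejections in the training data.
for token, weight in sorted(zip(vectoriser.get_feature_names_out(),
                                model.coef_[0]), key=lambda tw: tw[1]):
    print(f"{token:>12s}  {weight:+.3f}")
```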

An AI-Powered App Has Resulted in an Explosion of Convincing Face-Swap Porn

In December, Motherboard discovered a Redditor named ‘deepfakes’ quietly enjoying his hobby: swapping celebrities’ faces onto porn performers’ bodies. He made several convincing porn videos of celebrities — including Gal Gadot, Maisie Williams, and Taylor Swift — using a machine learning algorithm, his home computer, publicly available videos, and some spare time. Since we first wrote about deepfakes, the practice of producing AI-assisted fake porn has exploded. More people are creating fake celebrity porn using machine learning, and the results have become increasingly convincing. A redditor even created an app specifically designed to let users without a computer science background create AI-assisted fake porn. All the tools one needs to make these videos are free, readily available, and accompanied by instructions that walk novices through the process.

An incredibly easy-to-use application for DIY fake videos—of sex and revenge porn, but also political speeches and whatever else you want—that moves and improves at this pace could have society-changing impacts in the ways we consume media. The combination of powerful, open-source neural network research, our rapidly eroding ability to discern truth from fake news, and the way we spread news through social media has set us up for serious consequences.
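
The technique behind these swaps is commonly described as a pair of autoencoders that share a single encoder but keep a separate decoder for each face: both are trained to reconstruct their own identity, and the swap comes from routing one person's frames through the other person's decoder. The sketch below shows that shared-encoder, two-decoder structure in PyTorch; it is a minimal illustration under assumed 64x64 inputs, not the code of the Reddit app or the original deepfakes scripts.

```python
# Minimal sketch of the shared-encoder / per-identity-decoder autoencoder idea
# commonly credited with the original "deepfakes" face swaps. Sizes and layers
# are illustrative assumptions, not the app's actual code.
import torch
import torch.nn as nn

class Encoder(nn.Module):              # one encoder shared by both identities
    def __init__(self, latent=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),    # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),   # 32x32 -> 16x16
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(),  # 16x16 -> 8x8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):              # one decoder per identity (A and B)
    def __init__(self, latent=256):
        super().__init__()
        self.fc = nn.Linear(latent, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),   # 8x8 -> 16x16
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),    # 16x16 -> 32x32
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Sigmoid(),  # 32x32 -> 64x64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder, decoder_a, decoder_b = Encoder(), Decoder(), Decoder()

# Training reconstructs each identity through its own decoder; the swap comes
# from encoding a frame of person A and decoding it with person B's decoder.
frame_of_a = torch.rand(1, 3, 64, 64)          # stand-in for a real video frame
swapped = decoder_b(encoder(frame_of_a))       # A's pose and lighting, B's face
```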

The Video Game That Could Shape the Future of War

“As far as video games go, Operation Overmatch is rather unremarkable. Players command military vehicles in eight-on-eight matches against the backdrop of rendered cityscapes — a common setup of games that sometimes have the added advantage of hundreds of millions of dollars in development budgets. Overmatch does have something unique, though: its mission. The game’s developers believe it will change how the U.S. Army fights wars. Overmatch’s players are nearly all soldiers in real life. As they develop tactics around futuristic weapons and use them in digital battle against peers, the game monitors their actions.

Each shot fired and decision made, in addition to messages the players write in private forums, is a bit of information soaked up with a frequency not found in actual combat, or even in high-powered simulations without a wide network of players. The data is logged, sorted, and then analyzed, using insights from sports and commercial video games. Overmatch’s team hopes this data will inform the Army’s decisions about which technologies to purchase and how to develop tactics using them, all with the aim of building a more forward-thinking, prepared force… While the game currently has about 1,000 players recruited by word of mouth and outreach from the Overmatch team, the developers eventually want to involve tens of thousands of soldiers. This milestone would allow for millions of hours of game play per year, according to project estimates, enough to generate rigorous data sets and test hypotheses.”
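
As a rough illustration of the kind of per-action telemetry the article describes being logged, sorted, and analysed, here is a hypothetical sketch of an event record and one simple aggregation. The field names, event types, and the hit-rate metric are assumptions made for the example; nothing here reflects the project's actual data model.

```python
# Hypothetical sketch of per-event telemetry a game like Overmatch might log
# and aggregate. Field names, event types, and the hit-rate metric are
# assumptions for illustration, not the project's actual data model.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class MatchEvent:
    match_id: str
    player_id: str
    vehicle: str        # e.g. an experimental vehicle concept under test
    action: str         # "shot_fired", "order_issued", "smoke_deployed", ...
    outcome: str        # "hit", "miss", or "n/a"
    t_seconds: float    # time since match start

def hit_rate_by_vehicle(events: list[MatchEvent]) -> dict[str, float]:
    """One crude signal analysts could compare across candidate technologies:
    the fraction of shots that land, per vehicle type."""
    shots, hits = defaultdict(int), defaultdict(int)
    for e in events:
        if e.action == "shot_fired":
            shots[e.vehicle] += 1
            hits[e.vehicle] += e.outcome == "hit"
    return {vehicle: hits[vehicle] / shots[vehicle] for vehicle in shots}

sample = [
    MatchEvent("m1", "p1", "light_recon", "shot_fired", "hit", 12.5),
    MatchEvent("m1", "p1", "light_recon", "shot_fired", "miss", 30.0),
    MatchEvent("m1", "p2", "heavy_ifv", "shot_fired", "hit", 41.2),
]
print(hit_rate_by_vehicle(sample))   # {'light_recon': 0.5, 'heavy_ifv': 1.0}
```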

Brian Vogt, a lieutenant colonel in the Army Capabilities Integration Center who oversees Overmatch’s development, says:

“Right after World War I, we had technologies like aircraft carriers we knew were going to play an important role,” he said. “We just didn’t know how to use them. That’s where we are and what we’re trying to do for robots.”

Ex-Google engineer establishes new religion with ambition to develop an AI god

“One of the engineers behind Google’s self-driving car has established a nonprofit religious corporation with one main aim – to create a deity with artificial intelligence. According to newly uncovered documents filed to the state of California in September 2015, Anthony Levandowski serves as the CEO and president of religious organisation Way of the Future.”

Way of the Future’s startling mission: “To develop and promote the realization of a Godhead based on artificial intelligence and through understanding and worship of the Godhead contribute to the betterment of society.”

Facebook built an AI system that learned to lie to get what it wants

“Facebook researchers used a game to help the bot learn how to haggle over books, hats, and basketballs. Each object had a point value, and the objects needed to be split between the two bot negotiators via text. From the human conversations (gathered via Amazon Mechanical Turk) and from testing its skills against itself, the AI system learned not only how to state its demands but also negotiation tactics — specifically, lying. Instead of outright saying what it wanted, sometimes the AI would feign interest in a worthless object, only to later concede it for something it really wanted. Facebook isn’t sure whether it learned from the human hagglers or whether it stumbled upon the trick accidentally, but either way, when the tactic worked, it was rewarded.

It’s no surprise that Facebook is working on ways to improve how its bot can interact with others—the company is highly invested in building bots that can negotiate on behalf of users and businesses for its Messenger platform, where it envisions the future of customer service.”
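
The setup described above, private point values over a shared pool of items with reward tied to the points each agent keeps, is a standard multi-issue bargaining game. The sketch below is a hypothetical illustration of that scoring logic and of why feigning interest in a worthless item can pay off once a reward signal reinforces it; the item names and point values are invented, and this is not Facebook's code.

```python
# Hypothetical illustration of the bargaining game described above: private
# point values over a shared pool, reward = points an agent keeps. Item names
# and values are invented; this is not Facebook's code.
POOL = {"books": 3, "hats": 2, "balls": 1}   # items available to divide

def score(allocation: dict[str, int], values: dict[str, int]) -> int:
    """An agent's reward: its private value for every item it ends up keeping."""
    assert all(allocation[item] <= POOL[item] for item in allocation)
    return sum(values[item] * count for item, count in allocation.items())

# The bot privately values books highly and the balls not at all.
bot_values = {"books": 6, "hats": 2, "balls": 0}

# Honest opening: demand only the books; the partner may refuse to hand over
# all of them, so the bot settles for two.
honest_deal = {"books": 2, "hats": 0, "balls": 0}

# Feigned interest (the tactic in the article): also claim the worthless
# balls, then "concede" them in exchange for the remaining book.
feigned_deal = {"books": 3, "hats": 0, "balls": 0}

print(score(honest_deal, bot_values))    # 12
print(score(feigned_deal, bot_values))   # 18, the higher reward reinforces the bluff
```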