Resources

Are We Entering an AI Price-Fixing Dystopia?

“Algorithmic price-fixing appears to be spreading to more and more industries,” warns The Atlantic. “And existing laws may not be equipped to stop it.”

They start with RealPage’s rental-property software (pointing out that “a series of lawsuits says it’s something else: an AI-enabled price-fixing conspiracy” and “The lawsuits also argue that RealPage pressures landlords to comply with its pricing suggestions.”) But the most important point is that RealPage isn’t the only company doing this:
Its main competitor, Yardi, is involved in a similar lawsuit. One of RealPage’s subsidiaries, a service called Rainmaker, faces multiple legal challenges for allegedly facilitating price-fixing in the hotel industry. (Yardi and Rainmaker deny wrongdoing.) Similar complaints have been brought against companies in industries as varied as health insurance, tire manufacturing, and meat processing. But winning these cases is proving difficult.

The article notes that “Agreeing to fix prices is punishable with up to 10 years in prison and a $100 million fine.” But it also notes concerns that algorithms could produce price-fixing-like behavior that’s “almost impossible to prosecute under existing antitrust laws. Price-fixing, in other words, has entered the algorithmic age, but the laws designed to prevent it have not kept up.”
Last week, San Francisco passed a first-of-its-kind ordinance banning “both the sale and use of software which combines non-public competitor data to set, recommend or advise on rents and occupancy levels.”

Whether other jurisdictions follow suit remains to be seen.

In the meantime, more and more companies are figuring out ways to use algorithms to set prices. If these really do enable de facto price-fixing, and manage to escape legal scrutiny, the result could be a kind of pricing dystopia in which competition to create better products and lower prices would be replaced by coordination to keep prices high and profits flowing. That would mean permanently higher costs for consumers — like an inflation nightmare that never ends.

A Chess Formula Is Taking Over the World

In October 2003, Mark Zuckerberg created his first viral site: not Facebook, but FaceMash. Then a college freshman, he hacked into Harvard’s online dorm directories, gathered a massive collection of students’ headshots, and used them to create a website on which Harvard students could rate classmates by their attractiveness, literally and figuratively head-to-head. The site, a mean-spirited prank recounted in the opening scene of The Social Network, got so much traction so quickly that Harvard shut down his internet access within hours. The math that powered FaceMash — and, by extension, set Zuckerberg on the path to building the world’s dominant social-media empire — was reportedly, of all things, a formula for ranking chess players: the Elo system.

Fundamentally, what an Elo rating does is predict the outcome of chess matches by assigning every player a number that fluctuates based purely on performance. If you beat a slightly higher-ranked player, your rating goes up a little, but if you beat a much higher-ranked player, your rating goes up a lot (and theirs, conversely, goes down a lot). The higher the rating, the more matches you should win. That is what Elo was designed for, at least. FaceMash and Zuckerberg aside, people have deployed Elo ratings for many sports — soccer, football, basketball — and for domains as varied as dating, finance, and primatology. If something can be turned into a competition, it has probably been Elo-ed. Somehow, a simple chess algorithm has become an all-purpose tool for rating everything. In other words, when it comes to the preferred way to rate things, Elo ratings have the highest Elo rating. […]
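The update rule behind all of this fits in a few lines. Here is a minimal sketch of the standard Elo formula, using the conventional 400-point scale and a K-factor of 32 (real implementations vary both):

```python
def expected_score(rating_a, rating_b):
    """Probability that player A beats player B under the Elo model."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def update(rating_a, rating_b, score_a, k=32):
    """Return both players' new ratings after one game.

    score_a is 1.0 for an A win, 0.5 for a draw, 0.0 for a loss.
    The change is zero-sum: whatever A gains, B loses.
    """
    e_a = expected_score(rating_a, rating_b)
    new_a = rating_a + k * (score_a - e_a)
    new_b = rating_b + k * ((1 - score_a) - (1 - e_a))
    return new_a, new_b
```

Beating an equal opponent moves each rating by K/2 = 16 points, while beating someone rated 400 points higher moves them by about 29, which is the “goes up a lot” behavior described above.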

Elo ratings don’t inherently have anything to do with chess. They’re based on a simple mathematical formula that works just as well for any one-on-one, zero-sum competition — which is to say, pretty much all sports. In 1997, a statistician named Bob Runyan adapted the formula to rank national soccer teams — a project so successful that FIFA eventually adopted an Elo system for its official rankings. Not long after, the statistician Jeff Sagarin applied Elo to rank NFL teams outside their official league standings. Things really took off when the new ESPN-owned version of Nate Silver’s 538 launched in 2014 and began making Elo ratings for many different sports. Some sports proved trickier than others. NBA basketball in particular exposed some of the system’s shortcomings, Neil Paine, a stats-focused sportswriter who used to work at 538, told me. It consistently underrated heavyweight teams, for example, in large part because it struggled to account for the meaninglessness of much of the regular season and the fact that either team might not be trying all that hard to win a given game. The system assumed uniform motivation across every team and every game. Pretty much anything, it turns out, can be framed as a one-on-one, zero-sum game.
Arpad Emmerich Elo, creator of the Elo rating system, understood the limitations of his invention. “It is a measuring tool, not a device of reward or punishment,” he once remarked. “It is a means to compare performances, assess relative strength, not a carrot waved before a rabbit, or a piece of candy given to a child for good behavior.”

Rent Going Up? One Company’s Algorithm Could Be Why

On a summer day last year, a group of real estate tech executives gathered at a conference hall in Nashville to boast about one of their company’s signature products: software that uses a mysterious algorithm to help landlords push the highest possible rents on tenants. “Never before have we seen these numbers,” said Jay Parsons, a vice president of RealPage, as conventiongoers wandered by. Apartment rents had recently shot up by as much as 14.5%, he said in a video touting the company’s services. Turning to his colleague, Parsons asked: What role had the software played?

“I think it’s driving it, quite honestly,” answered Andrew Bowen, another RealPage executive. “As a property manager, very few of us would be willing to actually raise rents double digits within a single month by doing it manually.”

The celebratory remarks were more than swagger. For years, RealPage has sold software that uses data analytics to suggest daily prices for open units. Property managers across the United States have gushed about how the company’s algorithm boosts profits. “The beauty of YieldStar is that it pushes you to go places that you wouldn’t have gone if you weren’t using it,” said Kortney Balas, director of revenue management at JVM Realty, referring to RealPage’s software in a testimonial video on the company’s website. The nation’s largest property management firm, Greystar, found that even during a downturn, its buildings using YieldStar “outperformed their markets by 4.8%,” a significant premium above competitors, RealPage said in materials on its website. Greystar uses RealPage’s software to price tens of thousands of apartments.

RealPage became the nation’s dominant provider of such rent-setting software after federal regulators approved a controversial merger in 2017, a ProPublica investigation found, greatly expanding the company’s influence over apartment prices. The move helped the Texas-based company push the client base for its array of real estate tech services past 31,700 customers.

The impact is stark in some markets. In one neighborhood in Seattle, ProPublica found, 70% of apartments were overseen by just 10 property managers, every single one of which used pricing software sold by RealPage…. But by RealPage’s own admission, its algorithm is helping drive rents higher. “Find out how YieldStar can help you outperform the market 3% to 7%,” RealPage urges potential clients on its website.

Company That Makes Rent-Setting Software For Apartments Accused of Collusion, Lawsuit Says

Renters filed a lawsuit (PDF) this week alleging that a company that makes price-setting software for apartments and nine of the nation’s biggest property managers formed a cartel to artificially inflate rents in violation of federal law. The lawsuit was filed days after ProPublica published an investigation raising concerns that the software, sold by Texas-based RealPage, is potentially pushing rent prices above competitive levels, facilitating price fixing or both. […] RealPage’s software uses an algorithm to churn through a trove of data each night to suggest daily prices for available rental units. The software uses not only information about the apartment being priced and the property where it is located, but also private data on what nearby competitors are charging in rents. The software considers actual rents paid to those rivals — not just what they are advertising, the company told ProPublica.

ProPublica’s investigation found that the software’s design and reach have raised questions among experts about whether it is helping the country’s biggest landlords indirectly coordinate pricing — potentially in violation of federal law. In one neighborhood in downtown Seattle, ProPublica found, 70% of more than 9,000 apartments were controlled by just 10 property managers, who all used RealPage pricing software in at least some of their buildings. RealPage told ProPublica that the company “uses aggregated market data from a variety of sources in a legally compliant manner.” The company also said that landlords who use employees to manually set prices “typically” conduct phone surveys to check competitors’ rents, which the company says could result in anti-competitive behavior. “RealPage’s revenue management solutions prioritize a property’s own internal supply/demand dynamics over external factors such as competitors’ rents,” a company statement said, “and therefore help eliminate the risk of collusion that could occur with manual pricing.”

The lawsuit said that RealPage’s software helps stagger lease renewals to artificially smooth out natural imbalances in supply and demand, which discourages landlords from undercutting pricing achieved by the cartel. Property managers “thus held vacant rental units unoccupied for periods of time (rejecting the historical adage to keep the ‘heads in the beds’) to ensure that, collectively, there is not one period in which the market faces an oversupply of residential real estate properties for lease, keeping prices higher,” it said. Such staggering helped the group avoid “a race to the bottom” on rents, the lawsuit said. RealPage brags that clients — who agree to provide RealPage real-time access to sensitive and nonpublic data — experience “rental rate improvements, year over year, between 5% and 12% in every market,” the lawsuit said. RealPage encourages property companies to have daily calls with a RealPage pricing adviser and discourages deviating from the rent price suggested by the software, the lawsuit said.

YouTube’s Recommender AI Still a Horror Show, Finds Major Crowdsourced Study

For years YouTube’s video-recommending algorithm has stood accused of fuelling a grab bag of societal ills by feeding users an AI-amplified diet of hate speech, political extremism and/or conspiracy junk/disinformation for the profiteering motive of trying to keep billions of eyeballs stuck to its ad inventory. And while YouTube’s tech giant parent Google has, sporadically, responded to negative publicity flaring up around the algorithm’s antisocial recommendations — announcing a few policy tweaks or limiting/purging the odd hateful account — it’s not clear how far the platform’s penchant for promoting horribly unhealthy clickbait has actually been rebooted. The suspicion remains it’s nowhere near far enough.

New research published today by Mozilla backs that notion up, suggesting YouTube’s AI continues to puff up piles of “bottom-feeding”/low-grade/divisive/disinforming content — stuff that tries to grab eyeballs by triggering people’s sense of outrage, sowing division/polarization or spreading baseless/harmful disinformation — which in turn implies that YouTube’s problem with recommending terrible stuff is indeed systemic; a side effect of the platform’s rapacious appetite to harvest views to serve ads. That YouTube’s AI is still — per Mozilla’s study — behaving so badly also suggests Google has been pretty successful at fuzzing criticism with superficial claims of reform. The mainstay of its deflective success here is likely the primary protection mechanism of keeping the recommender engine’s algorithmic workings (and associated data) hidden from public view and external oversight — via the convenient shield of “commercial secrecy.” But regulation that could help crack open proprietary AI black boxes is now on the cards — at least in Europe.

Fired by Bot at Amazon: ‘It’s You Against the Machine’

Contract drivers say algorithms terminate them by email—even when they have done nothing wrong.

Stephen Normandin spent almost four years racing around Phoenix delivering packages as a contract driver for Amazon.com Inc. Then one day, he received an automated email. The algorithms tracking him had decided he wasn’t doing his job properly.

The 63-year-old Army veteran was stunned. He’d been fired by a machine.

Normandin says Amazon punished him for things beyond his control that prevented him from completing his deliveries, such as locked apartment complexes. Amazon assigned him some pre-dawn deliveries at apartment complexes when their gates were still locked, a common complaint among Flex drivers. The algorithm instructs drivers in such instances to deliver packages to the main office, but that wasn’t open either. Normandin called the customer as instructed—a long shot because most people don’t answer calls from unfamiliar numbers, especially early morning. He called driver support, which couldn’t get through to the customer either. Meanwhile, the clock was ticking, and the algorithm was taking note.

When Ryan Cope was deactivated in 2019, he didn’t bother arguing or consider paying for arbitration. By then, Cope had already decided there was no way he could meet the algorithms’ demands. Driving miles along winding dirt roads outside Denver in the snow, he often shook his head in disbelief that Amazon expected the customer to get the package within two hours.

When drivers do challenge poor ratings, they can’t tell if they’re communicating with real people. Responses often include just a first name or no name at all, and the replies typically apply to a variety of situations rather than a specific problem. Even if a name is attached, a machine most likely generated the first few email responses, according to people familiar with the matter.

When human managers get involved, they typically conduct a hasty review—if they do one at all—because they must meet their own performance standards. A former employee at a driver support call center said dozens of part-time seasonal workers with little training were assigned to oversee issues for millions of drivers.

Gig Workers for Target’s Delivery App Hate Their Algorithmically-Determined Pay

In 2017 Target bought a same-day home-delivery company called Shipt for $550 million. Shipt now services half of Target’s stores, reports Motherboard, and employs more than 100,000 gig workers.

Unfortunately, they’re working for a company that “has a track record of censoring and retaliating against workers for asking basic questions about their working conditions or expressing dissent,” reports Motherboard. For example, an hour after tweeting about how there was now much more competition for assignments, one Seattle gig worker found their account suddenly “deactivated” — the equivalent of being fired — and also received an email saying they were no longer “eligible to reapply”.

“They stamp out resistance by flooding the market with new workers…” complained one Shipt worker, “and they’re actively monitoring all the social media groups.”
On its official national Facebook group, known as the Shipt Shopper Lounge, which has more than 100,000 members, Shipt moderators selected by the company frequently censor and remove posts, turn off comments sections, and ban workers who speak out about their working conditions, according to screenshots, interviews, and other documentation provided to Motherboard. The same is true on local Facebook groups, which Shipt also monitors closely, according to workers. Motherboard spoke to seven current Shipt workers, each of whom described a culture of retaliation, fear, and censorship online…

Because Shipt classifies its workers as contractors, not employees, workers pay for all of their expenses — including gas, wear and tear on their cars, and accidents — out of pocket. They say the tips on large orders from Target, sometimes with hundreds of items, can be meager. Workers say Shipt customers often live in gated and upscale communities and that the app encourages workers to tack on gifts like thank you cards, hot cocoa, flowers, and balloons onto orders (paid for out of their own pocket) and to offer to walk customers’ dogs and take out their trash, as a courtesy. Shipt calls this kind of service “Bringing the Magic,” which can improve workers’ ratings from customers that factor into the algorithm that determines who gets offered the most lucrative orders…

Unfortunately, that new algorithm (which began rolling out last year) is opaque to the workers affected by it — though Gizmodo reported pay appears to be at least 28% lower. And Motherboard heard even higher estimates:
“Our best estimate is that payouts are now 30 percent less, and up to 50 percent on orders,” one Shipt worker in Kalamazoo with two years under her belt, who wished to remain anonymous for fear of retaliation, told Motherboard. “I fluctuate between extreme anger and despair. It’s been three weeks since this has been implemented, and one of my good friends told me that she’s down the equivalent of a car payment.”

Another Shipt worker in Palm Springs, California provided Motherboard with receipts for a 181-item order that included six Snapple cases, five La Croix cases, and 12 packs of soda. They had to wheel three shopping carts out of a Ralph’s grocery store and deliver them — and earned $12.68 for the job. The customer did not tip. (Under the older, more transparent pay model, they would have earned $44.19.) “That’s a real slap in the face,” they told Motherboard.

Doctors Are Turning To YouTube To Learn How To Do Surgical Procedures

Some doctors say that medical students and residents are turning to YouTube to fill in gaps in their training. The video-sharing platform hosts tens of thousands of surgery-related videos, and the number keeps climbing every year.

People have livestreamed giving birth and broadcast their face-lifts. One video, which shows the removal of a dense, white cataract, has gone somewhat viral and now has more than 1.7 million views. Others seem to have found crossover appeal with nonmedical viewers, such as a video from the U.K.-based group Audiology Associates showing a weirdly satisfying removal of a giant glob of earwax. Doctors are uploading these videos to market themselves or to help others in the field, and the amount is growing by leaps and bounds. Researchers in January found more than 20,000 videos related to prostate surgery alone, compared with just 500 videos in 2009.

The videos are a particular boon for doctors in training. When the University of Iowa surveyed its surgeons, including its fourth-year medical students and residents, it found that YouTube was the most-used video source for surgical preparation by far. But residents and medical students are not the only ones tuning in. Experienced doctors are watching, too: Stanford Hospital vascular surgeon Dr. Oliver Aalami said he turned to YouTube recently ahead of a particularly difficult exposure. There’s one problem with this practice that will be familiar to anybody who’s searched YouTube for tips on more mundane tasks like household repairs: How can doctors tell which videos are valid and which contain bogus information?

“[O]ne recent study found more than 68,000 videos associated with a common procedure known as a distal radius fracture immobilization,” the report adds. “The researchers evaluated the content for their technical skill demonstrated and educational skill, and created a score. Only 16 of the videos even met basic criteria, including whether they were performed by a health-care professional or institution. Among those, the scores were mixed. In several cases, the credentials of the person performing the procedure could not be identified at all.”

Other studies are finding that YouTube’s algorithm gives high rankings to videos in which the technique isn’t optimal.

Mozilla is Sharing YouTube Horror Stories To Prod Google For More Transparency

Mozilla is publishing anecdotes of YouTube viewing gone awry — anonymous stories from people who say they innocently searched for one thing but eventually ended up in a dark rabbit hole of videos. It’s a campaign aimed at pressuring Google’s massive video site to make itself more accessible to independent researchers trying to study its algorithms. “The big problem is we have no idea what is happening on YouTube,” said Guillaume Chaslot, who is a fellow at Mozilla, a nonprofit best known for its unit that makes and operates the Firefox web browser.

Chaslot is an ex-Google engineer who has investigated YouTube’s recommendations from the outside after he left the company in 2013. (YouTube says he was fired for performance issues.) “We can see that there are problems, but we have no idea if the problem is from people being people or from algorithms,” he said….

Mozilla is publishing 28 stories it’s terming #YouTubeRegrets; they include, for example, an anecdote from someone who said a search for German folk songs ended up returning neo-Nazi clips, and a testimonial from a mother who said her 10-year-old daughter searched for tap-dancing videos and ended up watching extreme contortionist clips that affected her body image.

Applying For Your Next Job May Be an Automated Nightmare

If you think looking for a job is already daunting, anxiety-riddled, and unpleasant, just wait until the algorithms take over the hiring process. When they do, a newfangled ‘digital recruiter’ like VCV, which just received $1.7 million in early investment, hopes it will look something like this: First, a search bot will be used to scan CVs by the thousands, yours presumably among them. If it’s picked out of the haystack, you will be contacted by a chatbot. Over SMS, the bot will set an appointment for a phone interview, which will be conducted by an automated system enabled by voice recognition AI. Next, the system will ask you, the applicant, to record video responses to a set of predetermined interview questions. Finally, the program can use facial recognition and predictive analytics to complete the screening, algorithmically determining whether the nervousness, mood, and behavior patterns you exhibit make you a fit for the company. If you pass all that, then you will be recommended for an in-person job interview.

[…] VCV, which did not respond to a request for comment, is far from alone here. A growing suite of startups is pitching AI-driven recruitment services, promising to save corporations millions of dollars throughout the hiring process by reducing overhead, to pluck more ideal candidates out of obscurity, and to reduce bias in the hiring process. Most offer little to no evidence of how they actually do so. VCV’s much-larger competitor, HireVue, which has raked in a staggering $93 million in funding and is backed by top-tier Silicon Valley venture capital firms like Sequoia, is hawking many of the same services. It counts 700 companies as its clients, including, it says, Urban Outfitters, Intel, Honeywell, and Unilever. AllyO, which was founded in 2015 and “utilizes deep workflow conversational AI to fully automate end to end recruiting workflow,” has $19 million in backing.

Facebook Ad Platform Could Be Inherently Discriminatory, Researchers Say

Researchers from Northeastern University, the University of Southern California, and tech accountability non-profit Upturn have released a paper that says Facebook’s ad delivery system itself can steer ads intended to be inclusive toward discrimination without explicit intent. “In a paper titled, ‘Discrimination through optimization: How Facebook’s ad delivery can lead to skewed outcomes,’ co-authors Muhammad Ali, Piotr Sapiezynski, Miranda Bogen, Aleksandra Korolova, Alan Mislove, and Aaron Rieke find that advertiser budgets and ad content affect ad delivery, skewing it along gender and racial lines even when neutral ad targeting settings are used,” reports The Register. From the report:
The researchers found that Facebook ads tend to be shown to men because women tend to click on ads more often, making them more expensive to reach through Facebook’s system. That divide becomes apparent when ad budgets are compared, because the ad budget affects ad distribution. As the paper explains, “the higher the daily budget, the smaller the fraction of men in the audience.” Such segregation may be appropriate and desirable for certain types of marketing pitches, but when applied to credit, employment and housing ads, the consequences can be problematic.

Ad content — text and images — also has a strong effect on whether ads get shown to men or women, even when the bidding strategy is the same and gender-agnostic targeting is used. In particular, the researchers found images had a surprisingly large effect on ad delivery. Ad URL destination has some effect — an ad pointing to a bodybuilding site and an ad pointing to a cosmetics site had a baseline delivery distribution of 48 percent men and 40 percent men respectively. The addition of a title and headline doesn’t change that much. But once the researchers added an image to the ad, the distribution pattern changed, with the bodybuilding site ad reaching an audience that was 75 percent male and the cosmetics ad reaching an audience that was 90 percent female. According to the researchers, their tests suggest, “Facebook has an automated image classification mechanism in place that is used to steer different ads towards different subsets of the user population.”

“In terms of credit, employment and housing ads, the problem with this system is that it discriminates where it shouldn’t: Five ads for lumber industry jobs were delivered to an audience that was more than 90 percent men and more than 70 percent white; five ads for janitorial work were delivered to an audience that was more than 65 percent women and 75 percent black,” the report adds. “Housing ads also showed a racial skew.”

The latest findings come after years of criticism of Facebook’s ad system. Last month, Facebook announced changes to the platform intended to prevent advertisers from deploying unfair credit, employment and housing ads. One week later, the U.S. Department of Housing and Urban Development sued Facebook for violating the Fair Housing Act.

We hold people with power to account. Why not algorithms?

Robert Jones was driving home through the pretty town of Todmorden, in West Yorkshire, when he noticed the fuel light flashing on the dashboard of his car. He had just a few miles to find a petrol station, which was cutting things rather fine, but thankfully his GPS seemed to have found a short cut – sending him on a narrow winding path up the side of the valley.

Robert followed the machine’s instructions, but as he drove, the road got steeper and narrower. After a couple of miles, it turned into a dirt track, but Robert wasn’t fazed. After all, he thought, he had “no reason not to trust the satnav”.

Just a short while later, anyone who happened to be looking up from the valley below would have seen the nose of Robert’s BMW appearing over the brink of the cliff above, saved from the 100ft drop only by the flimsy wooden fence at the edge he had just crashed into. “It kept insisting the path was a road,” he told the Halifax Courier after the incident. “So I just trusted it. You don’t expect to be taken nearly over a cliff.”

I can imagine Robert was left red-faced by his blunder, but in a way, I think he’s in good company. When it comes to placing blind faith in a piece of software, his mistake was one we’re almost all capable of making. In our urge to automate, in our eagerness to adopt the latest innovations, we appear to have developed a habit of unthinkingly handing over power to machines.

All around us, algorithms provide a kind of convenient source of authority: an easy way to delegate responsibility, a short cut we take without thinking. Who is really going to click through to the second page of Google results every time and think critically about the information that has been served up? Or go to every airline to check if a comparison site is listing the cheapest deals? Or get out a ruler and a road map to confirm that their GPS is offering the shortest route?

But already in our hospitals, our schools, our shops, our courtrooms and our police stations, artificial intelligence is silently working behind the scenes, feeding on our data and making decisions on our behalf. Sure, this technology has the capacity for enormous social good – it can help us diagnose breast cancer, catch serial killers, avoid plane crashes and, as the health secretary, Matt Hancock, has proposed, potentially save lives using NHS data and genomics. Unless we know when to trust our own instincts over the output of a piece of software, however, it also brings the potential for disruption, injustice and unfairness.

If we permit flawed machines to make life-changing decisions on our behalf – by allowing them to pinpoint a murder suspect, to diagnose a condition or take over the wheel of a car – we have to think carefully about what happens when things go wrong.

Back in 2012, a group of 16 Idaho residents with disabilities received some unexpected bad news. The Department of Health and Welfare had just invested in a “budget tool” – a swish piece of software, built by a private company, that automatically calculated their entitlement to state support. It had declared that their care budgets should be slashed by several thousand dollars each, a decision that would put them at serious risk of being institutionalised.

The problem was that the budget tool’s logic didn’t seem to make much sense. While this particular group of people had deep cuts to their allowance, others in a similar position actually had their benefits increased by the machine. As far as anyone could tell from the outside, the computer was essentially plucking numbers out of thin air.

From the inside, this wasn’t far from the truth. It would take four years and a class-action lawsuit to get to the bottom of what was going on, but when the budget tool was eventually handed over for scrutiny, a hint of what we all have to fear from the unrestricted power of machine decision-making was revealed.

The algorithm was junk. The data was riddled with errors. The calculations were so bad that the court would eventually rule its determinations unconstitutional. It had, effectively, been awarding benefits at random. And yet, when dressed up as a slick and glossy new computer programme, the algorithm brought with it an air of authority that was difficult to argue against.

In the days before proper regulation of medicines, you used to be able to bottle up any coloured liquid and make a small fortune selling it as a cure-all. Today, in the still largely unfettered world of AI and algorithms, we’re seeing people make bold, unsubstantiated and sometimes irresponsible claims about their inventions.

There’s only one way to prevent this. I think it’s time we started treating machines as we would any other source of power. I would like to propose a system of regulation for algorithms, and perhaps a good place to start would be with Tony Benn’s five simple questions, designed for powerful people, but equally applicable to modern AI:

“What power have you got?
“Where did you get it from?
“In whose interests do you use it?
“To whom are you accountable?
“How do we get rid of you?”

Because, ultimately, we can’t just think of algorithms in isolation. We have to think of the failings of the people who design them – and the danger to those they are supposedly designed to serve.

The Seemingly Pervasive Sinister Side of Algorithmic Screen Time for Children

Writer and artist James Bridle writes in Medium:

“Someone or something or some combination of people and things is using YouTube to systematically frighten, traumatize, and abuse children, automatically and at scale, and it forces me to question my own beliefs about the internet, at every level.

To begin: Kids’ YouTube is definitely and markedly weird. I’ve been aware of its weirdness for some time. Last year, there were a number of articles posted about the Surprise Egg craze. Surprise Eggs videos depict, often at excruciating length, the process of unwrapping Kinder and other egg toys. That’s it, but kids are captivated by them. There are thousands and thousands of these videos and thousands and thousands, if not millions, of children watching them. […] What I find somewhat disturbing about the proliferation of even (relatively) normal kids videos is the impossibility of determining the degree of automation which is at work here; how to parse out the gap between human and machine.”

Sapna Maheshwari also explores in The New York Times:

“Parents and children have flocked to Google-owned YouTube Kids since it was introduced in early 2015. The app’s more than 11 million weekly viewers are drawn in by its seemingly infinite supply of clips, including those from popular shows by Disney and Nickelodeon, and the knowledge that the app is supposed to contain only child-friendly content that has been automatically filtered from the main YouTube site. But the app contains dark corners, too, as videos that are disturbing for children slip past its filters, either by mistake or because bad actors have found ways to fool the YouTube Kids algorithms. In recent months, parents like Ms. Burns have complained that their children have been shown videos with well-known characters in violent or lewd situations and other clips with disturbing imagery, sometimes set to nursery rhymes.”

Very horrible and creepy.

Facebook: Cracking the Code (2017)

“What’s on your mind?” It’s the friendly Facebook question which lets you share how you’re feeling. It’s also the question that unlocks the details of your life and helps turn your thoughts into profits.

Facebook has the ability to track much of your browsing history, even when you’re not logged on, and even if you aren’t a member of the social network at all. This is one of the methods used to deliver targeted advertising and ‘news’ to your Facebook feed. This is why you are unlikely to see anything that challenges your world view.

This feedback loop is fuelling the rise and power of ‘fake news’. “We’re seeing news that’s tailored ever more tightly towards those kinds of things that people will click on, and will share, rather than things that perhaps are necessarily good for them”, says one Media Analyst.

This information grants huge power to those with access to it. Republican Party strategist Patrick Ruffini says, “What it does give us is much greater level of certainty and granularity and precision down to the individual voter, down to the individual precinct about how things are going to go”. As a result, former Facebook journalist Adam Schrader thinks that there’s “a legitimate argument to this, that Facebook influenced the election, the United States election results.”

Google’s crowd-sourced CAPTCHA “intelligence” turning invisible

We train the machine so well, and its use is so ubiquitous, that it can become invisible: Google is making CAPTCHAs invisible using “a combination of machine learning and advanced risk analysis that adapts to new and emerging threats,” Ars Technica reports. Emphasis added.

“The old reCAPTCHA system was pretty easy — just a simple “I’m not a robot” checkbox would get people through your sign-up page. The new version is even simpler, and it doesn’t use a challenge or checkbox. It works invisibly in the background, somehow, to identify bots from humans.

When sites switch over to the invisible CAPTCHA system, most users won’t see CAPTCHAs at all, not even the “I’m not a robot” checkbox. If you are flagged as “suspicious” by the system, then it will display the usual challenges.

reCAPTCHA was bought by Google in 2009 and was used to put unsuspecting website users to work for Google. Some CAPTCHA systems create arbitrary problems for users to solve, but older reCAPTCHA challenges actually used problems Google’s computers needed to solve but couldn’t. Google digitizes millions of books, but sometimes the OCR (optical character recognition) software can’t recognize a word, so that word is sent into the reCAPTCHA system for solving by humans. If you’ve ever solved a reCAPTCHA that looks like a set of numbers, those were from Google’s camera-covered Street View cars, which whizz down the streets and identify house numbers. If the OCR software couldn’t figure out a house number, that number was made into a CAPTCHA for solving by humans. The grid of pictures that would ask you to “select all the cats” was used to train computer image recognition algorithms.”
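
The dual-word scheme described above is simple enough to sketch. In this toy version (the word lists, word ids and agreement threshold are all invented for illustration; the real pipeline is far more elaborate), each challenge pairs a control word with a known answer against a word the OCR failed on: the control word screens out bots, and a transcription is accepted once enough verified humans agree.

```python
import random
from collections import Counter

# Toy sketch of the human-computation scheme described above. Each
# challenge pairs a control word (answer known) with an unknown word
# the OCR could not read. The control word verifies the human; their
# answer for the unknown word is recorded as a vote.

control_words = {"overlooks": "overlooks", "morning": "morning"}  # hypothetical
unknown_votes = {"w1": Counter(), "w2": Counter()}  # hypothetical unknown-word ids
AGREEMENT_THRESHOLD = 3

def serve_challenge():
    """Pick one control word and one unknown word to show the user."""
    return random.choice(list(control_words)), random.choice(list(unknown_votes))

def submit(control, control_answer, unknown, unknown_answer):
    """Return True if the user passed the control word; record their vote."""
    if control_answer != control_words[control]:
        return False  # failed the known word: treated as a bot
    unknown_votes[unknown][unknown_answer] += 1
    return True

def resolved(unknown):
    """Accepted transcription for an unknown word, or None if undecided."""
    if not unknown_votes[unknown]:
        return None
    answer, count = unknown_votes[unknown].most_common(1)[0]
    return answer if count >= AGREEMENT_THRESHOLD else None
```

The same structure covers the Street View house numbers and the image-labelling grids: the work is split so that the part the machine can check verifies the human, and the part the machine cannot do is harvested from them.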

Stare Into The Lights My Pretties

How algorithms (secretly) run the world

“When you browse online for a new pair of shoes, pick a movie to stream on Netflix or apply for a car loan, an algorithm likely has a say in the outcome.

The complex mathematical formulas are playing a growing role in all walks of life: from detecting skin cancers to suggesting new Facebook friends, deciding who gets a job, how police resources are deployed, who gets insurance at what cost, or who is on a “no fly” list.

Algorithms are being used—experimentally—to write news articles from raw data, while Donald Trump’s presidential campaign was helped by behavioral marketers who used an algorithm to locate the highest concentrations of “persuadable voters.”

But while such automated tools can inject a measure of objectivity into erstwhile subjective decisions, fears are rising over the lack of transparency algorithms can entail, with pressure growing to apply standards of ethics or “accountability.”

Data scientist Cathy O’Neil cautions about “blindly trusting” formulas to determine a fair outcome.

“Algorithms are not inherently fair, because the person who builds the model defines success,” she said.

O’Neil argues that while some algorithms may be helpful, others can be nefarious. In her 2016 book, “Weapons of Math Destruction,” she cites some troubling examples in the United States:

  • Public schools in Washington DC in 2010 fired more than 200 teachers—including several well-respected instructors—based on scores in an algorithmic formula which evaluated performance.
  • A man diagnosed with bipolar disorder was rejected for employment at seven major retailers after a third-party “personality” test deemed him a high risk based on its algorithmic classification.
  • Many jurisdictions are using “predictive policing” to shift resources to likely “hot spots.” O’Neil says that depending on how data is fed into the system, this could lead to discovery of more minor crimes and a “feedback loop” which stigmatizes poor communities.
  • Some courts rely on computer-ranked formulas to determine jail sentences and parole, which may discriminate against minorities by taking into account “risk” factors such as their neighborhoods and friend or family links to crime.
  • In the world of finance, brokers “scrape” data from online and other sources in new ways to make decisions on credit or insurance. This too often amplifies prejudice against the disadvantaged, O’Neil argues.
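
O’Neil’s predictive-policing point is a feedback loop, and a toy simulation makes the mechanism concrete. Everything below is hypothetical and deliberately simplified: the two districts have identical underlying crime, and only the historical records differ.

```python
# Toy illustration of the "feedback loop" in the predictive-policing
# example above. District A starts with more recorded incidents (an
# artefact of where police looked in the past, not of more crime).
# Patrols follow the records, and discovered minor offences follow
# the patrols, so the recorded gap never closes.

HITS_PER_PATROL = 4                  # minor offences a patrol happens to see
records = {"A": 120.0, "B": 80.0}    # initial imbalance in the data

def patrol_allocation(records, total_patrols=100):
    """Predictive step: patrols proportional to past recorded crime."""
    total = sum(records.values())
    return {d: total_patrols * r / total for d, r in records.items()}

for year in range(10):
    for district, patrols in patrol_allocation(records).items():
        records[district] += HITS_PER_PATROL * patrols

# District A still accounts for 60% of recorded crime, purely because
# it started with more records in the data it is judged by.
share_A = records["A"] / (records["A"] + records["B"])
```

After ten simulated years district A still accounts for 60% of recorded crime: the data confirms the very allocation that produced it, which is the stigmatizing loop O’Neil describes.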

Her findings were echoed in a White House report last year warning that algorithmic systems “are not infallible—they rely on the imperfect inputs, logic, probability, and people who design them.”

Social media and the anti-fact age

Adam Turner at The Age writes:

“When you look at how social media works, it was inevitable that it would turn into one of the world’s most powerful propaganda tools. It’s often painted as a force for good, letting people bypass the traditional gatekeepers in order to quickly disseminate information, but there’s no guarantee that this information is actually true.

Facebook has usurped the role of the mainstream media in disseminating news, but hasn’t taken on the fourth estate’s corresponding responsibility for keeping the bastards honest. The mainstream media has no-one to blame but itself, having engaged in a tabloid race to the bottom which devalued truth to the point that blatant liars are considered more honest.

The fragmentation of news is already creating a filter bubble in that most people don’t tend to read the newspaper from front to back, or sit through entire news bulletins, they just pick and choose what interests them. The trouble with Facebook is that it also reinforces bias, the more extreme your political views the less likely you are to see anything with an opposing viewpoint which might help you develop a more well-rounded view of the world.”

Brooke Binkowski, the managing editor of the fact-checking site Snopes.com, says, “Honestly, most of the fake news is incredibly easy to debunk because it’s such obvious bullshit…”

The problem, Binkowski believes, is that the public has lost faith in the media broadly — therefore no media outlet is considered credible any longer. The reasons are familiar: as the business of news has grown tougher, many outlets have been stripped of the resources they need for journalists to do their jobs correctly. “When you’re on your fifth story of the day and there’s no editor because the editor’s been fired and there’s no fact checker so you have to Google it yourself and you don’t have access to any academic journals or anything like that, you will screw stories up,” she says.

UPDATE 1/12/2016 — Most students can’t spot fake news

“If you thought fake online news was a problem for impressionable adults, it’s even worse for the younger crowd. A Stanford study of 7,804 middle school, high school and college students has found that most of them couldn’t identify fake news on their own. Their susceptibility varied with age, but even a large number of the older students fell prey to bogus reports. Over two thirds of middle school kids didn’t see why they shouldn’t trust a bank executive’s post claiming that young adults need financial help, while nearly 40 percent of high schoolers didn’t question the link between an unsourced photo and the claims attached to it.

Why did many of the students misjudge the authenticity of a story? They were fixated on the appearance of legitimacy, rather than the quality of information. A large photo or a lot of detail was enough to make a Twitter post seem credible, even if the actual content was incomplete or wrong. There are plenty of adults who respond this way, we’d add, but students are more vulnerable than most.

As the Wall Street Journal explains, part of the solution is simply better education: teach students to verify sources, question motivations and otherwise think critically.”

(Emphasis added)

An alarming number of people rely *solely* on a Social Media network for news

Note the stats from the Pew Research Center for Journalism and Media: 64% of users surveyed rely on just one social media source for news content—i.e. Facebook, Twitter, YouTube, etc.—while 26% check only two sources, and 10% three or more. A staggeringly concerning trend, given the rampant personalisation of these screen environments and what we know about the functioning and reinforcement of The Filter Bubble. This is a centralisation of power, and a lack of diversity, that the “old media” perhaps could only dream of…

From The Huffington Post:

“It’s easy to believe you’re getting diverse perspectives when you see stories on Facebook. You’re connected not just to many of your friends, but also to friends of friends, interesting celebrities and publications you “like.”

But Facebook shows you what it thinks you’ll be interested in. The social network pays attention to what you interact with, what your friends share and comment on, and overall reactions to a piece of content, lumping all of these factors into an algorithm that serves you items you’re likely to engage with. It’s a simple matter of business: Facebook wants you coming back, so it wants to show you things you’ll enjoy.”
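
The ranking logic described here, several engagement signals folded into one score, can be sketched in a few lines. The signal names and weights below are invented for illustration; Facebook’s actual model is vastly more complex, but the shape of the objective is the point.

```python
from dataclasses import dataclass

# Minimal sketch of engagement-based feed ranking: combine a few
# signals into one score and sort the feed by it. Signal names and
# weights are hypothetical, not Facebook's.

@dataclass
class Post:
    title: str
    author_affinity: float    # how often you interact with this source
    friend_engagement: float  # shares/comments from your friends
    global_reactions: float   # overall reactions to the post

WEIGHTS = {"author_affinity": 3.0, "friend_engagement": 2.0, "global_reactions": 1.0}

def score(post: Post) -> float:
    return (WEIGHTS["author_affinity"] * post.author_affinity
            + WEIGHTS["friend_engagement"] * post.friend_engagement
            + WEIGHTS["global_reactions"] * post.global_reactions)

def rank_feed(posts):
    # The feed you see is whatever maximises predicted engagement;
    # nothing in this objective rewards diversity of viewpoints.
    return sorted(posts, key=score, reverse=True)
```

Because familiar, agreeable sources score highest on exactly these signals, sorting purely by predicted engagement is what produces the filter bubble described above.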

BBC also reported earlier this year that Social Media networks outstripped television as the news source for young people (emphasis added):

“Of the 18-to-24-year-olds surveyed, 28% cited social media as their main news source, compared with 24% for TV.

The Reuters Institute for the Study of Journalism research also suggests 51% of people with online access use social media as a news source. Facebook and other social media outlets have moved beyond being “places of news discovery” to become the place people consume their news, it suggests.

The study found Facebook was the most common source—used by 44% of all those surveyed—to watch, share and comment on news. Next came YouTube on 19%, with Twitter on 10%. Apple News accounted for 4% in the US and 3% in the UK, while messaging app Snapchat was used by just 1% or less in most countries.

According to the survey, consumers are happy to have their news selected by algorithms, with 36% saying they would like news chosen based on what they had read before and 22% happy for their news agenda to be based on what their friends had read. But 30% still wanted the human oversight of editors and other journalists in picking the news agenda and many had fears about algorithms creating news “bubbles” where people only see news from like-minded viewpoints.

Most of those surveyed said they used a smartphone to access news, with the highest levels in Sweden (69%), Korea (66%) and Switzerland (61%), and they were more likely to use social media rather than going directly to a news website or app.

The report also suggests users are noticing the original news brand behind social media content less than half of the time, something that is likely to worry traditional media outlets.”

And to exemplify the issue, these words from Slashdot: “Over the past few months, we have seen how Facebook’s Trending Topics feature is often biased, and moreover, how sometimes fake news slips through its filter.”

“The Washington Post monitored the website for over three weeks and found that Facebook is still struggling to get its algorithm right. In the six weeks since Facebook revamped its Trending system, the site has repeatedly promoted “news” stories that are actually works of fiction. As part of a larger audit of Facebook’s Trending topics, the Intersect logged every news story that trended across four accounts during the workdays from Aug. 31 to Sept. 22. During that time, we uncovered five trending stories that were indisputably fake and three that were profoundly inaccurate. On top of that, we found that news releases, blog posts from sites such as Medium and links to online stores such as iTunes regularly trended.”

UPDATE 9/11/16 — US President Barack Obama criticises Facebook for spreading fake stories: “The way campaigns have unfolded, we just start accepting crazy stuff as normal,” Obama said. “As long as it’s on Facebook, and people can see it, as long as it’s on social media, people start believing it, and it creates this dust cloud of nonsense.”

Machine Logic: Our lives are ruled by big tech’s decisions by data

The Guardian’s Julia Powles writes about how, with the advent of artificial intelligence and so-called “machine learning,” society is increasingly a world where decisions are shaped by calculations and data analytics rather than traditional human judgement:

“Jose van Dijck, president of the Dutch Royal Academy and the conference’s keynote speaker, expands: Datification is the core logic of what she calls “the platform society,” in which companies bypass traditional institutions, norms and codes by promising something better and more efficient — appealing deceptively to public values, while obscuring private gain. Van Dijck and peers have nascent, urgent ideas. They commence with a pressing agenda for strong interdisciplinary research — something Kate Crawford is spearheading at Microsoft Research, as are many other institutions, including the new Leverhulme Centre for the Future of Intelligence. There’s the old theory to confront, that this is a conscious move on the part of consumers and, if so, there’s always a theoretical opt-out. Yet even digital activists plot by Gmail, concedes Fieke Jansen of the Berlin-based advocacy organisation Tactical Tech. The Big Five tech companies, as well as the extremely concentrated sources of finance behind them, are at the vanguard of “a society of centralized power and wealth.” “How did we let it get this far?” she asks. Crawford says there are very practical reasons why tech companies have become so powerful. “We’re trying to put so much responsibility on to individuals to step away from the ‘evil platforms,’ whereas in reality, there are so many reasons why people can’t. The opportunity costs to employment, to their friends, to their families, are so high,” she says.”

“Faceless” recognition can identify you even when you hide your face

“With widespread adoption among law enforcement, advertisers, and even churches, face recognition has undoubtedly become one of the biggest threats to privacy out there.

By itself, the ability to instantly identify anyone just by seeing their face already creates massive power imbalances, with serious implications for free speech and political protest.”

Microsoft pitches technology that can read facial expressions at political rallies.

“But more recently, researchers have demonstrated that even when faces are blurred or otherwise obscured, algorithms can be trained to identify people by matching previously-observed patterns around their head and body.

In a new paper uploaded to the ArXiv pre-print server, researchers at the Max Planck Institute in Saarbrücken, Germany demonstrate a method of identifying individuals even when most of their photos are un-tagged or obscured. The researchers’ system, which they call the “Faceless Recognition System,” trains a neural network on a set of photos containing both obscured and visible faces, then uses that knowledge to predict the identity of obscured faces by looking for similarities in the area around a person’s head and body.”

[…]

“In the past, Facebook has shown its face recognition algorithms can predict the identity of users when they obscure their face with 83% accuracy, using cues such as their stance and body type. But the researchers say their system is the first to do so using a trainable system that uses a full range of body cues surrounding blurred and blacked-out faces.”
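
The matching idea, identifying an obscured face by comparing features of the surrounding head and body region against previously seen examples, can be sketched as a nearest-neighbour lookup. The feature vectors and names below are invented for illustration; the actual system learns its features with a trained neural network.

```python
import math

# Schematic sketch of "faceless" recognition: represent the region
# around a person's head and body as a feature vector, then identify
# an obscured face by finding the closest match among people observed
# before. Vectors and names here are hypothetical.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# "Gallery" of previously observed, identified people (invented values,
# standing in for cues like stance, clothing colour and build).
gallery = {
    "alice": [0.9, 0.1, 0.4],
    "bob":   [0.2, 0.8, 0.7],
}

def identify(obscured_features, threshold=0.8):
    """Best gallery match for a photo whose face is blurred or blacked out."""
    name, sim = max(((n, cosine_similarity(obscured_features, v))
                     for n, v in gallery.items()), key=lambda t: t[1])
    return name if sim >= threshold else None
```

The privacy implication follows directly from the structure: blurring the face removes only one of the vector’s dimensions, and the rest can still be enough to clear the matching threshold.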

 
