Resources

QR codes replace service staff as pandemic spurs automation in US

American workers in manufacturing plants and distribution centres have long worried that their employers would find ways to replace them with robots and artificial intelligence, but the Covid-19 crisis has brought that threat to service workers, too. Businesses are increasingly turning to automated tools for customer service tasks long done by low-wage staff. But rather than robots, it is the ubiquitous QR code, a matrix barcode, that is replacing humans [Editor’s note: the link may be paywalled]. Financial Times:
Many restaurants have begun to experiment with QR codes and order management systems such as Toast that allow diners to order food to their table from their phones instead of with human servers. Grocery stores have increased their investments in self-checkout kiosks that replace human cashiers, and more convenience stores including Circle K are experimenting with the computer vision technology pioneered by Amazon Go to allow customers to make purchases without standing in a checkout line at all. The shifts mean that some of the 1.7m leisure and hospitality jobs and 270,000 retail jobs the US economy has lost since its February 2020 high are unlikely to return.


TikTok Lawsuit Highlights How AI Is Screwing Over Voice Actors

With only 30 minutes of audio, companies can now create a digital clone of your voice and make it say words you never said. Using machine learning, voice AI companies like VocaliD can create synthetic voices from a person’s recorded speech — adopting unique qualities like speaking rhythm, pronunciation of consonants and vowels, and intonation. For tech companies, the ability to generate any sentence with a realistic-sounding human voice is an exciting, cost-saving frontier. But for the voice actors whose recordings form the foundation of text-to-speech (TTS) voices, this technology threatens to disrupt their livelihoods, raising questions about fair compensation and human agency in the age of AI.

At the center of this reckoning is voice actress Bev Standing, who is suing TikTok after alleging the company used her voice for its text-to-speech feature without compensation or consent. This is not the first such case; voice actress Susan Bennett discovered that audio she recorded for another company was repurposed to be the voice of Siri after Apple launched the feature in 2011. She was paid for the initial recording session but not for being Siri. Rallying behind Standing, voice actors donated to a GoFundMe that has raised nearly $7,000 towards her legal expenses and posted TikTok videos under the #StandingWithBev hashtag warning users about the feature. Standing’s supporters say the TikTok lawsuit is not just about Standing’s voice — it’s about the future of an entire industry attempting to adapt to new advancements in machine learning.

Standing’s case materializes some performers’ worst fears about the control this technology gives companies over their voices. Her lawsuit claims TikTok neither paid nor notified her before using her likeness for its text-to-speech feature, and that some videos using it voiced “foul and offensive language,” causing “irreparable harm” to her reputation. Brands advertising on TikTok also had the text-to-speech voice at their disposal, meaning her voice could be used for explicitly commercial purposes. […] Laws protecting individuals from unauthorized clones of their voices are also in their infancy. Standing’s lawsuit invokes her right of publicity, which grants individuals the right to control commercial uses of their likeness, including their voice. In November 2020, New York became the first state to apply this right to digital replicas after years of advocacy from SAG-AFTRA, a performers’ union.
“We look to make sure that state rights of publicity are as strong as they can be, that any limitations on people being able to protect their image and voice are very narrowly drawn on first amendment lines,” Jeffrey Bennett, a general counsel for SAG-AFTRA, told Motherboard. “We look at this as a potentially great right of publicity case for this voice professional whose voice is being used in a commercial manner without her consent.”


Police Are Telling ShotSpotter To Alter Evidence From Gunshot-Detecting AI

On May 31 last year, 25-year-old Safarain Herring was shot in the head and dropped off at St. Bernard Hospital in Chicago by a man named Michael Williams. He died two days later. Chicago police eventually arrested the 64-year-old Williams and charged him with murder (Williams maintains that Herring was hit in a drive-by shooting). A key piece of evidence in the case is video surveillance footage showing Williams’ car stopped on the 6300 block of South Stony Island Avenue at 11:46 p.m. – the time and location where police say they know Herring was shot. How did they know that’s where the shooting happened? Police said ShotSpotter, a surveillance system that uses hidden microphone sensors to detect the sound and location of gunshots, generated an alert for that time and place. Except that’s not entirely true, according to recent court filings.

That night, 19 ShotSpotter sensors detected a percussive sound at 11:46 p.m. and determined the location to be 5700 South Lake Shore Drive – a mile away from the site where prosecutors say Williams committed the murder, according to a motion filed by Williams’ public defender. The company’s algorithms initially classified the sound as a firework. That weekend had seen widespread protests in Chicago in response to George Floyd’s murder, and some of those protesting lit fireworks. But after the 11:46 p.m. alert came in, a ShotSpotter analyst manually overrode the algorithms and “reclassified” the sound as a gunshot. Then, months later and after “post-processing,” another ShotSpotter analyst changed the alert’s coordinates to a location on South Stony Island Avenue near where Williams’ car was seen on camera. “Through this human-involved method, the ShotSpotter output in this case was dramatically transformed from data that did not support criminal charges of any kind to data that now forms the centerpiece of the prosecution’s murder case against Mr. Williams,” the public defender wrote in the motion.

The document is what’s known as a Frye motion – a request for a judge to examine and rule on whether a particular forensic method is scientifically valid enough to be entered as evidence. Rather than defend ShotSpotter’s technology and its employees’ actions in a Frye hearing, the prosecutors withdrew all ShotSpotter evidence against Williams. The case isn’t an anomaly, and the pattern it represents could have huge ramifications for ShotSpotter in Chicago, where the technology generates an average of 21,000 alerts each year. The technology is also currently in use in more than 100 cities. Motherboard’s review of court documents from the Williams case and other trials in Chicago and New York State, including testimony from ShotSpotter’s favored expert witness, suggests that the company’s analysts frequently modify alerts at the request of police departments – some of which appear to be grasping for evidence that supports their narrative of events.


Fired by Bot at Amazon: ‘It’s You Against the Machine’

Contract drivers say algorithms terminate them by email—even when they have done nothing wrong.

Stephen Normandin spent almost four years racing around Phoenix delivering packages as a contract driver for Amazon.com Inc. Then one day, he received an automated email. The algorithms tracking him had decided he wasn’t doing his job properly.

The 63-year-old Army veteran was stunned. He’d been fired by a machine.

Normandin says Amazon punished him for things beyond his control that prevented him from completing his deliveries, such as locked apartment complexes. Amazon assigned him some pre-dawn deliveries at apartment complexes when their gates were still locked, a common complaint among Flex drivers. The algorithm instructs drivers in such instances to deliver packages to the main office, but that wasn’t open either. Normandin called the customer as instructed—a long shot because most people don’t answer calls from unfamiliar numbers, especially early morning. He called driver support, which couldn’t get through to the customer either. Meanwhile, the clock was ticking, and the algorithm was taking note.

When Ryan Cope was deactivated in 2019, he didn’t bother arguing or consider paying for arbitration. By then, Cope had already decided there was no way he could meet the algorithms’ demands. Driving miles along winding dirt roads outside Denver in the snow, he often shook his head in disbelief that Amazon expected the customer to get the package within two hours.

When drivers do challenge poor ratings, they can’t tell if they’re communicating with real people. Responses often include just a first name or no name at all, and the replies typically apply to a variety of situations rather than a specific problem. Even if a name is attached, a machine most likely generated the first few email responses, according to people familiar with the matter.

When human managers get involved, they typically conduct a hasty review—if they do one at all—because they must meet their own performance standards. A former employee at a driver support call center said dozens of part-time seasonal workers with little training were assigned to oversee issues for millions of drivers.


Welcome To Walmart: The Robot Will Grab Your Groceries

Walmart is testing back-of-store automated systems that can collect 800 products an hour, 10 times as many as a store worker. In the backroom of a Walmart store in Salem, N.H., is a floor-to-ceiling robotic system that the country’s largest retailer hopes will help it sell more groceries online. Workers stand on platforms in front of screens assembling online orders of milk, cereal and toilet paper from the hulking automated system. Wheeled robots carrying small baskets move along metal tracks to collect those items. They are bagged for pickup later by shoppers or delivery to homes. Walmart is one of several grocers including Albertsons and Kroger that are using automation to improve efficiency in a fast-growing but costly business that comes with a range of logistical challenges.

The backroom robots could help Walmart cut labor costs and fill orders faster and more accurately. They could also address another problem: unclogging aisles that these days can get crowded with clerks picking products for online orders. A store worker can collect around 80 products from store shelves an hour, estimated John Lert, founder and chief executive of Alert Innovation, the startup that has worked with Walmart to design the system dubbed Alphabot. It is designed to collect 800 products an hour per workstation, operated by a single individual, Mr. Lert said. Workers stock the 24-foot-high machine each day with the products most often ordered online, including refrigerated and frozen foods. Fresh produce is still picked by hand in store aisles.


Ex-Google Engineer Says That Robot Weapons May Cause Accidental Mass Killings

“A former Google engineer who worked on the company’s infamous military drone project has sounded a warning against the building of killer robots,” reports Business Insider.

Laura Nolan had been working at Google for four years when she was recruited to its collaboration with the US Department of Defense, known as Project Maven, in 2017, according to the Guardian. Project Maven was focused on using AI to enhance military drones, building AI systems which would be able to single out enemy targets and distinguish between people and objects. Google canned Project Maven after employee outrage, with thousands of employees signing a petition against the project and about a dozen quitting in protest, and allowed the contract to lapse in March this year. Nolan herself resigned after she became “increasingly ethically concerned” about the project, she said…

Nolan fears that the next step beyond AI-enabled weapons like drones could be fully autonomous AI weapons. “What you are looking at are possible atrocities and unlawful killings even under laws of warfare, especially if hundreds or thousands of these machines are deployed,” she said…. Although no country has yet come forward to say it’s working on fully autonomous robot weapons, many are building more and more sophisticated AI to integrate into their militaries. The US Navy has a self-piloting warship, capable of spending months at sea with no crew, and Israel boasts of having drones capable of identifying and attacking targets autonomously — although at the moment they require a human middleman to give the go-ahead.

Nolan is urging countries to declare an outright ban on autonomous killing robots, similar to conventions around the use of chemical weapons.


Most Americans say they can’t tell the difference between a social media bot and a human

A new study from Pew Research Center found that most Americans can’t tell social media bots from real humans, and most are convinced bots are bad. “Only 47 percent of Americans are somewhat confident they can identify social media bots from real humans,” reports The Verge. “In contrast, most Americans surveyed in a study about fake news were confident they could identify false stories.”

The Pew study is an uncommon look at what the average person thinks about these automated accounts that plague social media platforms. After surveying over 4,500 adults in the U.S., Pew found that most people actually don’t know much about bots. Two-thirds of Americans have at least heard of social media bots, but only 16 percent say they’ve heard a lot about them, while 34 percent say they’ve never heard of them at all. The knowledgeable tend to be younger, and men are more likely than women (by 22 percentage points) to say they’ve heard of bots. Since the survey results are self-reported, there’s a chance people are overstating or understating their knowledge of bots. Of those who have heard of bots, 80 percent say the accounts are used for bad purposes.

Regardless of party affiliation or age, most people think that bots are bad. And the more a person knows about social media bots, the less supportive they are of bots being used for various purposes, like activists drawing attention to topics or a political party using bots to promote candidates.


EU Ruling: Self-Driving Car Data Will Be Copyrighted By the Manufacturer

Yesterday, at a routine vote on regulations for self-driving cars, members of the European People’s Party voted down a clause that would protect a vehicle’s telemetry so that it couldn’t become someone’s property. The clause affirmed that “data generated by autonomous transport are automatically generated and are by nature not creative, thus making copyright protection or the right on data-bases inapplicable.” Boing Boing reports:

This is data that we will need to evaluate the safety of autonomous vehicles, to fine-tune their performance, to ensure that they are working as the manufacturer claims — data that will not be in the public domain (as the rejected clause would have ensured), but will instead be someone’s exclusive purview, to release or withhold as they see fit. Who will own this data? It’s unlikely that it will be the owners of the vehicles.

It’s already the case that most auto manufacturers use license agreements and DRM to lock up your car so that you can’t fix it yourself or take it to an independent service center. The aggregated data from millions of self-driving cars across the EU isn’t just useful to public safety analysts, consumer rights advocates, security researchers and reviewers (who would benefit from this data living in the public domain) — it is also a potential gold mine for car manufacturers, who could sell it to insurers, market researchers and other deep-pocketed corporate interests who can profit by hiding that data from the public who generate it and who must share their cities and streets with high-speed killer robots.


We hold people with power to account. Why not algorithms?

Robert Jones was driving home through the pretty town of Todmorden, in West Yorkshire, when he noticed the fuel light flashing on the dashboard of his car. He had just a few miles to find a petrol station, which was cutting things rather fine, but thankfully his GPS seemed to have found a short cut – sending him on a narrow winding path up the side of the valley.

Robert followed the machine’s instructions, but as he drove, the road got steeper and narrower. After a couple of miles, it turned into a dirt track, but Robert wasn’t fazed. After all, he thought, he had “no reason not to trust the satnav”.

Just a short while later, anyone who happened to be looking up from the valley below would have seen the nose of Robert’s BMW appearing over the brink of the cliff above, saved from the 100ft drop only by the flimsy wooden fence at the edge he had just crashed into. “It kept insisting the path was a road,” he told the Halifax Courier after the incident. “So I just trusted it. You don’t expect to be taken nearly over a cliff.”

I can imagine Robert was left red-faced by his blunder, but in a way, I think he’s in good company. When it comes to placing blind faith in a piece of software, his mistake was one we’re almost all capable of making. In our urge to automate, in our eagerness to adopt the latest innovations, we appear to have developed a habit of unthinkingly handing over power to machines.

All around us, algorithms provide a kind of convenient source of authority: an easy way to delegate responsibility, a short cut we take without thinking. Who is really going to click through to the second page of Google results every time and think critically about the information that has been served up? Or go to every airline to check if a comparison site is listing the cheapest deals? Or get out a ruler and a road map to confirm that their GPS is offering the shortest route?

But already in our hospitals, our schools, our shops, our courtrooms and our police stations, artificial intelligence is silently working behind the scenes, feeding on our data and making decisions on our behalf. Sure, this technology has the capacity for enormous social good – it can help us diagnose breast cancer, catch serial killers, avoid plane crashes and, as the health secretary, Matt Hancock, has proposed, potentially save lives using NHS data and genomics. Unless we know when to trust our own instincts over the output of a piece of software, however, it also brings the potential for disruption, injustice and unfairness.

If we permit flawed machines to make life-changing decisions on our behalf – by allowing them to pinpoint a murder suspect, to diagnose a condition or take over the wheel of a car – we have to think carefully about what happens when things go wrong.

Back in 2012, a group of 16 Idaho residents with disabilities received some unexpected bad news. The Department of Health and Welfare had just invested in a “budget tool” – a swish piece of software, built by a private company, that automatically calculated their entitlement to state support. It had declared that their care budgets should be slashed by several thousand dollars each, a decision that would put them at serious risk of being institutionalised.

The problem was that the budget tool’s logic didn’t seem to make much sense. While this particular group of people had deep cuts to their allowance, others in a similar position actually had their benefits increased by the machine. As far as anyone could tell from the outside, the computer was essentially plucking numbers out of thin air.

From the inside, this wasn’t far from the truth. It would take four years and a class-action lawsuit to get to the bottom of what was going on, but when the budget tool was eventually handed over for scrutiny, a hint of what we all have to fear from the unrestricted power of machine decision-making was revealed.

The algorithm was junk. The data was riddled with errors. The calculations were so bad that the court would eventually rule its determinations unconstitutional. It had, effectively, been awarding benefits at random. And yet, when dressed up as a slick and glossy new computer programme, the algorithm brought with it an air of authority that was difficult to argue against.

In the days before proper regulation of medicines, you used to be able to bottle up any coloured liquid and make a small fortune selling it as a cure-all. Today, in the still largely unfettered world of AI and algorithms, we’re seeing people make bold, unsubstantiated and sometimes irresponsible claims about their inventions.

There’s only one way to prevent this. I think it’s time we started treating machines as we would any other source of power. I would like to propose a system of regulation for algorithms, and perhaps a good place to start would be with Tony Benn’s five simple questions, designed for powerful people, but equally applicable to modern AI:

“What power have you got?
“Where did you get it from?
“In whose interests do you use it?
“To whom are you accountable?
“How do we get rid of you?”

Because, ultimately, we can’t just think of algorithms in isolation. We have to think of the failings of the people who design them – and the danger to those they are supposedly designed to serve.


Artificial intelligence can create a 3D model of a person—from just a few seconds of video

Artificial intelligence has been used to create 3D models of people’s bodies for virtual reality avatars, surveillance, fashion visualization, or movies. But it typically requires special camera equipment to detect depth or to view someone from multiple angles. A new algorithm creates 3D models using standard video footage from one angle.

The system has three stages. First, it analyzes a video a few seconds long of someone moving—preferably turning 360° to show all sides—and for each frame creates a silhouette separating the person from the background. Based on machine learning techniques—in which computers learn a task from many examples—it roughly estimates the 3D body shape and location of joints. In the second stage, it “unposes” the virtual human created from each frame, making them all stand with arms out in a T shape, and combines information about the T-posed people into one, more accurate model. Finally, in the third stage, it applies color and texture to the model based on recorded hair, clothing, and skin.
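The article describes the pipeline only in prose, so here is a minimal Python sketch of how the three stages fit together. Every function below (segment_person, estimate_shape_and_joints, unpose_to_tpose, fuse_shapes, apply_texture) is a hypothetical placeholder standing in for a learned model or geometry routine; this illustrates the stage structure, not the researchers’ actual code.

    # Illustrative sketch of the three-stage monocular reconstruction pipeline.
    # All helpers are hypothetical placeholders, not the authors' API.

    def segment_person(frame):
        """Stage 1a: silhouette mask separating the person from the background."""
        raise NotImplementedError  # placeholder for a segmentation model

    def estimate_shape_and_joints(frame, silhouette):
        """Stage 1b: rough 3D body shape and joint positions for one frame."""
        raise NotImplementedError  # placeholder for a learned shape/pose estimator

    def unpose_to_tpose(shape, joints):
        """Stage 2a: warp a per-frame estimate into a canonical T-pose."""
        raise NotImplementedError  # placeholder for the "unposing" step

    def fuse_shapes(canonical_shapes):
        """Stage 2b: merge all T-posed estimates into one refined body model."""
        raise NotImplementedError  # placeholder for shape fusion

    def apply_texture(model, frames):
        """Stage 3: project recorded hair, clothing and skin colour onto the mesh."""
        raise NotImplementedError  # placeholder for texture mapping

    def reconstruct_person(frames):
        """Build a textured 3D body model from a few seconds of single-camera video."""
        # Stage 1: per-frame silhouettes plus rough shape and joint estimates.
        estimates = []
        for frame in frames:
            silhouette = segment_person(frame)
            shape, joints = estimate_shape_and_joints(frame, silhouette)
            estimates.append((shape, joints))

        # Stage 2: "unpose" every frame into a T-pose and fuse into one model.
        canonical = [unpose_to_tpose(shape, joints) for shape, joints in estimates]
        model = fuse_shapes(canonical)

        # Stage 3: colour and texture the fused model from the original footage.
        return apply_texture(model, frames)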

The researchers tested the method with a variety of body shapes, clothing, and backgrounds and found that it had an average accuracy within 5 millimeters, they will report in June at the Computer Vision and Pattern Recognition conference in Salt Lake City. The system can also reproduce the folding and wrinkles of fabric, but it struggles with skirts and long hair. With a model of you, the researchers can change your weight, clothing, and pose—and even make you perform a perfect pirouette. No practice necessary.
