Archives 28 July 2021

Evernote Quietly Disappeared From an Anti-Surveillance Lobbying Group’s Website

In 2013, eight tech companies were accused of funneling their users’ data to the U.S. National Security Agency under the so-called PRISM program, according to highly classified government documents leaked by NSA whistleblower Edward Snowden. Six months later, the companies formed a coalition under the name Reform Government Surveillance, which, as the name suggests, was created to lobby lawmakers for reforms to government surveillance laws. The idea was simple enough: call on lawmakers to limit surveillance to targeted threats rather than dragnet collection of Americans’ private data, provide greater oversight, and allow companies to be more transparent about the kinds of secret orders for user data that they receive.

Apple, Facebook, Google, LinkedIn, Microsoft, Twitter, Yahoo and AOL were the founding members of Reform Government Surveillance, or RGS, and over the years the group added Amazon, Dropbox, Evernote, Snap and Zoom as members. But then, sometime in June 2019, Evernote quietly disappeared from the RGS website without warning. Stranger still, nobody noticed for two years, not even Evernote. “We hadn’t realized our logo had been removed from the Reform Government Surveillance website,” said an Evernote spokesperson when reached for comment by TechCrunch. “We are still members.”

TikTok Lawsuit Highlights How AI Is Screwing Over Voice Actors

With only 30 minutes of audio, companies can now create a digital clone of your voice and make it say words you never said. Using machine learning, voice AI companies like VocaliD can create synthetic voices from a person’s recorded speech — adopting unique qualities like speaking rhythm, pronunciation of consonants and vowels, and intonation. For tech companies, the ability to generate any sentence with a realistic-sounding human voice is an exciting, cost-saving frontier. But for the voice actors whose recordings form the foundation of text-to-speech (TTS) voices, this technology threatens to disrupt their livelihoods, raising questions about fair compensation and human agency in the age of AI.
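
Neither TikTok’s nor VocaliD’s systems are public, but the general workflow (condition a text-to-speech model on a short reference recording so that the output mimics that speaker) can be sketched with an open-source toolkit. Below is a minimal illustration assuming recent versions of the open-source Coqui TTS package and its multilingual your_tts voice-cloning model; the file names are placeholders, and this is not the system used by any company named in the article.

```python
# A rough sketch of zero-shot voice cloning with the open-source Coqui TTS
# package (pip install TTS). Illustrative only; file names are placeholders.
from TTS.api import TTS

# Load a multilingual model that supports cloning from a short reference clip.
tts = TTS(model_name="tts_models/multilingual/multi-dataset/your_tts")

# Generate speech in the style of the speaker heard in reference.wav.
tts.tts_to_file(
    text="Words the speaker never actually said.",
    speaker_wav="reference.wav",   # placeholder: a clip of the target voice
    language="en",
    file_path="cloned_output.wav",
)
```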

At the center of this reckoning is voice actress Bev Standing, who is suing TikTok after alleging the company used her voice for its text-to-speech feature without compensation or consent. It is not the first case of its kind; voice actress Susan Bennett discovered that audio she recorded for another company was repurposed to be the voice of Siri after Apple launched the feature in 2011. She was paid for the initial recording session but not for being Siri. Rallying behind Standing, voice actors donated to a GoFundMe that has raised nearly $7,000 towards her legal expenses and posted TikTok videos under the #StandingWithBev hashtag warning users about the feature. Standing’s supporters say the TikTok lawsuit is not just about Standing’s voice — it’s about the future of an entire industry attempting to adapt to advances in machine learning.

Standing’s case crystallizes some performers’ worst fears about the control this technology gives companies over their voices. Her lawsuit claims TikTok neither paid nor notified her before using her likeness for its text-to-speech feature, and that some videos using it voiced “foul and offensive language,” causing “irreparable harm” to her reputation. Brands advertising on TikTok also had the text-to-speech voice at their disposal, meaning her voice could be used for explicitly commercial purposes. […] Laws protecting individuals from unauthorized clones of their voices are also in their infancy. Standing’s lawsuit invokes her right of publicity, which grants individuals the right to control commercial uses of their likeness, including their voice. In November 2020, New York became the first state to apply this right to digital replicas after years of advocacy from SAG-AFTRA, a performers’ union.
“We look to make sure that state rights of publicity are as strong as they can be, that any limitations on people being able to protect their image and voice are very narrowly drawn on first amendment lines,” Jeffrey Bennett, a general counsel for SAG-AFTRA, told Motherboard. “We look at this as a potentially great right of publicity case for this voice professional whose voice is being used in a commercial manner without her consent.”

YouTube’s Recommender AI Still a Horror Show, Finds Major Crowdsourced Study

For years YouTube’s video-recommending algorithm has stood accused of fuelling a grab bag of societal ills by feeding users an AI-amplified diet of hate speech, political extremism and/or conspiracy junk/disinformation for the profiteering motive of trying to keep billions of eyeballs stuck to its ad inventory. And while YouTube’s tech giant parent Google has, sporadically, responded to negative publicity flaring up around the algorithm’s antisocial recommendations — announcing a few policy tweaks or limiting/purging the odd hateful account — it’s not clear how far the platform’s penchant for promoting horribly unhealthy clickbait has actually been rebooted. The suspicion remains that the changes go nowhere near far enough.

New research published today by Mozilla backs that notion up, suggesting YouTube’s AI continues to puff up piles of “bottom-feeding”/low-grade/divisive/disinforming content — stuff that tries to grab eyeballs by triggering people’s sense of outrage, sowing division/polarization or spreading baseless/harmful disinformation — which in turn implies that YouTube’s problem with recommending terrible stuff is indeed systemic; a side effect of the platform’s rapacious appetite to harvest views to serve ads. That YouTube’s AI is still — per Mozilla’s study — behaving so badly also suggests Google has been pretty successful at fuzzing criticism with superficial claims of reform. The mainstay of its deflective success here is likely the primary protection mechanism of keeping the recommender engine’s algorithmic workings (and associated data) hidden from public view and external oversight — via the convenient shield of “commercial secrecy.” But regulation that could help crack open proprietary AI black boxes is now on the cards — at least in Europe.

Researchers Hid Malware Inside An AI’s ‘Neurons’ And It Worked Well

According to a new study, malware can be embedded directly into the artificial neurons that make up machine learning models in a way that keeps it from being detected. The neural network would even be able to continue performing its set tasks normally. Motherboard reports:
“As neural networks become more widely used, this method will be universal in delivering malware in the future,” the authors, from the University of the Chinese Academy of Sciences, write. Using real malware samples, their experiments found that replacing up to around 50 percent of the neurons in the AlexNet model — a benchmark-setting classic in the AI field — with malware still kept the model’s accuracy rate above 93.1 percent. The authors concluded that, using a technique called steganography, a 178MB AlexNet model can have up to 36.9MB of malware embedded into its structure without the change being detected. Some of the models were tested against 58 common antivirus systems and the malware was not detected.

Other methods of hacking into businesses or organizations, such as attaching malware to documents or files, often cannot deliver malicious software en masse without being detected. The new research, on the other hand, envisions a future where an organization may bring in an off-the-shelf machine learning model for any given task (say, a chat bot, or image detection) that could be loaded with malware while performing its task well enough not to arouse suspicion. According to the study, this is because AlexNet (like many machine learning models) is made up of millions of parameters and many complex layers of neurons including what are known as fully-connected “hidden” layers. By keeping the huge hidden layers in AlexNet completely intact, the researchers found that changing some other neurons had little effect on performance.
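
The paper’s exact encoding isn’t reproduced here, but the underlying steganographic idea (stashing arbitrary bytes in the low-order mantissa bits of float32 parameters, where they barely perturb the weights) can be sketched in a few lines of NumPy. The toy version below hides one byte per weight, uses a random stand-in for a hidden layer and a harmless placeholder payload, and assumes a little-endian host.

```python
import numpy as np

def embed_bytes(weights, payload):
    """Hide a byte string in the least-significant mantissa byte of float32 weights.

    Toy illustration of weight steganography (not the paper's exact scheme):
    overwriting only the lowest byte of each float changes its value by a
    relative error of at most ~3e-5, which a heavily over-parameterised
    layer can absorb with little effect on accuracy.
    """
    carrier = weights.astype(np.float32).ravel()            # independent copy
    if len(payload) > carrier.size:
        raise ValueError("payload larger than carrier layer")
    raw = carrier.view(np.uint8).reshape(-1, 4)              # 4 bytes per float32
    raw[: len(payload), 0] = np.frombuffer(payload, dtype=np.uint8)  # LSB on little-endian hosts
    return carrier.reshape(weights.shape)

def extract_bytes(weights, length):
    """Recover the hidden bytes from a carrier weight tensor."""
    raw = weights.astype(np.float32).ravel().view(np.uint8).reshape(-1, 4)
    return raw[:length, 0].tobytes()

# Demo on a random stand-in for a fully-connected hidden layer.
layer = np.random.randn(2048, 2048).astype(np.float32)
payload = b"harmless placeholder bytes, not malware"
stego = embed_bytes(layer, payload)
assert extract_bytes(stego, len(payload)) == payload
print("max weight change:", np.max(np.abs(stego - layer)))   # tiny perturbation
```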

According to the paper, in this approach the malware is “disassembled” when embedded into the network’s neurons, and assembled into functioning malware by a malicious receiver program that can also be used to download the poisoned model via an update. The malware can still be stopped if the target device verifies the model before launching it, according to the paper. It can also be detected using “traditional methods” like static and dynamic analysis. “Today it would not be simple to detect it by antivirus software, but this is only because nobody is looking in there,” cybersecurity researcher and consultant Dr. Lukasz Olejnik told Motherboard. Olejnik also warned that the malware extraction step in the process could itself risk detection: once the payload hidden in the model is assembled into a working executable, it could be picked up. The whole approach might also just be overkill.
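
The “verify the model before launching it” defense the paper mentions amounts to ordinary file-integrity checking. A minimal sketch, in which the file name and published digest are placeholders:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 and return its hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# The expected digest would be published by the model's distributor over a
# trusted channel; the value and file name here are placeholders.
EXPECTED_DIGEST = "0000000000000000000000000000000000000000000000000000000000000000"
model_path = "alexnet_weights.bin"

if sha256_of(model_path) != EXPECTED_DIGEST:
    raise RuntimeError("model file does not match the published checksum; refusing to load")
```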

YouTube Stars Were Offered Money to Spread Vaccine Misinformation

“A mysterious marketing agency secretly offered to pay social media stars to spread disinformation about Covid-19 vaccines,” reports the BBC.

“Their plan failed when the influencers went public about the attempt to recruit them.”
An influencer marketing agency called Fazze offered to pay [Mirko Drotschmann, a German YouTuber and journalist] to promote what it said was leaked information that suggested the death rate among people who had the Pfizer vaccine was almost three times that of the AstraZeneca jab. The information provided wasn’t true. It quickly became apparent to Mirko that he was being asked to spread disinformation to undermine public confidence in vaccines in the middle of a pandemic. “I was shocked,” says Mirko, “then I was curious: what’s behind all that?” In France, science YouTuber Léo Grasset received a similar offer. The agency offered him 2,000 euros if he would take part.

Fazze said it was acting for a client who wished to remain anonymous…

Both Léo and Mirko were appalled by the false claims. They pretended to be interested in order to try to find out more and were provided with detailed instructions about what they should say in their videos. In stilted English, the brief instructed them to “Act like you have the passion and interest in this topic.” It told them not to mention the video had a sponsor — and instead pretend they were spontaneously giving advice out of concern for their viewers… Since Léo and Mirko blew the whistle, at least four other influencers in France and Germany have gone public to reveal they also rejected Fazze’s attempts to recruit them.

But German journalist Daniel Laufer has identified two influencers who may have taken up the offer.

But who’s behind the mysterious influencer marketing agency?
Fazze is part of AdNow, a digital marketing company registered in both Russia and the UK. The BBC has made multiple attempts to contact AdNow by phone, email and even a letter couriered to their Moscow headquarters, but they have not responded. Eventually we managed to contact Ewan Tolladay, one of two directors of the British arm of AdNow, who lives in Durham. Mr. Tolladay said he had very little to do with Fazze, which he said was a joint venture between his fellow director — a Russian man called Stanislav Fesenko — and another person whose identity he didn’t know… Both the French and German authorities have launched investigations into Fazze’s approaches to influencers. But the identity of the agency’s mystery client remains unclear.

There has been speculation about the Russian connections to this scandal and the interests of the Russian state in promoting its own vaccine — Sputnik V.

French YouTuber Léo Grasset believes we’ll see more attempts to manipulate public opinion, especially among young people — apparently because it’s incredibly easy.

“Just spend the same money on TikTok creators, YouTube creators,” he tells the BBC. “The whole ecosystem is perfectly built for maximum efficiency of disinformation right now.”

For Millions of Americans, Unemployment Benefits Require Facial Recognition Scanning

Millions of Americans “are being instructed to use ID.me, along with its facial recognition software, to get their unemployment benefits,” reports CNN. The software compares their photo ID with a selfie video they record on their phone — but some privacy advocates are concerned.
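
ID.me’s matching pipeline is proprietary, but the 1:1 verification step described here (compare a face from the selfie video against the face on the photo ID) is conventionally done by comparing face embeddings. A rough sketch using the open-source face_recognition library follows; the file names and the 0.6 threshold are assumptions, not ID.me’s values.

```python
# Generic 1:1 face verification, NOT ID.me's actual implementation.
# pip install face_recognition ; file names below are placeholders.
import numpy as np
import face_recognition

selfie = face_recognition.load_image_file("selfie_frame.jpg")         # frame from the selfie video
id_photo = face_recognition.load_image_file("photo_id_portrait.jpg")  # portrait cropped from the ID

selfie_encs = face_recognition.face_encodings(selfie)
id_encs = face_recognition.face_encodings(id_photo)
if not selfie_encs or not id_encs:
    # A common real-world failure mode: no face detected in one of the images.
    raise RuntimeError("could not find a face in one of the images")

# Small distance between the 128-d embeddings suggests the same person;
# 0.6 is a commonly used default threshold, assumed here.
distance = np.linalg.norm(selfie_encs[0] - id_encs[0])
print("match" if distance < 0.6 else "no match", round(float(distance), 3))
```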

A rapidly growing number of U.S. states, including Colorado, California and New York, turned to ID.me in hopes of cutting down on a surge of fraudulent claims for state and federal benefits that cropped up during the pandemic alongside a tidal wave of authentic unemployment claims. As of this month, 27 states’ unemployment agencies had entered contracts with ID.me, according to the company, with 25 of them already using its technology. ID.me said it is in talks with seven more…

The company’s rapid advance at state unemployment agencies marks the latest chapter in the story of facial recognition software’s spread across the United States. It also highlights how this controversial technology gained a foothold during the pandemic and now appears destined to remain part of our lives for the foreseeable future…

Several ID.me users told CNN Business about problems they had verifying their identities with the company, ranging from the facial recognition technology failing to recognize their faces to waiting for hours to reach a human for a video chat after encountering problems with the technology. A number of people who claim to have had issues with ID.me have taken to social media to beg the company for help with verification, express their own concerns about its face-data collection or simply rant, often in response to ID.me’s own posts on Twitter… From ID.me’s perspective, its service is making it easier for a wide range of people to access essential government services, as it avoids the common practice of using information gleaned from data brokers and credit bureaus as a means of checking identities. The company said this lets it give a green light to those who don’t have a credit history, or may have changed their name, for instance — people who might otherwise have more trouble getting verified.

However, it doesn’t sit well with the worker advocates, privacy advocates and civil rights groups interviewed by CNN Business. They have concerns about the facial recognition technology itself and about the ID.me verification process’s reliance on access to a smartphone or computer and the internet, which may be out of reach for the people to whom unemployment dollars are most critical… ID.me said it does not sell user data — which includes biometric and related information such as selfies people upload, data related to facial analyses, and recordings of video chats users participate in with ID.me — but it does keep it. Biometric data, like the facial geometry produced from a user’s selfie, may be kept for years after a user closes their account… In March, ID.me announced raising $100 million in funding from investors including hedge fund Viking Global Investors and CapitalG, which is Google parent company Alphabet’s independent growth fund. With that funding round, ID.me said it was valued at $1.5 billion… “We’re verifying more than 1% of the American adult population each quarter, and that’s starting to compress more to like 45 or 50 days,” said ID.me founder and CEO Blake Hall. The company has more than 50 million users, he said, and signs up more than 230,000 new ones each day.

CNN also quotes a man who complains the state never gave him an option. “If I wanted unemployment, I had no choice but to do this.”

Police Are Telling ShotSpotter To Alter Evidence From Gunshot-Detecting AI

On May 31 last year, 25-year-old Safarain Herring was shot in the head and dropped off at St. Bernard Hospital in Chicago by a man named Michael Williams. He died two days later. Chicago police eventually arrested the 64-year-old Williams and charged him with murder (Williams maintains that Herring was hit in a drive-by shooting). A key piece of evidence in the case is video surveillance footage showing Williams’ car stopped on the 6300 block of South Stony Island Avenue at 11:46 p.m. – the time and location where police say they know Herring was shot. How did they know that’s where the shooting happened? Police said ShotSpotter, a surveillance system that uses hidden microphone sensors to detect the sound and location of gunshots, generated an alert for that time and place. Except that’s not entirely true, according to recent court filings.

That night, 19 ShotSpotter sensors detected a percussive sound at 11:46 p.m. and determined the location to be 5700 South Lake Shore Drive – a mile away from the site where prosecutors say Williams committed the murder, according to a motion filed by Williams’ public defender. The company’s algorithms initially classified the sound as a firework. That weekend had seen widespread protests in Chicago in response to George Floyd’s murder, and some of those protesting lit fireworks. But after the 11:46 p.m. alert came in, a ShotSpotter analyst manually overrode the algorithms and “reclassified” the sound as a gunshot. Then, months later and after “post-processing,” another ShotSpotter analyst changed the alert’s coordinates to a location on South Stony Island Avenue near where Williams’ car was seen on camera. “Through this human-involved method, the ShotSpotter output in this case was dramatically transformed from data that did not support criminal charges of any kind to data that now forms the centerpiece of the prosecution’s murder case against Mr. Williams,” the public defender wrote in the motion.
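
ShotSpotter’s classification and location algorithms are proprietary, but the underlying physics of locating an impulsive sound with a microphone array is standard multilateration: each sensor timestamps the bang, and the differences in arrival times constrain where it came from. Below is a generic least-squares sketch with synthetic sensor positions and times, not ShotSpotter’s method.

```python
# Generic time-difference-of-arrival (TDOA) localization, NOT ShotSpotter's
# proprietary algorithm. Sensor layout, times, and source are synthetic.
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C

def locate(sensor_xy, arrival_times):
    """Estimate the (x, y) origin of an impulsive sound and its emission time t0
    from its arrival times at known sensor positions, via nonlinear least squares."""
    sensor_xy = np.asarray(sensor_xy, dtype=float)
    arrival_times = np.asarray(arrival_times, dtype=float)

    def residuals(params):
        x, y, t0 = params
        dists = np.hypot(sensor_xy[:, 0] - x, sensor_xy[:, 1] - y)
        return dists - SPEED_OF_SOUND * (arrival_times - t0)

    guess = [*sensor_xy.mean(axis=0), arrival_times.min()]
    return least_squares(residuals, guess).x

# Synthetic check: five sensors, a source at (120, -40) metres, emitted at t0 = 0.
sensors = [(0, 0), (300, 0), (0, 300), (300, 300), (150, -200)]
true_source = np.array([120.0, -40.0])
times = [np.linalg.norm(true_source - s) / SPEED_OF_SOUND for s in sensors]
print(locate(sensors, times))  # ~ [120.0, -40.0, 0.0]
```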

The document is what’s known as a Frye motion – a request for a judge to examine and rule on whether a particular forensic method is scientifically valid enough to be entered as evidence. Rather than defend ShotSpotter’s technology and its employees’ actions in a Frye hearing, the prosecutors withdrew all ShotSpotter evidence against Williams. The case isn’t an anomaly, and the pattern it represents could have huge ramifications for ShotSpotter in Chicago, where the technology generates an average of 21,000 alerts each year. The technology is also currently in use in more than 100 cities. Motherboard’s review of court documents from the Williams case and other trials in Chicago and New York State, including testimony from ShotSpotter’s favored expert witness, suggests that the company’s analysts frequently modify alerts at the request of police departments – some of which appear to be grasping for evidence that supports their narrative of events.