Resources

SA govt trials home quarantine app with facial recognition, GPS tracking

The South Australian government is set to trial a smartphone app that uses facial recognition and geolocation to confirm a person’s location during home quarantine.

The government will this week trial the app, known as Home Quarantine SA, with 50 residents who are required to undertake 14 days of quarantine on return from interstate locations.

The app will allow trial participants to “confirm their location with regular check-ins” through a live face check-in feature that uses facial recognition and location verification.

It gives individuals 15 minutes to respond to the random location verification requests at their registered location.
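
The check-in described above amounts to a three-part test on the server side: the reply must arrive within the article’s 15-minute window, the reported GPS fix must fall near the registered quarantine address, and the selfie must match the enrolled face template. The sketch below illustrates that rule; the geofence radius, similarity threshold, and all function names are assumptions for illustration rather than details of Home Quarantine SA.

```python
import math
from datetime import datetime, timedelta

# Illustrative thresholds -- not taken from the app itself.
RESPONSE_WINDOW = timedelta(minutes=15)   # the article's 15-minute deadline
MAX_DISTANCE_M = 100                      # assumed geofence radius around the registered address
FACE_MATCH_THRESHOLD = 0.8                # assumed face-similarity cut-off

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def verify_checkin(prompt_sent_at, responded_at, reported_pos, registered_pos, face_similarity):
    """True only if the reply is on time, near the registered address,
    and the selfie matches the enrolled face template."""
    on_time = responded_at - prompt_sent_at <= RESPONSE_WINDOW
    near_home = haversine_m(*reported_pos, *registered_pos) <= MAX_DISTANCE_M
    face_ok = face_similarity >= FACE_MATCH_THRESHOLD
    return on_time and near_home and face_ok

# Example: a reply 12 minutes after a random prompt, roughly 25 m from home, good face match.
sent = datetime(2021, 8, 23, 14, 0)
print(verify_checkin(sent, sent + timedelta(minutes=12),
                     (-34.9287, 138.6000), (-34.9285, 138.5998), 0.91))  # True
```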

US Prisons Mull AI To Analyze Inmate Phone Calls

A key House of Representatives panel has pressed for a report to study the use of artificial intelligence to analyze prisoners’ phone calls. “But prisoners’ advocates and inmates’ families say relying on AI to interpret communications opens up the system to mistakes, misunderstandings and racial bias,” reports Reuters. From the report:
The call for the Department of Justice (DOJ) to further explore the technology, to help prevent violent crime and suicide, accompanies an $81 billion-plus spending bill to fund the DOJ and other federal agencies in 2022 that the Appropriations Committee passed last month. The technology can automatically transcribe inmates’ phone calls, analyzing their patterns of communication and flagging certain words or phrases, including slang, that officials pre-program into the system. A House Democratic aide said in an emailed statement they were encouraging the DOJ “to engage with stakeholders in the course of examining the feasibility of utilizing such a system.”

Several state and local facilities across the country have already started using the tech, including in Alabama, Georgia and New York. The House panel wants the DOJ to look into potentially leveraging the technology for federal use and to identify gaps or shortcomings in the information it produces. Privacy groups say the technology could amplify racial bias in the justice system and unfairly subject prisoners to unaccountable artificial intelligence. Proponents dispute such criticisms, saying the tech is a vital time-saving tool for law enforcement and does not target specific groups.
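
As a rough illustration of the flagging step described in the excerpt (the transcription itself would come from a speech-to-text system), the sketch below scans transcripts for pre-programmed terms; the watch-list, call IDs, and function names are invented for this example. Note how an innocuous use of a flagged word (“package”) still raises an alert, the kind of misunderstanding prisoners’ advocates warn about.

```python
import re

# Assumed, illustrative watch-list -- in practice officials pre-program
# their own words, phrases, and slang into the vendor's system.
WATCHLIST = {"shank", "hit", "package", "burner"}

def flag_transcript(call_id: str, transcript: str, watchlist=WATCHLIST):
    """Return (call_id, matched terms) for any pre-programmed word found
    in an automatically transcribed call, or None if nothing matches."""
    tokens = set(re.findall(r"[a-z']+", transcript.lower()))
    hits = sorted(tokens & watchlist)
    return (call_id, hits) if hits else None

calls = {
    "call-001": "tell him the package gets there friday",
    "call-002": "put money on my books and call mom",
}
for cid, text in calls.items():
    result = flag_transcript(cid, text)
    if result:
        print(result)   # ('call-001', ['package'])
```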

Uber Asked Contractor To Allow Video Surveillance In Employee Homes, Bedrooms

Teleperformance, one of the world’s largest call center companies, is reportedly requiring some employees to consent to video monitoring in their homes. Employees in Colombia told NBC News that their new contract granted the company the right to use AI-powered cameras to observe and record their workspaces. The contract also requires employees to share biometric data like fingerprints and photos of themselves, and workers have to agree to share data and images that may include children under 18.

Teleperformance employs over 380,000 people in 83 countries to provide call center services for a range of companies, including Amazon, Apple, and Uber. A company spokesperson told NBC that it is “constantly looking for ways to enhance the Teleperformance Colombia experience for both our employees and our customers, with privacy and respect as key factors in everything we do.” Amazon and Apple said that they did not ask Teleperformance for this extra monitoring, and an Apple spokesperson said the company forbids video monitoring of employees by suppliers. A recent Apple audit reportedly found Teleperformance in compliance with this requirement. But Uber apparently requested the ability to monitor some workers. Uber said it wouldn’t observe the entire workforce, but the company did not specify which employees would be subject to the new policies. The ride-sharing company asked for the monitoring of Teleperformance’s remote employees because call center staff have access to customers’ credit cards and trip details, an Uber spokesperson told NBC News.

Earth is Warming Faster Than Previously Thought, and the Window is Closing To Avoid Catastrophic Outcomes

As the world battles historic droughts, landscape-altering wildfires and deadly floods, a landmark report from global scientists says the window is rapidly closing to cut our reliance on fossil fuels and avoid catastrophic changes that would transform life as we know it. The state-of-the-science report from the United Nations’ Intergovernmental Panel on Climate Change says the world has rapidly warmed 1.1 degrees Celsius higher than pre-industrial levels, and is now careening toward 1.5 degrees — a critical threshold that world leaders agreed warming should remain below to avoid worsening impacts.

Only by making deep cuts to greenhouse gas emissions, while also removing carbon dioxide from the atmosphere, can we halt the precipitous trend. “Bottom line is that we have zero years left to avoid dangerous climate change, because it’s here,” Michael E. Mann, a lead author of the IPCC’s 2001 report, told CNN. Unlike previous assessments, Monday’s report concludes it is “unequivocal” that humans have caused the climate crisis and confirms that “widespread and rapid changes” have already occurred, some of them irreversibly.

That is due in part to the breakneck pace at which the planet has recently been warming, faster than scientists have previously observed. Since 2018, when the panel published a special report on the significance of 1.5 degrees of warming, greenhouse gas emissions have continued mostly unabated and have pushed global temperatures higher. Even under the IPCC’s most optimistic scenario, in which the world’s emissions begin to drop sharply today and are reduced to net zero by 2050, global temperature will still peak above the 1.5-degree threshold before falling. In a statement, UN Secretary-General António Guterres called the report “a code red for humanity,” and noted the 1.5-degree threshold is “perilously close.” “The only way to prevent exceeding this threshold is by urgently stepping up our efforts, and pursuing the most ambitious path,” Guterres said.

Leaked Document Says Google Fired Dozens of Employees for Data Misuse

Google fired dozens of employees between 2018 and 2020 for abusing their access to the company’s tools or data, with some workers accused of accessing Google user or employee data, according to an internal Google document obtained by Motherboard. From a report:
The document provides concrete figures on an often delicate part of a tech giant’s operations: investigations into how the company’s own employees leverage their position inside the company to steal, leak, or abuse data they may have access to. Insider abuse is a problem across the tech industry. Motherboard previously uncovered instances at Facebook, Snapchat, and MySpace, with employees in some cases using their access to stalk or otherwise spy on users.

The document says that Google terminated 36 employees in 2020 for security-related issues. Eighty-six percent of all security-related allegations against employees included mishandling of confidential information, such as the transfer of internal-only information to outside parties. Ten percent of all allegations in 2020 concerned misuse of systems, which can include accessing user or employee data in violation of Google’s own policies, helping others to access that data, or modifying or deleting user or employee data, according to the document. In 2019, that figure was 13 percent of all security allegations.

Facebook Used Facial Recognition Without Consent 200K Times, Says Watchdog

Another [$22,000] penalty was issued for illegally collecting social security numbers, not issuing notifications regarding personal information management changes, and other missteps. Facebook has been ordered to destroy facial information collected without consent or obtain consent, and was prohibited from processing identity numbers without legal basis. It was also ordered to destroy collected data and disclose contents related to foreign migration of personal information. Zuck’s brainchild was then told to make it easier for users to check legal notices regarding personal information. The fine is the second-largest ever issued by the organization, the largest ever also going to Facebook. In November 2020 the Social Network was fined [$5.7 million] for passing on personal data to other operators without user permission.

Netflix’s fine was a paltry [$188,000], with that sum imposed for collecting data from five million people without their consent, plus another [$2,700] for not disclosing international transfer of the data. Google got off the easiest, with just a “recommendation” to improve its personal data handling processes and make legal notices more precise. The PIPC said it is not done investigating methods of collecting personal information from overseas businesses and will continue with a legal review.

Researchers Create ‘Master Faces’ To Bypass Facial Recognition

Researchers have demonstrated a method to create “master faces,” computer generated faces that act like master keys for facial recognition systems, and can impersonate several identities with what the researchers claim is a high probability of success. In their paper (PDF), researchers at the Blavatnik School of Computer Science and the School of Electrical Engineering in Tel Aviv detail how they successfully created nine “master key” faces that are able to impersonate almost half the faces in a dataset of three leading face recognition systems. The researchers say their results show these master faces can successfully impersonate over 40 percent of the population in these systems without any additional information or data of the person they are identifying.

The researchers tested their methods against three deep face recognition systems — Dlib, FaceNet, and SphereFace. Lead author Ron Shmelkin told Motherboard that they used these systems because they are capable of recognizing “high-level semantic features” of the faces that are more sophisticated than just skin color or lighting effects. The researchers used a StyleGAN to generate the faces and then used an evolutionary algorithm and neural network to optimize and predict their success. The evolutionary strategy then creates iterations, or generations, of candidates of varying success rates. The researchers then used the algorithm to train a neural network to classify the best candidates as the most promising ones. This is what teaches it to predict candidates’ success and, in turn, direct the algorithm to generate better candidates with a higher probability of passing. The researchers even predict that their master faces could be animated using deepfake technology to bypass liveness detection, which is used to determine whether a biometric sample is real or fake.
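
The search loop can be pictured roughly as follows. This is a stripped-down sketch, not the authors’ pipeline: it replaces StyleGAN, the real face-recognition embeddings, and the trained success-predictor network with random stand-ins, and uses a plain (mu, lambda) evolution strategy, but the shape of the optimization (generate candidates, score each by how much of the dataset it matches, keep the best and mutate them) is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the real components: in the paper, candidates are StyleGAN latent
# vectors and fitness is measured against Dlib/FaceNet/SphereFace embeddings of a
# real face dataset. Here both are simulated with random unit vectors.
LATENT_DIM = 64
dataset_embeddings = rng.normal(size=(1000, LATENT_DIM))
dataset_embeddings /= np.linalg.norm(dataset_embeddings, axis=1, keepdims=True)
MATCH_THRESHOLD = 0.3   # assumed cosine-similarity cut-off for a "match"

def embed(candidate):
    """Placeholder for: render the face with StyleGAN, then run the face recognizer."""
    return candidate / np.linalg.norm(candidate)

def coverage(candidate):
    """Fitness: fraction of the dataset this single face would be accepted as."""
    sims = dataset_embeddings @ embed(candidate)
    return float((sims > MATCH_THRESHOLD).mean())

def evolve(generations=50, pop_size=40, elite=8, sigma=0.3):
    """Bare-bones (mu, lambda) evolution strategy over latent vectors."""
    parents = rng.normal(size=(elite, LATENT_DIM))
    for _ in range(generations):
        children = np.repeat(parents, pop_size // elite, axis=0)
        children = children + sigma * rng.normal(size=children.shape)
        scores = np.array([coverage(c) for c in children])
        parents = children[np.argsort(scores)[-elite:]]   # keep the highest-coverage faces
    best = parents[-1]
    return best, coverage(best)

master, frac = evolve()
print(f"best candidate matches {frac:.1%} of the simulated dataset")
```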

Homeless Encampment Grows On Apple Property In Silicon Valley

A large homeless encampment is growing on the site Apple earmarked for its North San Jose campus, two years after Apple made waves with a $2.5 billion pledge to combat the Bay Area’s affordable housing and homelessness crisis. What started as a few RVs parked on the side of Component Drive has grown over the past year into a sprawling camp of dozens of people, a maze of broken-down vehicles and a massive amount of trash scattered across the vacant, Apple-owned property. People with nowhere else to go live there in tents, RVs and wooden structures they built themselves. At least two children call the camp home.

Apple is trying to figure out what to do, but it’s a tough situation. Clearing the camp likely will be difficult both logistically — it’s more challenging to remove structures and vehicles that don’t run than tents — and ethically — there are few places for the displaced residents to go. Apple is “in talks with the city on a solution,” company spokeswoman Chloe Sanchez Sweet wrote in an email, without providing additional details.

The vacant land off Component Drive figured into Apple’s $2.5 billion commitment. Apple originally bought the land in a push to acquire real estate in North San Jose for a new tech campus, but so far, the company hasn’t done much to develop it. In 2019, the tech company promised to make $300 million of land it owns in San Jose available for new affordable housing — including a portion of the Component Drive property. But it’s unclear when anything might be built.

Police In At Least 24 Countries Have Used Clearview AI. Find Out Which Ones Here.

Law enforcement agencies and government organizations from 24 countries outside the United States used a controversial facial recognition technology called Clearview AI, according to internal company data reviewed by BuzzFeed News.

That data, which runs up until February 2020, shows that police departments, prosecutors’ offices, universities, and interior ministries from around the world ran nearly 14,000 searches with Clearview AI’s software. At many law enforcement agencies from Canada to Finland, officers used the software without their higher-ups’ knowledge or permission. After receiving questions from BuzzFeed News, some organizations admitted that the technology had been used without leadership oversight.

Google Says Staff Have No Right to Protest Its Choice of Clients

Google employees have no legal right to protest the company’s choice of clients, the internet giant told a judge weighing the U.S. government’s allegations that its firings of activists violated the National Labor Relations Act.

“Even if Google had, for the sake of argument, terminated the employees for their protest activities — for protesting their choice of customers — this would not violate the Act,” Google’s attorney Al Latham said in his opening statement Tuesday at a labor board trial.

National Labor Relations Board prosecutors have accused the Alphabet Inc. unit of violating federal law by illegally firing five employees for their activism. Three of those workers’ claims had originally been dismissed under President Donald Trump, because agency prosecutors concluded that their opposition to the company collaborating with immigration enforcement wasn’t legally protected, according to their lawyer. But that decision was reversed after President Joe Biden fired and replaced the labor board’s general counsel.

Google has been roiled over the past four years by a wave of activism by employees challenging management over issues including treatment of sub-contracted staff, handling of sexual harassment, and a contract with the U.S. Customs and Border Protection agency, which some of the fired workers accessed internal information about and circulated a petition against.

Google has denied wrongdoing, saying in a Monday statement that it encourages “open discussion and debate” but terminated staff in response to violations of its data security policies. “Google terminated these employees not because of their protest as such, but because in the pursuit of their protest, they accessed highly confidential information that they had no right to access,” its attorney told the judge Tuesday.

Your Credit Score Should Be Based On Your Web History, IMF Says

In a new blog post for the International Monetary Fund, four researchers presented their findings from a working paper that examines the current relationship between finance and tech as well as its potential future. Gazing into their crystal ball, the researchers see the possibility of using the data from your browsing, search, and purchase history to create a more accurate mechanism for determining the credit rating of an individual or business. They believe that this approach could result in greater lending to borrowers who would potentially be denied by traditional financial institutions. At its heart, the paper is trying to wrestle with the dawning notion that the institutional banking system is facing a serious threat from tech companies like Google, Facebook, and Apple. The researchers identify two key areas in which this is true: Tech companies have greater access to soft information, and messaging platforms can take the place of the physical locations that banks rely on for meeting with customers.

The concept of using your web history to inform credit ratings is framed around the notion that lenders rely on hard data that might obscure the worthiness of a borrower or paint an unnecessarily dire picture during hard times. Citing soft data points like “the type of browser and hardware used to access the internet, the history of online searches and purchases” that could be incorporated into evaluating a borrower, the researchers believe that when a lender has a more intimate relationship with the potential client’s history, they might be more willing to cut them some slack. […] But how would all this data be incorporated into credit ratings? Machine learning, of course. It’s black boxes all the way down. The researchers acknowledge that there will be privacy and policy concerns related to incorporating this kind of soft data into credit analysis. And they do little to explain how this might work in practice.
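
The blog post stops at “machine learning,” so the sketch below is only a toy illustration of the idea on synthetic data: train the same scoring model with and without browsing/purchase-style “soft” features and compare discriminative power. All feature names, effect sizes, and the model choice are invented for the example; nothing here reflects the researchers’ actual method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 5000

# Synthetic "hard" and "soft" borrower features (all invented for illustration).
income = rng.normal(50, 15, n)                 # hard data: income (k$)
debt_ratio = rng.uniform(0, 1, n)              # hard data: debt-to-income ratio
device_age = rng.uniform(0, 6, n)              # soft data: age of device/browser (years)
late_night_purchases = rng.poisson(2, n)       # soft data: a purchase-history signal

# Synthetic default outcome that mixes both kinds of signal.
logit = -2 + 2.5 * debt_ratio - 0.02 * income + 0.15 * late_night_purchases + 0.1 * device_age
default = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))

hard = np.column_stack([income, debt_ratio])
both = np.column_stack([income, debt_ratio, device_age, late_night_purchases])

for name, X in [("hard data only", hard), ("hard + soft data", both)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, default, test_size=0.3, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```

On this synthetic data the soft features lift the score, which is the researchers’ pitch; the privacy and fairness concerns they acknowledge are exactly what a toy comparison like this does not capture.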

Amazon Plans To Open Its Own Department Stores

The first stores are expected to be located in Ohio and California and will be about 30,000 square feet in size, which would be smaller than the typical department store, the Journal reported. The e-commerce giant, which last year had $386 billion in sales, has been expanding into physical retail in recent years, opening grocery stores, book shops and specialty pop-ups around the country. Analysts say its latest foray — while unexpected — provides an opportunity to reach customers in a new way.

“More stores bolster Amazon’s whole ecosystem and flywheel,” said Neil Saunders, managing director of GlobalData, a research and consulting firm that tracks the retail market. “They also allow Amazon to gather data and to understand consumer preferences better — understanding that can, in turn, be used to improve the whole proposition.” Traditional department stores, he noted, have been declining for years because of a “failure to innovate and adapt.” Stores such as Macy’s, J.C. Penney and Kohl’s, which made up about 15% of retail sales in 1985, now account for less than 3%, Saunders said.

The pandemic has created new challenges for the nation’s department stores, tipping a number of storied chains, including Neiman Marcus, J.C. Penney and Lord & Taylor into bankruptcy. Nearly 200 department stores have permanently closed since last year, and another 800 — or about half the country’s remaining mall-based locations — are expected to shutter by the end of 2025, according to commercial real estate firm Green Street. But for Amazon, this could be an opportunity to shake things up: Its 30,000-square-foot department stores would be about one-third the size of a traditional mall anchor, mirroring plans by many of the country’s retailers to open smaller, more easily-accessible stores. “If it gets rolled out in a serious way, it is very bad news for traditional department stores,” Saunders said. “The lack of innovation by traditional department stores means their defenses are very weak so the last thing they need is to fend off a new invader to their space.”
“The move makes sense,” says Bloomberg Intelligence senior retail analyst Poonam Goyal. “It’ll extend Amazon’s reach beyond Amazon Go, Whole Foods and Kohl’s while also opening up more distribution points. At 30,000 square feet, the locations will be more appealing than traditional on-mall department stores that are three times bigger. Off-mall strip centers could be sought, given their better traffic profile.”

People Now Spend More at Amazon Than at Walmart

Propelled in part by surging demand during the pandemic, people spent more than $610 billion on Amazon over the 12 months ending in June, according to Wall Street estimates compiled by the financial research firm FactSet. Walmart on Tuesday posted sales of $566 billion for the 12 months ending in July. Alibaba, the giant online Chinese retailer, is the world’s top seller. Neither Amazon nor Walmart is a dominant player in China.

In racing past Walmart, Amazon has dethroned one of the most successful — and feared — companies of recent decades. Walmart perfected a thriving big-box model of retailing that squeezed every possible penny out of its costs, which drove down prices and vanquished competitors. But even with all of that efficiency and power, the quest to dominate today’s retail environment is being won on the internet. And no company has taken better advantage of that than Amazon. Indeed, the company’s delivery (many items land on doorsteps in a day or two) and wide selection first drew customers to online shopping, and it has kept them buying more there ever since. It has also made Jeff Bezos, the company’s founder, one of the richest people in the world.

What Happens When Big Tech’s Datacenters Come to Small Towns?

Few big tech companies that are building and hiring across America bring that wealth with them when they set up in new communities. Instead, they hire armies of low-paid contractors, many of whom are not guaranteed a job from one month to the next; some of the contracting companies have a history of alleged mistreatment of workers. Nor do local governments share in the companies’ wealth; instead, the tech giants negotiate deals — the details protected by non-disclosure agreements — that exempt them from paying taxes that would fund schools, roads and fire departments….

Globally, by the end of 2020, there were nearly 600 “hyperscale” data centers, where a single company runs thousands of servers and rents out cloud space to customers. That’s more than double the number from 2015. Amazon, Google and Microsoft account for more than half of those hyperscale centers, making data centers one more field dominated by America’s richest and biggest companies… Google in March said it was “investing in America” with a plan to spend $7 billion across 19 states to build more data centers and offices. Microsoft said in April that it plans to build 50 to 100 data centers each year for the foreseeable future. Amazon recently got approval to build 1.75 million square feet of data-center space in Northern Virginia, beyond the 50 data centers it already operates there. Facebook said this year it would spend billions to expand data centers in Iowa, Georgia and Utah; in March it said it was adding an 11th building to its largest data-center facility in rural Prineville, Oregon…

Facebook has spent more than $2 billion expanding its operations in Prineville, but because of the tax incentives it negotiated with local officials, the company paid a total of just $119,403.42 in taxes to Crook County last year, according to the County Assessor’s list of top taxpayers. That’s less than half the taxes paid by Brasada Ranch, a local resort. And according to the Oregon Bureau of Labor and Industries, the data center has been the subject of numerous labor complaints… “I’ve spent way too much of my life watching city councils say, ‘We need a big tech company to show that we’re future-focused,'” says Sebastian Moss, the editor of Data Center Dynamics, which tracks the industry. Towns will give away tax breaks worth hundreds of millions of dollars, his reporting has found, and then express gratitude toward tech companies that have donated a few thousand computers — worth a fraction of the tax breaks — to their cash-strapped school systems. “I sometimes wonder if they’re preying on desperation, going to places that are struggling.”

Communities give up more than tax breaks when they welcome tech companies. Data centers use huge amounts of water to cool computer equipment, yet they’re being built in the drought-stricken American West.

The article cites Bureau of Labor Statistics figures showing that 373,300 Americans were working in data processing, hosting, and related services in June — up 52% from 10 years ago.

Activist Raided By London Police After Downloading Docs Found On Google Search

The raid by four Metropolitan Police constables took place after Southwark campaigner Robert Hutchinson was reportedly accused of illegally entering a password-protected area of a website. “I was searching in Google and found links to board meeting minutes,” he told The Register. “Board reports, none of which were marked confidential. So I have no question that it was in the public domain.” The Southwark News reported that Hutchinson was arrested at 8.20am on 10 June this year at home following allegations made by Leathermarket Community Benefit Society (CBS). The society is a property development firm that wants to build flats over a children’s caged ball court in the south London borough, something Hutchinson “vocally opposes,” according to the local paper.

“There’s a directory, which you need to enter a password and a username to get into. But documents from that area were being published on Google,” explained Hutchinson. “I didn’t see a page saying ‘this is the directors’ area’ or anything like that, the documents were just available. They were just linked directly.” Police said in a statement that Hutchinson was arrested on suspicion of breaking section 1 of Britain’s Computer Misuse Act 1990 “between the 17th and 24th February 2021 and had published documents from the website on social media.” They added: “He was taken into custody and later released under investigation. Following a review of all available evidence, it was determined no offences had been committed and no further action was taken.”

Hutchinson said his identification by Leathermarket and subsequent arrest raised questions in his mind, saying police confirmed to him that the company had handed over an access log containing IP addresses: “Now, how that ended up with me being in the frame, I don’t know. There’s part of this that doesn’t add up…” While the property business did not respond to The Register’s request for comment at the time of publication, in a statement given to the Southwark News it said: “When it came to the CBS’s attention that confidential information had been accessed and subsequently shared via Twitter, the CBS made a general report of the data breach to the police, who requested a full log of visitor access to the website before deciding whether or not to progress. The police carried out their own independent investigation into who accessed the documents and how, and have now concluded their investigation.” The prepared police statement did not explain whether investigators tested Leathermarket CBS’s version of events before arresting the campaigner.

Evernote Quietly Disappeared From an Anti-Surveillance Lobbying Group’s Website

In 2013, eight tech companies were accused of funneling their users’ data to the U.S. National Security Agency under the so-called PRISM program, according to highly classified government documents leaked by NSA whistleblower Edward Snowden. Six months later, the tech companies formed a coalition under the name Reform Government Surveillance, which as the name would suggest was to lobby lawmakers for reforms to government surveillance laws. The idea was simple enough: to call on lawmakers to limit surveillance to targeted threats rather than conduct a dragnet collection of Americans’ private data, provide greater oversight and allow companies to be more transparent about the kinds of secret orders for user data that they receive.

Apple, Facebook, Google, LinkedIn, Microsoft, Twitter, Yahoo and AOL were the founding members of Reform Government Surveillance, or RGS, and over the years added Amazon, Dropbox, Evernote, Snap and Zoom as members. But then sometime in June 2019, Evernote quietly disappeared from the RGS website without warning. What’s even more strange is that nobody noticed for two years, not even Evernote. “We hadn’t realized our logo had been removed from the Reform Government Surveillance website,” said an Evernote spokesperson, when reached for comment by TechCrunch. “We are still members.”

TikTok Lawsuit Highlights How AI Is Screwing Over Voice Actors

With only 30 minutes of audio, companies can now create a digital clone of your voice and make it say words you never said. Using machine learning, voice AI companies like VocaliD can create synthetic voices from a person’s recorded speech — adopting unique qualities like speaking rhythm, pronunciation of consonants and vowels, and intonation. For tech companies, the ability to generate any sentence with a realistic-sounding human voice is an exciting, cost-saving frontier. But for the voice actors whose recordings form the foundation of text-to-speech (TTS) voices, this technology threatens to disrupt their livelihoods, raising questions about fair compensation and human agency in the age of AI.
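
To give a sense of how low the barrier has become, the sketch below uses the open-source Coqui TTS library (not VocaliD’s or TikTok’s proprietary systems, which are not public) to clone a voice from a short reference recording. The model name follows Coqui’s documentation as I understand it, and the file paths are placeholders; treat this as an assumption-laden sketch rather than a vetted recipe.

```python
# Assumes `pip install TTS` (Coqui TTS) and a short recording of the target speaker.
from TTS.api import TTS

# XTTS v2 is a multilingual model that clones a voice from a few seconds of
# reference audio supplied at synthesis time; no separate fine-tuning step.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

tts.tts_to_file(
    text="Words the speaker never actually said.",
    speaker_wav="reference_recording.wav",   # placeholder sample of the target voice
    language="en",
    file_path="cloned_output.wav",
)
```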

At the center of this reckoning is voice actress Bev Standing, who is suing TikTok after alleging the company used her voice for its text-to-speech feature without compensation or consent. This is not the first case like this; voice actress Susan Bennett discovered that audio she recorded for another company was repurposed to be the voice of Siri after Apple launched the feature in 2011. She was paid for the initial recording session but not for being Siri. Rallying behind Standing, voice actors donated to a GoFundMe that has raised nearly $7,000 towards her legal expenses and posted TikTok videos under the #StandingWithBev hashtag warning users about the feature. Standing’s supporters say the TikTok lawsuit is not just about Standing’s voice — it’s about the future of an entire industry attempting to adapt to new advancements in the field of machine learning.

Standing’s case materializes some performers’ worst fears about the control this technology gives companies over their voices. Her lawsuit claims TikTok did not pay or notify her to use her likeness for its text-to-speech feature, and that some videos using it voiced “foul and offensive language” causing “irreparable harm” to her reputation. Brands advertising on TikTok also had the text-to-speech voice at their disposal, meaning her voice could be used for explicitly commercial purposes. […] Laws protecting individuals from unauthorized clones of their voices are also in their infancy. Standing’s lawsuit invokes her right of publicity, which grants individuals the right to control commercial uses of their likeness, including their voice. In November 2020, New York became the first state to apply this right to digital replicas after years of advocacy from SAG-AFTRA, a performers’ union.
“We look to make sure that state rights of publicity are as strong as they can be, that any limitations on people being able to protect their image and voice are very narrowly drawn on first amendment lines,” Jeffrey Bennett, a general counsel for SAG-AFTRA, told Motherboard. “We look at this as a potentially great right of publicity case for this voice professional whose voice is being used in a commercial manner without her consent.”

YouTube’s Recommender AI Still a Horror Show, Finds Major Crowdsourced Study

For years YouTube’s video-recommending algorithm has stood accused of fuelling a grab bag of societal ills by feeding users an AI-amplified diet of hate speech, political extremism and/or conspiracy junk/disinformation for the profiteering motive of trying to keep billions of eyeballs stuck to its ad inventory. And while YouTube’s tech giant parent Google has, sporadically, responded to negative publicity flaring up around the algorithm’s antisocial recommendations — announcing a few policy tweaks or limiting/purging the odd hateful account — it’s not clear how far the platform’s penchant for promoting horribly unhealthy clickbait has actually been rebooted. The suspicion remains that it’s nowhere near far enough.

New research published today by Mozilla backs that notion up, suggesting YouTube’s AI continues to puff up piles of “bottom-feeding”/low-grade/divisive/disinforming content — stuff that tries to grab eyeballs by triggering people’s sense of outrage, sowing division/polarization or spreading baseless/harmful disinformation — which in turn implies that YouTube’s problem with recommending terrible stuff is indeed systemic; a side effect of the platform’s rapacious appetite to harvest views to serve ads. That YouTube’s AI is still — per Mozilla’s study — behaving so badly also suggests Google has been pretty successful at fuzzing criticism with superficial claims of reform. The mainstay of its deflective success here is likely the primary protection mechanism of keeping the recommender engine’s algorithmic workings (and associated data) hidden from public view and external oversight — via the convenient shield of “commercial secrecy.” But regulation that could help crack open proprietary AI black boxes is now on the cards — at least in Europe.

Researchers Hid Malware Inside An AI’s ‘Neurons’ And It Worked Well

According to a new study, malware can be embedded directly into the artificial neurons that make up machine learning models in a way that keeps them from being detected. The neural network would even be able to continue performing its set tasks normally. Motherboard reports:
“As neural networks become more widely used, this method will be universal in delivering malware in the future,” the authors, from the University of the Chinese Academy of Sciences, write. Using real malware samples, their experiments found that replacing up to around 50 percent of the neurons in the AlexNet model — a benchmark-setting classic in the AI field — with malware still kept the model’s accuracy rate above 93.1 percent. The authors concluded that a 178MB AlexNet model can have up to 36.9MB of malware embedded into its structure without being detected using a technique called steganography. Some of the models were tested against 58 common antivirus systems and the malware was not detected.

Other methods of hacking into businesses or organizations, such as attaching malware to documents or files, often cannot deliver malicious software en masse without being detected. The new research, on the other hand, envisions a future where an organization may bring in an off-the-shelf machine learning model for any given task (say, a chat bot, or image detection) that could be loaded with malware while performing its task well enough not to arouse suspicion. According to the study, this is because AlexNet (like many machine learning models) is made up of millions of parameters and many complex layers of neurons including what are known as fully-connected “hidden” layers. By keeping the huge hidden layers in AlexNet completely intact, the researchers found that changing some other neurons had little effect on performance.

According to the paper, in this approach the malware is “disassembled” when embedded into the network’s neurons, and assembled into functioning malware by a malicious receiver program that can also be used to download the poisoned model via an update. The malware can still be stopped if the target device verifies the model before launching it, according to the paper. It can also be detected using “traditional methods” like static and dynamic analysis. “Today it would not be simple to detect it by antivirus software, but this is only because nobody is looking in there,” cybersecurity researcher and consultant Dr. Lukasz Olejnik told Motherboard. Olejnik also warned that the malware extraction step in the process could also risk detection. Once the malware hidden in the model was compiled into, well, malware, then it could be picked up. It also might just be overkill.
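
The paper’s exact embedding scheme (replacing whole neurons in AlexNet’s fully-connected layers) is not reproduced here, but the underlying trick of hiding arbitrary bytes inside float parameters with negligible effect on the model can be sketched as below. The payload is a harmless string, the “layer” is random data, and the byte layout assumes a little-endian machine; this is an illustration of capacity, not the authors’ method.

```python
import numpy as np

def embed_bytes(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide one payload byte in the least-significant byte of each float32 weight."""
    flat = weights.astype(np.float32).copy().ravel()
    if len(payload) > flat.size:
        raise ValueError("payload larger than carrier layer")
    raw = flat.view(np.uint8).reshape(-1, 4)                          # 4 bytes per float32
    raw[: len(payload), 0] = np.frombuffer(payload, dtype=np.uint8)   # LSB on little-endian
    return flat.reshape(weights.shape)

def extract_bytes(weights: np.ndarray, length: int) -> bytes:
    """Recover `length` bytes hidden by embed_bytes."""
    raw = weights.astype(np.float32).ravel().view(np.uint8).reshape(-1, 4)
    return raw[:length, 0].tobytes()

layer = np.random.randn(512, 512).astype(np.float32)   # stand-in for a hidden layer
secret = b"harmless demo payload"                       # benign stand-in for a payload
stego = embed_bytes(layer, secret)

print(extract_bytes(stego, len(secret)))    # b'harmless demo payload'
print(np.max(np.abs(stego - layer)))        # tiny per-weight perturbation
```

As the paper itself notes, verifying or scanning a model before loading it would catch this sort of tampering; the point of the sketch is only that a large parameter tensor leaves plenty of room for a hidden payload.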

YouTube Stars Were Offered Money to Spread Vaccine Misinformation

“A mysterious marketing agency secretly offered to pay social media stars to spread disinformation about Covid-19 vaccines,” reports the BBC.

“Their plan failed when the influencers went public about the attempt to recruit them.”
An influencer marketing agency called Fazze offered to pay [Mirko Drotschmann, a German YouTuber and journalist] to promote what it said was leaked information that suggested the death rate among people who had the Pfizer vaccine was almost three times that of the AstraZeneca jab. The information provided wasn’t true. It quickly became apparent to Mirko that he was being asked to spread disinformation to undermine public confidence in vaccines in the middle of a pandemic. “I was shocked,” says Mirko “then I was curious, what’s behind all that?” In France, science YouTuber Léo Grasset received a similar offer. The agency offered him 2000 euros if he would take part.

Fazze said it was acting for a client who wished to remain anonymous…

Both Léo and Mirko were appalled by the false claims. They pretended to be interested in order to try to find out more and were provided with detailed instructions about what they should say in their videos. In stilted English, the brief instructed them to “Act like you have the passion and interest in this topic.” It told them not to mention the video had a sponsor — and instead pretend they were spontaneously giving advice out of concern for their viewers… Since Léo and Mirko blew the whistle at least four other influencers in France and Germany have gone public to reveal they also rejected Fazze’s attempts to recruit them.

But German journalist Daniel Laufer has identified two influencers who may have taken up the offer.

But who’s behind the mysterious influencer marketing agency?
Fazze is a part of AdNow, which is a digital marketing company, registered in both Russia and the UK. The BBC has made multiple attempts to contact AdNow by phone, email and even a letter couriered to their Moscow headquarters, but they have not responded. Eventually we managed to contact Ewan Tolladay, one of two directors of the British arm of AdNow – who lives in Durham. Mr. Tolladay said he had very little to do with Fazze — which he said was a joint venture between his fellow director — a Russian man called Stanislav Fesenko — and another person whose identity he didn’t know… Both the French and German authorities have launched investigations into Fazze’s approaches to influencers. But the identity of the agency’s mystery client remains unclear.

There has been speculation about the Russian connections to this scandal and the interests of the Russian state in promoting its own vaccine — Sputnik V.

French YouTuber Léo Grasset believes we’ll see more attempts to manipulate public opinion, especially young people — apparently because it’s incredibly easy.

“Just spend the same money on TikTok creators, YouTube creators,” they tell the BBC. “The whole ecosystem is perfectly built for maximum efficiency of disinformation right now.”
