Archives 2021

What Happens When Big Tech’s Datacenters Come to Small Towns?

Big tech companies are building and hiring across America, but few bring their wealth with them when they set up in new communities. Instead, they hire armies of low-paid contractors, many of whom are not guaranteed a job from one month to the next; some of the contracting companies have a history of alleged mistreatment of workers. Nor do local governments share in the companies’ wealth; instead, the tech giants negotiate deals — the details protected by non-disclosure agreements — that exempt them from paying taxes that would fund schools, roads and fire departments….

Globally, by the end of 2020, there were nearly 600 “hyperscale” data centers, where a single company runs thousands of servers and rents out cloud space to customers. That’s more than double the number from 2015. Amazon, Google and Microsoft account for more than half of those hyperscale centers, making data centers one more field dominated by America’s richest and biggest companies… Google in March said it was “investing in America” with a plan to spend $7 billion across 19 states to build more data centers and offices. Microsoft said in April that it plans to build 50 to 100 data centers each year for the foreseeable future. Amazon recently got approval to build 1.75 million square feet of data-center space in Northern Virginia, beyond the 50 data centers it already operates there. Facebook said this year it would spend billions to expand data centers in Iowa, Georgia and Utah; in March it said it was adding an 11th building to its largest data-center facility in rural Prineville, Oregon…

Facebook has spent more than $2 billion expanding its operations in Prineville, but because of the tax incentives it negotiated with local officials, the company paid a total of just $119,403.42 in taxes to Crook County last year, according to the County Assessor’s list of top taxpayers. That’s less than half the taxes paid by Brasada Ranch, a local resort. And according to the Oregon Bureau of Labor and Industries, the data center has been the subject of numerous labor complaints… “I’ve spent way too much of my life watching city councils say, ‘We need a big tech company to show that we’re future-focused,'” says Sebastian Moss, the editor of Data Center Dynamics, which tracks the industry. Towns will give away tax breaks worth hundreds of millions of dollars, his reporting has found, and then express gratitude toward tech companies that have donated a few thousand computers — worth a fraction of the tax breaks — to their cash-strapped school systems. “I sometimes wonder if they’re preying on desperation, going to places that are struggling.”

Communities give up more than tax breaks when they welcome tech companies. Data centers use huge amounts of water to cool computer equipment, yet they’re being built in the drought-stricken American West.

The article cites Bureau of Labor Statistics figures showing that 373,300 Americans were working in data processing, hosting, and related services in June — up 52% from 10 years ago.

Activist Raided By London Police After Downloading Docs Found On Google Search

The raid by four Metropolitan Police constables took place after Southwark campaigner Robert Hutchinson was reportedly accused of illegally entering a password-protected area of a website. “I was searching in Google and found links to board meeting minutes,” he told The Register. “Board reports, none of which were marked confidential. So I have no question that it was in the public domain.” The Southwark News reported that Hutchinson was arrested at 8.20am on 10 June this year at home following allegations made by Leathermarket Community Benefit Society (CBS). The society is a property development firm that wants to build flats over a children’s caged ball court in the south London borough, something Hutchinson “vocally opposes,” according to the local paper.

“There’s a directory, which you need to enter a password and a username to get into. But documents from that area were being published on Google,” explained Hutchinson. “I didn’t see a page saying ‘this is the directors’ area’ or anything like that, the documents were just available. They were just linked directly.” Police said in a statement that Hutchinson was arrested on suspicion of breaking section 1 of Britain’s Computer Misuse Act 1990 “between the 17th and 24th February 2021 and had published documents from the website on social media.” They added: “He was taken into custody and later released under investigation. Following a review of all available evidence, it was determined no offences had been committed and no further action was taken.”

Hutchinson said his identification by Leathermarket and subsequent arrest raised questions in his mind, saying police confirmed to him that the company had handed over an access log containing IP addresses: “Now, how that ended up with me being in the frame, I don’t know. There’s part of this that doesn’t add up…” While the property business did not respond to The Register’s request for comment at the time of publication, in a statement given to the Southwark News it said: “When it came to the CBS’s attention that confidential information had been accessed and subsequently shared via Twitter, the CBS made a general report of the data breach to the police — who requested a full log of visitor access to the website before deciding whether or not to progress. The police carried out their own independent investigation into who accessed the documents and how, and have now concluded their investigation.” The prepared police statement did not explain whether investigators tested Leathermarket CBS’s version of events before arresting the campaigner.

Evernote Quietly Disappeared From an Anti-Surveillance Lobbying Group’s Website

In 2013, eight tech companies were accused of funneling their users’ data to the U.S. National Security Agency under the so-called PRISM program, according to highly classified government documents leaked by NSA whistleblower Edward Snowden. Six months later, the tech companies formed a coalition under the name Reform Government Surveillance, which, as the name suggests, was created to lobby lawmakers for reforms to government surveillance laws. The idea was simple enough: call on lawmakers to limit surveillance to targeted threats rather than dragnet collection of Americans’ private data, provide greater oversight, and allow companies to be more transparent about the kinds of secret orders for user data that they receive.

Apple, Facebook, Google, LinkedIn, Microsoft, Twitter, Yahoo and AOL were the founding members of Reform Government Surveillance, or RGS, and over the years added Amazon, Dropbox, Evernote, Snap and Zoom as members. But then sometime in June 2019, Evernote quietly disappeared from the RGS website without warning. What’s even more strange is that nobody noticed for two years, not even Evernote. “We hadn’t realized our logo had been removed from the Reform Government Surveillance website,” said an Evernote spokesperson, when reached for comment by TechCrunch. “We are still members.”

TikTok Lawsuit Highlights How AI Is Screwing Over Voice Actors

With only 30 minutes of audio, companies can now create a digital clone of your voice and make it say words you never said. Using machine learning, voice AI companies like VocaliD can create synthetic voices from a person’s recorded speech — adopting unique qualities like speaking rhythm, pronunciation of consonants and vowels, and intonation. For tech companies, the ability to generate any sentence with a realistic-sounding human voice is an exciting, cost-saving frontier. But for the voice actors whose recordings form the foundation of text-to-speech (TTS) voices, this technology threatens to disrupt their livelihoods, raising questions about fair compensation and human agency in the age of AI.
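
This kind of cloning no longer requires proprietary tooling. As a rough illustration (not VocaliD’s or TikTok’s actual pipeline), the open-source Coqui TTS package can synthesize speech in a target speaker’s voice from a short reference clip; the model name is a real Coqui model, but the file paths are placeholders:

```python
# pip install TTS  (Coqui TTS, an open-source text-to-speech toolkit)
from TTS.api import TTS

# YourTTS is a multilingual model that supports zero-shot voice cloning
# from a short reference recording of the target speaker.
tts = TTS(model_name="tts_models/multilingual/multi-dataset/your_tts")

# Synthesize a sentence the speaker never said, in their cloned voice.
tts.tts_to_file(
    text="This is a sentence the original speaker never recorded.",
    speaker_wav="reference_speaker.wav",  # placeholder reference clip
    language="en",
    file_path="cloned_output.wav",
)
```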

At the center of this reckoning is voice actress Bev Standing, who is suing TikTok after alleging the company used her voice for its text-to-speech feature without compensation or consent. It is not the first such case; voice actress Susan Bennett discovered that audio she recorded for another company was repurposed to be the voice of Siri after Apple launched the feature in 2011. She was paid for the initial recording session but not for being Siri. Rallying behind Standing, voice actors donated to a GoFundMe that has raised nearly $7,000 towards her legal expenses and posted TikTok videos under the #StandingWithBev hashtag warning users about the feature. Standing’s supporters say the TikTok lawsuit is not just about Standing’s voice — it’s about the future of an entire industry attempting to adapt to new advancements in the field of machine learning.

Standing’s case materializes some performers’ worst fears about the control this technology gives companies over their voices. Her lawsuit claims TikTok neither paid nor notified her before using her likeness for its text-to-speech feature, and that some videos using it voiced “foul and offensive language,” causing “irreparable harm” to her reputation. Brands advertising on TikTok also had the text-to-speech voice at their disposal, meaning her voice could be used for explicitly commercial purposes. […] Laws protecting individuals from unauthorized clones of their voices are also in their infancy. Standing’s lawsuit invokes her right of publicity, which grants individuals the right to control commercial uses of their likeness, including their voice. In November 2020, New York became the first state to apply this right to digital replicas after years of advocacy from SAG-AFTRA, a performers’ union.
“We look to make sure that state rights of publicity are as strong as they can be, that any limitations on people being able to protect their image and voice are very narrowly drawn on first amendment lines,” Jeffrey Bennett, a general counsel for SAG-AFTRA, told Motherboard. “We look at this as a potentially great right of publicity case for this voice professional whose voice is being used in a commercial manner without her consent.”

YouTube’s Recommender AI Still a Horror Show, Finds Major Crowdsourced Study

For years YouTube’s video-recommending algorithm has stood accused of fuelling a grab bag of societal ills by feeding users an AI-amplified diet of hate speech, political extremism and/or conspiracy junk/disinformation for the profiteering motive of trying to keep billions of eyeballs stuck to its ad inventory. And while YouTube’s tech giant parent Google has, sporadically, responded to negative publicity flaring up around the algorithm’s antisocial recommendations — announcing a few policy tweaks or limiting/purging the odd hateful account — it’s not clear how far the platform’s penchant for promoting horribly unhealthy clickbait has actually been rebooted. The suspicion remains that it has gone nowhere near far enough.

New research published today by Mozilla backs that notion up, suggesting YouTube’s AI continues to puff up piles of “bottom-feeding”/low-grade/divisive/disinforming content — stuff that tries to grab eyeballs by triggering people’s sense of outrage, sowing division/polarization or spreading baseless/harmful disinformation — which in turn implies that YouTube’s problem with recommending terrible stuff is indeed systemic; a side effect of the platform’s rapacious appetite to harvest views to serve ads. That YouTube’s AI is still — per Mozilla’s study — behaving so badly also suggests Google has been pretty successful at fuzzing criticism with superficial claims of reform. The mainstay of its deflective success here is likely the primary protection mechanism of keeping the recommender engine’s algorithmic workings (and associated data) hidden from public view and external oversight — via the convenient shield of “commercial secrecy.” But regulation that could help crack open proprietary AI blackboxes is now on the cards — at least in Europe.

Researchers Hid Malware Inside An AI’s ‘Neurons’ And It Worked Well

According to a new study, malware can be embedded directly into the artificial neurons that make up machine learning models in a way that keeps it from being detected. The neural network would even be able to continue performing its set tasks normally. Motherboard reports:
“As neural networks become more widely used, this method will be universal in delivering malware in the future,” the authors, from the University of the Chinese Academy of Sciences, write. Using real malware samples, their experiments found that replacing up to around 50 percent of the neurons in the AlexNet model — a benchmark-setting classic in the AI field — with malware still kept the model’s accuracy rate above 93.1 percent. The authors concluded that a 178MB AlexNet model can have up to 36.9MB of malware embedded into its structure, using a technique called steganography, without the change being detected. Some of the models were tested against 58 common antivirus systems and the malware was not detected.

Other methods of hacking into businesses or organizations, such as attaching malware to documents or files, often cannot deliver malicious software en masse without being detected. The new research, on the other hand, envisions a future where an organization may bring in an off-the-shelf machine learning model for any given task (say, a chat bot, or image detection) that could be loaded with malware while performing its task well enough not to arouse suspicion. According to the study, this is because AlexNet (like many machine learning models) is made up of millions of parameters and many complex layers of neurons including what are known as fully-connected “hidden” layers. By keeping the huge hidden layers in AlexNet completely intact, the researchers found that changing some other neurons had little effect on performance.

According to the paper, in this approach the malware is “disassembled” when embedded into the network’s neurons, and assembled into functioning malware by a malicious receiver program that can also be used to download the poisoned model via an update. The malware can still be stopped if the target device verifies the model before launching it, according to the paper. It can also be detected using “traditional methods” like static and dynamic analysis. “Today it would not be simple to detect it by antivirus software, but this is only because nobody is looking in there,” cybersecurity researcher and consultant Dr. Lukasz Olejnik told Motherboard. Olejnik also warned that the malware extraction step in the process could also risk detection. Once the malware hidden in the model was compiled into, well, malware, then it could be picked up. It also might just be overkill.
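
The paper embeds its payload by rewriting whole neuron parameters; the sketch below is a deliberately simplified illustration of the same underlying idea, hiding arbitrary bytes in the low-order bytes of float32 weights so the numbers barely change. The payload here is a harmless string, and nothing about this mirrors the paper’s exact encoding:

```python
import numpy as np

def embed_payload(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide payload bytes in the least-significant byte of each float32 weight."""
    flat = weights.astype(np.float32).ravel()   # independent copy of the tensor
    raw = flat.view(np.uint8).reshape(-1, 4)    # 4 bytes per float32
    if len(payload) > raw.shape[0]:
        raise ValueError("payload larger than carrier tensor")
    # Byte 0 is the mantissa's low byte on little-endian machines, so each
    # weight shifts by at most ~0.003% of its magnitude, which is why model
    # accuracy barely moves.
    raw[: len(payload), 0] = np.frombuffer(payload, dtype=np.uint8)
    return flat.reshape(weights.shape)

def extract_payload(weights: np.ndarray, length: int) -> bytes:
    """Recover the hidden bytes (the job of the paper's 'receiver program')."""
    raw = np.ascontiguousarray(weights, dtype=np.float32).ravel().view(np.uint8)
    return raw.reshape(-1, 4)[:length, 0].tobytes()

rng = np.random.default_rng(0)
w = rng.standard_normal(1_000).astype(np.float32)   # stand-in for a weight tensor
secret = b"demo payload standing in for malware bytes"
w2 = embed_payload(w, secret)
assert extract_payload(w2, len(secret)) == secret
print("max weight change:", np.max(np.abs(w2 - w)))  # tiny perturbation
```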

YouTube Stars Were Offered Money to Spread Vaccine Misinformation

“A mysterious marketing agency secretly offered to pay social media stars to spread disinformation about Covid-19 vaccines,” reports the BBC.

“Their plan failed when the influencers went public about the attempt to recruit them.”
An influencer marketing agency called Fazze offered to pay [Mirko Drotschmann, a German YouTuber and journalist] to promote what it said was leaked information that suggested the death rate among people who had the Pfizer vaccine was almost three times that of the AstraZeneca jab. The information provided wasn’t true. It quickly became apparent to Mirko that he was being asked to spread disinformation to undermine public confidence in vaccines in the middle of a pandemic. “I was shocked,” says Mirko “then I was curious, what’s behind all that?” In France, science YouTuber Léo Grasset received a similar offer. The agency offered him 2000 euros if he would take part.

Fazze said it was acting for a client who wished to remain anonymous…

Both Léo and Mirko were appalled by the false claims. They pretended to be interested in order to try to find out more and were provided with detailed instructions about what they should say in their videos. In stilted English, the brief instructed them to “Act like you have the passion and interest in this topic.” It told them not to mention the video had a sponsor — and instead pretend they were spontaneously giving advice out of concern for their viewers… Since Léo and Mirko blew the whistle at least four other influencers in France and Germany have gone public to reveal they also rejected Fazze’s attempts to recruit them.

But German journalist Daniel Laufer has identified two influencers who may have taken up the offer.

But who’s behind the mysterious influencer marketing agency?
Fazze is a part of AdNow, a digital marketing company registered in both Russia and the UK. The BBC has made multiple attempts to contact AdNow by phone, email and even a letter couriered to their Moscow headquarters, but they have not responded. Eventually we managed to contact Ewan Tolladay, one of the two directors of the British arm of AdNow, who lives in Durham. Mr. Tolladay said he had very little to do with Fazze, which he said was a joint venture between his fellow director, a Russian man called Stanislav Fesenko, and another person whose identity he didn’t know… Both the French and German authorities have launched investigations into Fazze’s approaches to influencers. But the identity of the agency’s mystery client remains unclear.

There has been speculation about the Russian connections to this scandal and the interests of the Russian state in promoting its own vaccine — Sputnik V.

French YouTuber Léo Grasset believes we’ll see more attempts to manipulate public opinion, especially among young people — apparently because it’s incredibly easy.

“Just spend the same money on TikTok creators, YouTube creators,” he tells the BBC. “The whole ecosystem is perfectly built for maximum efficiency of disinformation right now.”

For Millions of Americans, Unemployment Benefits Require Facial Recognition Scanning

Millions of Americans “are being instructed to use ID.me, along with its facial recognition software, to get their unemployment benefits,” reports CNN. The software compares their photo ID with a selfie video they take on their phone with the company’s software — but some privacy advocates are concerned.

A rapidly growing number of U.S. states, including Colorado, California and New York, turned to ID.me in hopes of cutting down on a surge of fraudulent claims for state and federal benefits that cropped up during the pandemic alongside a tidal wave of authentic unemployment claims. As of this month, 27 states’ unemployment agencies had entered contracts with ID.me, according to the company, with 25 of them already using its technology. ID.me said it is in talks with seven more…

The company’s rapid advance at state unemployment agencies marks the latest chapter in the story of facial recognition software’s spread across the United States. It also highlights how this controversial technology gained a foothold during the pandemic and now appears destined to remain part of our lives for the foreseeable future…

Several ID.me users told CNN Business about problems they had verifying their identities with the company, which ranged from the facial recognition technology failing to recognize their face to waiting for hours to reach a human for a video chat after encountering problems with the technology. A number of people who claim to have had issues with ID.me have taken to social media to beg the company for help with verification, express their own concerns about its face-data collection or simply rant, often in response to ID.me’s own posts on Twitter… From ID.me’s perspective, its service is making it easier for a wide range of people to access essential government services, as it avoids the common practice of using information gleaned from data brokers and credit bureaus as a means of checking identities. The company said this lets it give a green light to those who don’t have a credit history, or may have changed their name, for instance — people who might otherwise have more trouble getting verified.

However, it doesn’t sit well with employee and privacy advocates and civil rights groups interviewed by CNN Business. They have concerns about the facial recognition technology itself and about the ID.me verification process’s reliance on access to a smartphone or computer and the internet, which may be out of reach for the people to whom unemployment dollars are most critical… ID.me said it does not sell user data — which includes biometric and related information such as selfies people upload, data related to facial analyses, and recordings of video chats users participate in with ID.me — but it does keep it. Biometric data, like the facial geometry produced from a user’s selfie, may be kept for years after a user closes their account… In March, ID.me announced raising $100 million in funding from investors including hedge fund Viking Global Investors and CapitalG, which is Google parent company Alphabet’s independent growth fund. With that funding round, ID.me said it was valued at $1.5 billion… “We’re verifying more than 1% of the American adult population each quarter, and that’s starting to compress more to like 45 or 50 days,” said ID.me CEO Blake Hall. The company has more than 50 million users, he said, and signs up more than 230,000 new ones each day.
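
ID.me’s matching system is proprietary, but conceptually a 1:1 check like the one described above reduces to comparing face embeddings from the ID photo and a frame of the selfie video. A minimal sketch using the open-source face_recognition library, with placeholder file names:

```python
import face_recognition

# Load the ID-document photo and one frame grabbed from the selfie video.
id_image = face_recognition.load_image_file("id_photo.jpg")
selfie_image = face_recognition.load_image_file("selfie_frame.jpg")

# Compute 128-dimensional face embeddings; an empty list means no face found.
id_encodings = face_recognition.face_encodings(id_image)
selfie_encodings = face_recognition.face_encodings(selfie_image)

if not id_encodings or not selfie_encodings:
    raise SystemExit("no face detected in one of the images")

# Euclidean distance between embeddings; the library's conventional
# match threshold is 0.6 (lower = stricter).
distance = face_recognition.face_distance([id_encodings[0]], selfie_encodings[0])[0]
print(f"distance={distance:.3f} -> {'match' if distance < 0.6 else 'no match'}")
```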

CNN also quotes a man who complains the state never gave him an option. “If I wanted unemployment, I had no choice but to do this.”

Police Are Telling ShotSpotter To Alter Evidence From Gunshot-Detecting AI

On May 31 last year, 25-year-old Safarain Herring was shot in the head and dropped off at St. Bernard Hospital in Chicago by a man named Michael Williams. He died two days later. Chicago police eventually arrested the 64-year-old Williams and charged him with murder (Williams maintains that Herring was hit in a drive-by shooting). A key piece of evidence in the case is video surveillance footage showing Williams’ car stopped on the 6300 block of South Stony Island Avenue at 11:46 p.m. – the time and location where police say they know Herring was shot. How did they know that’s where the shooting happened? Police said ShotSpotter, a surveillance system that uses hidden microphone sensors to detect the sound and location of gunshots, generated an alert for that time and place. Except that’s not entirely true, according to recent court filings.

That night, 19 ShotSpotter sensors detected a percussive sound at 11:46 p.m. and determined the location to be 5700 South Lake Shore Drive – a mile away from the site where prosecutors say Williams committed the murder, according to a motion filed by Williams’ public defender. The company’s algorithms initially classified the sound as a firework. That weekend had seen widespread protests in Chicago in response to George Floyd’s murder, and some of those protesting lit fireworks. But after the 11:46 p.m. alert came in, a ShotSpotter analyst manually overrode the algorithms and “reclassified” the sound as a gunshot. Then, months later and after “post-processing,” another ShotSpotter analyst changed the alert’s coordinates to a location on South Stony Island Drive near where Williams’ car was seen on camera. “Through this human-involved method, the ShotSpotter output in this case was dramatically transformed from data that did not support criminal charges of any kind to data that now forms the centerpiece of the prosecution’s murder case against Mr. Williams,” the public defender wrote in the motion.

The document is what’s known as a Frye motion – a request for a judge to examine and rule on whether a particular forensic method is scientifically valid enough to be entered as evidence. Rather than defend ShotSpotter’s technology and its employees’ actions in a Frye hearing, the prosecutors withdrew all ShotSpotter evidence against Williams. The case isn’t an anomaly, and the pattern it represents could have huge ramifications for ShotSpotter in Chicago, where the technology generates an average of 21,000 alerts each year. The technology is also currently in use in more than 100 cities. Motherboard’s review of court documents from the Williams case and other trials in Chicago and New York State, including testimony from ShotSpotter’s favored expert witness, suggests that the company’s analysts frequently modify alerts at the request of police departments – some of which appear to be grasping for evidence that supports their narrative of events.
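
ShotSpotter’s classification and location algorithms are not public, but the basic localization idea is standard time-difference-of-arrival multilateration: given arrival times at known sensor positions, solve for the source position whose predicted time differences best match the observations. A minimal sketch with made-up coordinates:

```python
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def locate_source(sensors, arrival_times):
    """Estimate a 2-D source position from arrival times at known sensors
    (time-difference-of-arrival multilateration)."""
    def residuals(p):
        # predicted travel times from candidate position p to each sensor
        t = np.linalg.norm(sensors - p, axis=1) / SPEED_OF_SOUND
        # compare predicted vs. observed differences relative to sensor 0
        return (t - t[0]) - (arrival_times - arrival_times[0])
    return least_squares(residuals, x0=sensors.mean(axis=0)).x

# Synthetic example: four sensors on a 500 m square, source inside it.
sensors = np.array([[0.0, 0.0], [500.0, 0.0], [0.0, 500.0], [500.0, 500.0]])
true_source = np.array([120.0, 260.0])
arrival_times = np.linalg.norm(sensors - true_source, axis=1) / SPEED_OF_SOUND

print(locate_source(sensors, arrival_times))  # ~ [120. 260.]
```

A mis-located or re-classified alert, as alleged in the Williams case, is equivalent to overriding this computation’s output by hand.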

Maine Passes Facial Recognition Law

The new law prohibits government use of facial recognition except in specifically outlined situations, the broadest exception being when police have probable cause that an unidentified person in an image committed a serious crime, or for proactive fraud prevention. Because Maine police will not have direct access to facial recognition software, they will instead be able to ask the FBI and Maine Bureau of Motor Vehicles (BMV) to run these searches for them.

Crucially, the law plugs loopholes that police have used in the past to gain access to the technology, like informally asking other agencies or third parties to run backchannel searches for them. Logs of all facial recognition searches by the BMV must be created and are designated as public records. The only other state-wide facial recognition law was enacted by Washington in 2020, but many privacy advocates were dissatisfied with the specifics of the law. Maine’s new law also gives citizens the ability to sue the state if they’ve been unlawfully targeted by facial recognition, which was notably absent from Washington’s regulation. If facial recognition searches are performed illegally, they must be deleted and cannot be used as evidence.

A Government Watchdog May Have Missed Clearview AI Use By Five Federal Agencies

A government inquiry into federal agencies’ deployment of facial recognition may have overlooked some organizations’ use of popular biometric identification software Clearview AI, calling into question whether authorities can understand the extent to which the emerging technology has been used by taxpayer-funded entities. In a 92-page report published by the Government Accountability Office on Tuesday, five agencies — the US Capitol Police, the US Probation Office, the Pentagon Force Protection Agency, Transportation Security Administration, and the Criminal Investigation Division at the Internal Revenue Service — said they didn’t use Clearview AI between April 2018 and March 2020. This, however, contradicts internal Clearview data previously reviewed by BuzzFeed News.

In April, BuzzFeed News revealed that those five agencies were among more than 1,800 US taxpayer-funded entities that had employees who tried or used Clearview AI, based on internal company data. As part of that story, BuzzFeed News published a searchable table disclosing all the federal, state, and city government organizations whose employees are listed in the data as having used the facial recognition software as of February 2020. While the GAO was tasked with “review[ing] federal law enforcement use of facial recognition technology,” the discrepancies between the report, which was based on survey responses, and BuzzFeed News’ past reporting suggest that even the US government may not be equipped to track how its own agencies access surveillance tools like Clearview. The GAO report surveyed 42 federal agencies in total, 20 of which reported that they either owned their own facial recognition system or used one developed by a third party between April 2018 and March 2020. Ten federal agencies — including Immigration and Customs Enforcement and Customs and Border Protection — said they specifically used Clearview AI.

Facebook Tests Prompts That Ask Users If They’re Worried a Friend is ‘Becoming an Extremist’

Some Facebook users in the United States are being served a prompt that asks if they are worried that someone they know might be becoming an extremist. Others are being notified that they may have been exposed to extremist content.

It is all part of a test the social media company is running that stems from its Redirect Initiative, which aims to combat violent extremism, Andy Stone, a Facebook spokesperson, told CNN. Screen shots of the alerts surfaced on social media Thursday. “This test is part of our larger work to assess ways to provide resources and support to people on Facebook who may have engaged with or were exposed to extremist content, or may know someone who is at risk,” Stone said. “We are partnering with NGOs and academic experts in this space and hope to have more to share in the future,” Stone added. One of the alerts, a screen grab of which made the rounds on social media Thursday, asks users, “Are you concerned that someone you know is becoming an extremist?” “We care about preventing extremism on Facebook,” explained that alert, according to a screen grab posted on social media. “Others in your situation have received confidential support.”

How Big Tech created a data ‘treasure trove’ for police

When U.S. law enforcement officials need to cast a wide net for information, they’re increasingly turning to the vast digital ponds of personal data created by Big Tech companies via the devices and online services that have hooked billions of people around the world.

Data compiled by four of the biggest tech companies shows that law enforcement requests for user information — phone calls, emails, texts, photos, shopping histories, driving routes and more — have more than tripled in the U.S. since 2015. Police are also increasingly savvy about covering their tracks so as not to alert suspects of their interest.

That’s the backdrop for recent revelations that the Trump-era U.S. Justice Department sought data from Apple, Microsoft and Google about members of Congress, their aides and news reporters in leak investigations — then pursued court orders that blocked those companies from informing their targets.

In just the first half of 2020 — the most recent data available — Apple, Google, Facebook and Microsoft together fielded more than 112,000 data requests from local, state and federal officials. The companies agreed to hand over some data in 85% of those cases. Facebook, including its Instagram service, accounted for the largest number of disclosures.

Consider Newport, Rhode Island, a coastal city of 24,000 residents that attracts a flood of summer tourists. Fewer than 100 officers patrol the city — but they make multiple requests a week for online data from tech companies.

That’s because most crimes — from larceny and financial scams to a recent fatal house party stabbing at a vacation rental booked online — can be at least partly traced on the internet. Tech providers, especially social media platforms, offer a “treasure trove of information” that can help solve them, said Lt. Robert Salter, a supervising police detective in Newport.

Fired by Bot at Amazon: ‘It’s You Against the Machine’

Contract drivers say algorithms terminate them by email—even when they have done nothing wrong.

Stephen Normandin spent almost four years racing around Phoenix delivering packages as a contract driver for Amazon.com Inc. Then one day, he received an automated email. The algorithms tracking him had decided he wasn’t doing his job properly.

The 63-year-old Army veteran was stunned. He’d been fired by a machine.

Normandin says Amazon punished him for things beyond his control that prevented him from completing his deliveries, such as locked apartment complexes. Amazon assigned him some pre-dawn deliveries at apartment complexes when their gates were still locked, a common complaint among Flex drivers. The algorithm instructs drivers in such instances to deliver packages to the main office, but that wasn’t open either. Normandin called the customer as instructed—a long shot because most people don’t answer calls from unfamiliar numbers, especially early morning. He called driver support, which couldn’t get through to the customer either. Meanwhile, the clock was ticking, and the algorithm was taking note.

When Ryan Cope was deactivated in 2019, he didn’t bother arguing or consider paying for arbitration. By then, Cope had already decided there was no way he could meet the algorithms’ demands. Driving miles along winding dirt roads outside Denver in the snow, he often shook his head in disbelief that Amazon expected the customer to get the package within two hours.

When drivers do challenge poor ratings, they can’t tell if they’re communicating with real people. Responses often include just a first name or no name at all, and the replies typically apply to a variety of situations rather than a specific problem. Even if a name is attached, a machine most likely generated the first few email responses, according to people familiar with the matter.

When human managers get involved, they typically conduct a hasty review—if they do one at all—because they must meet their own performance standards. A former employee at a driver support call center said dozens of part-time seasonal workers with little training were assigned to oversee issues for millions of drivers.

Data Centres Exacerbate Droughts

A data center can easily use up to 1.25 million gallons of water each day — and “More data centers are being built every day by some of America’s largest technology companies,” reports NBC News, “including Amazon, Microsoft and Google and used by millions of customers.”

Almost 40 percent of the world’s hyperscale data centers are in the United States, and Amazon, Google and Microsoft account for more than half of the total. The U.S. also has at least 1,800 “colocation” data centers, warehouses filled with a variety of smaller companies’ server hardware that share the same cooling system, electricity and security, according to Data Center Map. They are typically smaller than hyperscale data centers but, research has shown, more resource-intensive, as they maintain a variety of computer systems operating at different levels of efficiency.

Many data center operators are drawn to water-starved regions in the West, in part due to the availability of solar and wind energy. Researchers at Virginia Tech estimate that one-fifth of data centers draw water from moderately to highly stressed watersheds, mostly in the Western United States, according to a paper published in April…

The growth in the industry shows no signs of slowing. The research company Gartner predicts that spending on global data center infrastructure will reach $200 billion this year, an increase of 6 percent from 2020, followed by 3-4 percent annually over the next three years. This growth comes at a time of record temperatures and drought in the United States, particularly in the West. “The typical data center uses about 3-5 million gallons of water per day — the same amount of water as a city of 30,000-50,000 people,” said Venkatesh Uddameri, professor and director of the Water Resources Center at Texas Tech University. Although these data centers have become much more energy and water efficient over the last decade, and don’t use as much water as other industries such as agriculture, this level of water use can still create potential competition with local communities over the water supply in areas where water is scarce, he added…
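
Uddameri’s comparison is easy to sanity-check: the low end of his range works out to roughly 100 gallons per person per day, which is in the ballpark of typical US residential water use.

```python
gallons_per_day = 3_000_000   # low end of the quoted data-center range
people = 30_000               # low end of the equivalent-city range
print(gallons_per_day / people)   # 100.0 gallons per person per day
```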

Sergio Loureiro, vice president of core operations for Microsoft, said that the company has pledged to be “water positive” by 2030, which means it plans to replenish more water than it consumes globally. This includes reducing the company’s water use and investing in community replenishment and conservation projects near where it builds facilities.

Amazon did not respond to requests for comment.

Despite pandemic shutdowns, carbon dioxide and methane surged in 2020

Carbon dioxide levels are now higher than at any time in the past 3.6 million years. Levels of the two most important anthropogenic greenhouse gases, carbon dioxide and methane, continued their unrelenting rise in 2020 despite the economic slowdown caused by the coronavirus pandemic response, NOAA announced today. The global surface average for carbon dioxide (CO2), calculated from measurements collected at NOAA’s remote sampling locations, was 412.5 parts per million (ppm) in 2020, rising by 2.6 ppm during the year. The global rate of increase was the fifth-highest in NOAA’s 63-year record, following 1987, 1998, 2015 and 2016. The annual mean at NOAA’s Mauna Loa Observatory in Hawaii was 414.4 ppm during 2020.

The economic recession was estimated to have reduced carbon emissions by about 7 percent during 2020. Without the economic slowdown, the 2020 increase would have been the highest on record, according to Pieter Tans, senior scientist at NOAA’s Global Monitoring Laboratory. Since 2000, the global CO2 average has grown by 43.5 ppm, an increase of 12 percent.
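
The NOAA figures quoted here are internally consistent, as a quick check shows:

```python
avg_2020 = 412.5           # ppm, 2020 global surface average (NOAA)
growth_since_2000 = 43.5   # ppm added since 2000
avg_2000 = avg_2020 - growth_since_2000
print(avg_2000)                                   # 369.0 ppm in 2000
print(round(100 * growth_since_2000 / avg_2000))  # 12 (percent), as stated
```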

The atmospheric burden of CO2 is now comparable to where it was during the Mid-Pliocene Warm Period around 3.6 million years ago, when concentrations of carbon dioxide ranged from about 380 to 450 parts per million. During that time sea level was about 78 feet higher than today, the average temperature was 7 degrees Fahrenheit higher than in pre-industrial times, and studies indicate large forests occupied areas of the Arctic that are now tundra.

Also: https://www.cbsnews.com/news/climate-change-carbon-dioxide-highest-level-million-years/

TikTok sued for billions over use of children’s data

Lawyers will allege that TikTok takes children’s personal information, including phone numbers, videos, exact location and biometric data, without sufficient warning, transparency or the necessary consent required by law, and without children or parents knowing what is being done with that information. TikTok has more than 800 million users worldwide and parent firm ByteDance made billions in profits last year, with the vast majority of that coming via advertising revenue.

US Intelligence may partner with private firms to monitor “extremist chatter” online

The Biden administration is considering using outside firms to track extremist chatter by Americans online, an effort that would expand the government’s ability to gather intelligence but could draw criticism over surveillance of US citizens. The Department of Homeland Security is limited in how it can monitor citizens online without justification and is banned from activities like assuming false identities to gain access to private messaging apps used by extremist groups such as the Proud Boys or Oath Keepers. Instead, federal authorities can only browse through unprotected information on social media sites like Twitter and Facebook and other open online platforms. A source familiar with the effort said it is not about decrypting data but rather using outside entities who can legally access these private groups to gather large amounts of information that could help DHS identify key narratives as they emerge. The plan being discussed inside DHS, according to multiple sources, would, in effect, allow the department to circumvent those limits.

Even as the DHS eyes a more robust use of its intelligence authorities, it continues to face fierce scrutiny on Capitol Hill over its handling of the Portland protests last summer — raising the possibility that at least some lawmakers will push back on the effort. The department — then led by Trump appointees but staffed by career officials, some of whom remain on the job — collected and disseminated open source reports on U.S. journalists who were publicly reporting on the protests.

Samsung Lost More than $268 Million During Power Shutdown in Texas

Samsung executives said the company’s semiconductor business saw profits fall in the first quarter, mainly due to disruptions and product losses caused by the shutdown. Samsung’s Austin fab was offline for more than a month after it was shut down due to power outages during the freeze… About 71,000 wafers were affected by production disruptions, said Han Jinman, executive vice-president of Samsung’s memory chip business. He estimated the wafer loss is equivalent to $268 million to $357 million.

Semiconductor fabs are typically operational 24 hours a day for years on end. Each batch of wafers — a thin slice of semiconductor used for the fabrication of integrated circuits — can take 45 to 60 days to make, so a shutdown of any length can mean a loss of weeks of work. Restoring a fab is also a complicated process, and even in the best of circumstances can take a week… NXP Semiconductors was also among the facilities that were shut down in February, as its two Austin fabrication facilities were offline for nearly a month. In March, the company estimated the shutdown would result in a $100 million loss in revenue and a month of wafer production…

Han said Samsung is working with the state, municipal government and local utility companies to find solutions to prevent similar shutdowns in the future.

Amazon had sales income of €44bn in Europe in 2020 but paid no corporation tax

Corporate filings in Luxembourg revealed that the company collected record sales income of €44bn (£38bn) in Europe last year but did not have to pay any corporation tax to the Grand Duchy.

Accounts for Amazon EU Sarl, through which it sells products to hundreds of millions of households in the UK and across Europe, show that despite collecting record income, the Luxembourg unit made a €1.2bn loss and therefore paid no tax.

In fact the unit was granted €56m in tax credits it can use to offset any future tax bills should it turn a profit. The company has €2.7bn worth of carried forward losses stored up, which can be used against any tax payable on future profits.

“Amazon’s revenues have soared under the pandemic while our high streets struggle, yet it continues to shift its profits to tax havens like Luxembourg to avoid paying its fair share of tax. These big digital companies all rely on our public services, our infrastructure…