Resources

Man flies to Florida to attack another player over an online gaming dispute

An online gaming dispute made its way to the real world when a New Jersey man flew to Florida to attack another player with a hammer, authorities said.

Edward Kang, 20, is charged with attempted second-degree murder and armed burglary with a mask, according to Nassau County court records. He was arrested early Sunday morning.

Kang and the victim, another young man around the same age as Kang, had never met in real life, but they both played ArcheAge, a medieval fantasy massively multiplayer online role-playing game, authorities said. The game’s publisher announced in April that it would shut down its European and North American servers on June 27, citing a declining number of active players.

Kang flew from Newark, New Jersey, to Jacksonville, Florida, last Thursday after telling his mother that he was going to visit a friend that he had met while playing a video game, officials said. Officials didn’t say how Kang learned where the victim lives. Upon arrival, Kang took an Uber to a hotel in Fernandina Beach, about 35 miles north of Jacksonville, and then bought a hammer at a local hardware store, deputies said.

Kang went to the victim’s Fernandina Beach home, which was unlocked, around 2 a.m. Sunday, authorities said. The victim was walking out of his bedroom when he was confronted by Kang, who hit him on the head with the hammer, officials said. The two struggled as the victim called for help. His stepfather responded and helped to restrain Kang until police arrived.

The victim suffered several head wounds that were not considered life-threatening, officials said. He received staples at the hospital.

The satellites using radar to peer at Earth in minute detail

Synthetic aperture radar (SAR) allows satellites to bounce radar signals off the ground and interpret the echo – and it can even peer through clouds.

Clouds cover around two-thirds of the world at any one time, preventing conventional satellites from seeing much of the planet.

But now a declassified technology known as synthetic aperture radar (SAR) can be installed on satellites to “see” the Earth’s surface in the dark, through the clouds (or the smoke of wildfires), to provide a constant unobscured view of our planet, and show changes on the Earth’s surface in great detail.

Previously used to equip only a relatively small number of large commercial satellites, this technology is now being combined with constellations of inexpensive nanosatellites in low-Earth orbit by start-ups such as Iceye and Capella Space. The goal is to provide round-the-clock observation of nearly anywhere on the planet for everyone from non-governmental organisations to military customers.

1 in 9 American Kids Diagnosed With ADHD, New Study Finds

A new report from the Centers for Disease Control and Prevention calls attention-deficit/hyperactivity disorder an “expanding public health concern.”

Researchers found that in 2022, 7.1 million kids and adolescents in the U.S. had received an ADHD diagnosis — a million more children than in 2016. That jump in diagnoses was not surprising, given that the data was collected during the pandemic, says Melissa Danielson, a statistician with the CDC’s National Center on Birth Defects and Developmental Disabilities and the study’s lead author. She notes that other studies have found that many children experienced heightened stress, depression and anxiety during the pandemic. “A lot of those diagnoses… might have been the result of a child being assessed for a different diagnosis, something like anxiety or depression, and their clinician identifying that the child also had ADHD,” Danielson says. The increase in diagnoses also comes amid growing awareness of ADHD — and the different ways that it can manifest in children…

The study, which appears in the Journal of Clinical Child & Adolescent Psychology, was based on data from the National Survey of Children’s Health, which gathers detailed information from parents.

Deep Fake Scams Growing in Global Frequency and Sophistication, Victim Warns

In an elaborate scam in January, a finance worker “was duped into attending a video call with people he believed were the chief financial officer and other members of staff,” CNN reported. But Hong Kong police later said that all of them were deepfake re-creations, which duped the employee into transferring $25 million.
According to police, the worker had initially suspected he had received a phishing email from the company’s UK office, as it specified the need for a secret transaction to be carried out. However, the worker put aside his doubts after the video call because other people in attendance had looked and sounded just like colleagues he recognized.

Now the targeted company has been revealed: a major engineering consulting firm, with 18,500 employees across 34 offices:
A spokesperson for London-based Arup told CNN on Friday that it notified Hong Kong police in January about the fraud incident, and confirmed that fake voices and images were used. “Unfortunately, we can’t go into details at this stage as the incident is still the subject of an ongoing investigation. However, we can confirm that fake voices and images were used,” the spokesperson said in an emailed statement. “Our financial stability and business operations were not affected and none of our internal systems were compromised,” the person added…

Authorities around the world are growing increasingly concerned about the sophistication of deepfake technology and the nefarious uses it can be put to. In an internal memo seen by CNN, Arup’s East Asia regional chairman, Michael Kwok, said the “frequency and sophistication of these attacks are rapidly increasing globally, and we all have a duty to stay informed and alert about how to spot different techniques used by scammers.”

The company’s global CIO emailed CNN this statement. “Like many other businesses around the globe, our operations are subject to regular attacks, including invoice fraud, phishing scams, WhatsApp voice spoofing, and deepfakes.

“What we have seen is that the number and sophistication of these attacks has been rising sharply in recent months.”

Microplastics Found in Every Human Testicle in Study

Microplastics have been found in human testicles, with researchers saying the discovery might be linked to declining sperm counts in men. From a report:
The scientists tested 23 human testes, as well as 47 testes from pet dogs. They found microplastic pollution in every sample. The human testicles had been preserved and so their sperm count could not be measured. However, the sperm count in the dogs’ testes could be assessed and was lower in samples with higher contamination with PVC. The study demonstrates a correlation but further research is needed to prove microplastics cause sperm counts to fall.

Sperm counts in men have been falling for decades, with chemical pollution such as pesticides implicated by many studies. Microplastics have also recently been discovered in human blood, placentas and breast milk, indicating widespread contamination of people’s bodies. The impact on health is as yet unknown but microplastics have been shown to cause damage to human cells in the laboratory.

Vast amounts of plastic waste are dumped in the environment and microplastics have polluted the entire planet, from the summit of Mount Everest to the deepest oceans. People are known to consume the tiny particles via food and water as well as breathing them in. The particles could lodge in tissue and cause inflammation, as air pollution particles do, or chemicals in the plastics could cause harm. In March, doctors warned of potentially life-threatening effects after finding a substantially raised risk of stroke, heart attack and earlier death in people whose blood vessels were contaminated with microscopic plastics.

38% of Webpages That Existed in 2013 Are No Longer Accessible a Decade Later

A new Pew Research Center analysis shows just how fleeting online content actually is:
1. A quarter of all webpages that existed at one point between 2013 and 2023 are no longer accessible, as of October 2023. In most cases, this is because an individual page was deleted or removed on an otherwise functional website.
2. For older content, this trend is even starker. Some 38% of webpages that existed in 2013 are not available today, compared with 8% of pages that existed in 2023.

This “digital decay” occurs in many different online spaces. We examined the links that appear on government and news websites, as well as in the “References” section of Wikipedia pages as of spring 2023. This analysis found that:
1. 23% of news webpages contain at least one broken link, as do 21% of webpages from government sites. News sites with a high level of site traffic and those with less are about equally likely to contain broken links. Local-level government webpages (those belonging to city governments) are especially likely to have broken links.
2. 54% of Wikipedia pages contain at least one link in their “References” section that points to a page that no longer exists.[…]
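Measuring decay across a list of links comes down to classifying each URL’s fetch result; a minimal sketch of the idea in Python (the status-code threshold is my assumption of how “no longer accessible” might be operationalized, not Pew’s actual methodology):

```python
from urllib.request import Request, urlopen

def fetch_status(url, timeout=10):
    # Returns the HTTP status code, or None if the host is unreachable
    # (DNS failure, connection refused, timeout).
    try:
        req = Request(url, method="HEAD", headers={"User-Agent": "linkcheck"})
        with urlopen(req, timeout=timeout) as resp:
            return resp.status
    except OSError:  # urllib.error.URLError subclasses OSError
        return None

def decay_rate(statuses):
    # statuses: one entry per link, an HTTP status code or None.
    # 4xx/5xx responses and unreachable hosts count as decayed.
    statuses = list(statuses)
    if not statuses:
        return 0.0
    dead = sum(1 for s in statuses if s is None or s >= 400)
    return dead / len(statuses)
```

A list of four links where one returns 404 and one host no longer resolves would score a decay rate of 0.5 — the same kind of figure the Pew percentages summarize at web scale.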

Police Found Ways to Use Facial Recognition Tech After Their Cities Banned It

As cities and states push to restrict the use of facial recognition technologies, some police departments have quietly found a way to keep using the controversial tools: asking for help from other law enforcement agencies that still have access. Officers in Austin and San Francisco — two of the largest cities where police are banned from using the technology — have repeatedly asked police in neighboring towns to run photos of criminal suspects through their facial recognition programs, according to a Washington Post review of police documents…

Austin police officers received the results of at least 13 face searches from a neighboring police department since the city’s 2020 ban — and appeared to get hits on some of them, according to documents obtained by The Post through public records requests and sources who shared them on the condition of anonymity. “That’s him! Thank you very much,” one Austin police officer wrote in response to an array of photos sent to him by an officer in Leander, Tex., who ran a facial recognition search, documents show. The man displayed in the pictures, John Curry Jr., was later charged with aggravated assault for allegedly charging toward someone with a knife, and is currently in jail awaiting trial. Curry’s attorney declined to comment.

“Police officers’ efforts to skirt these bans have not been previously reported and highlight the challenge of reining in police use of facial recognition,” the article concludes.

It also points out that the technology “has played a role in the wrongful arrests of at least seven innocent Americans,” according to the lawsuits they filed after charges against them were dismissed.

Robot Dogs Armed With AI-aimed Rifles Undergo US Marines Special Ops Evaluation

The United States Marine Forces Special Operations Command (MARSOC) is currently evaluating a new generation of robotic “dogs” developed by Ghost Robotics, with the potential to be equipped with gun systems from defense tech company Onyx Industries, reports The War Zone.

While MARSOC is testing Ghost Robotics’ quadrupedal unmanned ground vehicles (called “Q-UGVs” for short) for various applications, including reconnaissance and surveillance, it’s the possibility of arming them with weapons for remote engagement that may draw the most attention. But it’s not unprecedented: The US Marine Corps has also tested robotic dogs armed with rocket launchers in the past.

MARSOC is currently in possession of two armed Q-UGVs undergoing testing, as confirmed by Onyx Industries staff. Their gun systems are based on Onyx’s SENTRY remote weapon system (RWS), which features an AI-enabled digital imaging system and can automatically detect and track people, drones, or vehicles, reporting potential targets to a remote human operator who could be located anywhere in the world. The system maintains human-in-the-loop control for fire decisions and cannot fire autonomously. On LinkedIn, Onyx Industries shared a video of a similar system in action.

In a statement to The War Zone, MARSOC states that weaponized payloads are just one of many use cases being evaluated. MARSOC also clarifies that comments made by Onyx Industries to The War Zone regarding the capabilities and deployment of these armed robot dogs “should not be construed as a capability or a singular interest in one of many use cases during an evaluation.”

Companies Are So Desperate For Data Centers They’re Leasing Them Before They’re Even Built

In the first quarter of 2024, what amounts to about half of the existing supply of data center megawattage in the US is under construction, according to real estate services firm CBRE. And 84% of that is already leased. Typically that rate had been about 50% the last few years — already notably higher than other real estate classes. “I’m astonished and impressed by the demand for facilities yet to be fully constructed,” CBRE Data Center Research Director Gordon Dolven told Sherwood.

That advanced interest means that despite the huge amount of construction, there’s still going to be a shortage of data centers to meet demand. In other words, data center vacancy rates are staying low and rents high. Nationwide the vacancy rates are near record lows of 3.7% and average asking rent for data centers was up 19% year over year, according to CBRE. It was up 42% in Northern Virginia, where many data centers are located. These sorts of price jumps are “unprecedented” compared with other types of real estate. For comparison, rents for industrial and logistics real estate, another hot asset class used in e-commerce, are expected to go up 8% this year.

Humans Now Share the Web Equally With Bots

Humans now share the web equally with bots, according to a major new report — as some fear that the internet is dying. In recent months, the so-called “dead internet theory” has gained new popularity. It suggests that much of the content online is in fact automatically generated, and that the number of humans on the web is dwindling in comparison with bot accounts. Now a new report from cyber security company Imperva suggests that it is increasingly becoming true. Nearly half, 49.6 per cent, of all internet traffic came from bots last year, its “Bad Bot Report” indicates. That is up two percentage points on the year before, and is the highest level seen since the report began in 2013. In some countries, the picture is worse. In Ireland, 71 per cent of internet traffic is automated, it said.

Some of that rise is the result of the adoption of generative artificial intelligence and large language models. Companies that build those systems use bots to scrape the internet and gather data that can then be used to train them. Some of those bots are becoming increasingly sophisticated, Imperva warned. More and more of them come from residential internet connections, which makes them look more legitimate. “Automated bots will soon surpass the proportion of internet traffic coming from humans, changing the way that organizations approach building and protecting their websites and applications,” said Nanhi Singh, general manager for application security at Imperva. “As more AI-enabled tools are introduced, bots will become omnipresent.”

EU Opens Probe of TikTok Lite, Citing Concerns About Addictive Design

The European Union has opened a second formal investigation into TikTok under its Digital Services Act (DSA), an online governance and content moderation framework. The investigation centers around TikTok Lite’s “Task and Reward” feature that may harm mental health, especially among minors, by promoting addictive behavior. TechCrunch reports:
The Commission also said it’s minded to impose interim measures that could force the company to suspend access to the TikTok Lite app in the EU while it investigates concerns the app poses mental health risks to users. The EU has given TikTok until April 24 to argue against the measure, however, meaning the app remains accessible for now. Penalties for confirmed violations of the DSA can reach up to 6% of global annual turnover, so ByteDance, TikTok’s parent, could face hefty fines if EU enforcers do end up deciding it has broken the law.

The EU’s first TikTok probe covers multiple issues including the protection of minors, advertising transparency, data access for researchers, and the risk management of addictive design and harmful content. Hence it said the latest investigation will specifically focus on TikTok Lite, a version of the video sharing platform which launched earlier this month in France and Spain and includes a mechanism that allows users to earn points for doing things like watching or liking videos. Points earned through TikTok Lite can be exchanged for things like Amazon gift vouchers or TikTok’s own digital currency for gifting to creators. The Commission is worried this so-called “task and reward” feature could negatively impact the mental health of young users by “stimulating addictive behavior.”

The EU wrote that the second probe will focus on TikTok’s compliance with the DSA obligation to conduct and submit a risk assessment report prior to the launch of the “Task and Reward Lite” program, with a particular focus on negative effects on mental health, including minors’ mental health. It also said it will look into measures taken by TikTok to mitigate those risks. In a press release announcing the action, the EU said ByteDance failed to produce a risk assessment about the feature, which it had asked to see last week — when it gave the company 24 hours to produce the document. Since it failed to submit the risk assessment paperwork on April 18, the Commission wrote that it suspects a “prima facie infringement of the DSA.”

A Chess Formula Is Taking Over the World

In October 2003, Mark Zuckerberg created his first viral site: not Facebook, but FaceMash. Then a college freshman, he hacked into Harvard’s online dorm directories, gathered a massive collection of students’ headshots, and used them to create a website on which Harvard students could rate classmates by their attractiveness, literally and figuratively head-to-head. The site, a mean-spirited prank recounted in the opening scene of The Social Network, got so much traction so quickly that Harvard shut down his internet access within hours. The math that powered FaceMash — and, by extension, set Zuckerberg on the path to building the world’s dominant social-media empire — was reportedly, of all things, a formula for ranking chess players: the Elo system.

Fundamentally, what an Elo rating does is predict the outcome of chess matches by assigning every player a number that fluctuates based purely on performance. If you beat a slightly higher-ranked player, your rating goes up a little, but if you beat a much higher-ranked player, your rating goes up a lot (and theirs, conversely, goes down a lot). The higher the rating, the more matches you should win. That is what Elo was designed for, at least. FaceMash and Zuckerberg aside, people have deployed Elo ratings for many sports — soccer, football, basketball — and for domains as varied as dating, finance, and primatology. If something can be turned into a competition, it has probably been Elo-ed. Somehow, a simple chess algorithm has become an all-purpose tool for rating everything. In other words, when it comes to the preferred way to rate things, Elo ratings have the highest Elo rating. […]
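The update behavior described above fits in a few lines of code; a minimal sketch of Elo’s rule (the 400-point scale and a K-factor of 32 are conventional chess defaults, not anything specific to the uses discussed here):

```python
def expected_score(r_a, r_b):
    # Probability that A beats B under the Elo model:
    # a 400-point rating gap corresponds to 10-to-1 odds.
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def update(r_a, r_b, score_a, k=32):
    # score_a is 1 for a win, 0.5 for a draw, 0 for a loss.
    # Ratings move by K times the surprise (actual minus expected result),
    # so the exchange is zero-sum between the two players.
    e_a = expected_score(r_a, r_b)
    new_a = r_a + k * (score_a - e_a)
    new_b = r_b + k * ((1 - score_a) - (1 - e_a))
    return new_a, new_b
```

Beating an equal-rated opponent moves each rating by K/2 points, while upsetting a player rated 400 points higher moves them by nearly the full K — exactly the “a little versus a lot” behavior described above, and all the formula needs from a domain is a stream of win/lose/draw outcomes.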

Elo ratings don’t inherently have anything to do with chess. They’re based on a simple mathematical formula that works just as well for any one-on-one, zero-sum competition — which is to say, pretty much all sports. In 1997, a statistician named Bob Runyan adapted the formula to rank national soccer teams — a project so successful that FIFA eventually adopted an Elo system for its official rankings. Not long after, the statistician Jeff Sagarin applied Elo to rank NFL teams outside their official league standings. Things really took off when the new ESPN-owned version of Nate Silver’s 538 launched in 2014 and began making Elo ratings for many different sports. Some sports proved trickier than others. NBA basketball in particular exposed some of the system’s shortcomings, Neil Paine, a stats-focused sportswriter who used to work at 538, told me. It consistently underrated heavyweight teams, for example, in large part because it struggled to account for the meaninglessness of much of the regular season and the fact that either team might not be trying all that hard to win a given game. The system assumed uniform motivation across every team and every game. Pretty much anything, it turns out, can be framed as a one-on-one, zero-sum game.
Arpad Emmerich Elo, creator of the Elo rating system, understood the limitations of his invention. “It is a measuring tool, not a device of reward or punishment,” he once remarked. “It is a means to compare performances, assess relative strength, not a carrot waved before a rabbit, or a piece of candy given to a child for good behavior.”

Taser Company Axon Is Selling AI That Turns Body Cam Audio Into Police Reports

Axon on Tuesday announced a new tool called Draft One that uses artificial intelligence built on OpenAI’s GPT-4 Turbo model to transcribe audio from body cameras and automatically turn it into a police report. Axon CEO Rick Smith told Forbes that police officers will then be able to review the document to ensure accuracy. From the report:
Axon claims one early tester of the tool, the police department of Fort Collins, Colorado, has seen an 82% decrease in time spent writing reports. “If an officer spends half their day reporting, and we can cut that in half, we have an opportunity to potentially free up 25% of an officer’s time to be back out policing,” Smith said. These reports, though, are often used as evidence in criminal trials, and critics are concerned that relying on AI could put people at risk by depending on language models that are known to “hallucinate,” or make things up, as well as display racial bias, either blatantly or unconsciously.

“It’s kind of a nightmare,” said Dave Maass, surveillance technologies investigations director at the Electronic Frontier Foundation. “Police, who aren’t specialists in AI, and aren’t going to be specialists in recognizing the problems with AI, are going to use these systems to generate language that could affect millions of people in their involvement with the criminal justice system. What could go wrong?” Smith acknowledged there are dangers. “When people talk about bias in AI, it really is: Is this going to exacerbate racism by taking training data that’s going to treat people differently?” he told Forbes. “That was the main risk.”

Smith said Axon is recommending police don’t use the AI to write reports for incidents as serious as a police shooting, where vital information could be missed. “An officer-involved shooting is likely a scenario where it would not be used, and I’d probably advise people against it, just because there’s so much complexity, the stakes are so high.” He said some early customers are only using Draft One for misdemeanors, though others are writing up “more significant incidents,” including use-of-force cases. Axon, however, won’t have control over how individual police departments use the tools.

Can I phone a friend? How cops circumvent face recognition bans

Police in multiple major US cities have figured out a trick to circumvent their cities’ bans on facial recognition technology. Just ask a friend in a city without any such restrictions to do it for you.

It’s not immediately clear how widespread such side-stepping of facial recognition restrictions in the US may be. According to the Washington Post’s review of police documents in Austin, Texas, and San Francisco, however, it’s definitely something that’s gone on in at least two major US metropolitan areas.

WaPo reported over the weekend that SFPD and APD have both sought help from neighboring cop shops on occasion, though with different levels of success. San Francisco cops have reportedly sought facial recognition searches from other agencies on five occasions since 2019, but never got a match back; Austin police, on the other hand, have sent at least 13 searches since 2020.

“Some” of the searches from APD returned hits, and multiple suspects in Austin have been arrested and charged as a result of the searches, WaPo said.

AI Will Suck Up 500% More Power in UK in 10 Years, Grid CEO Says

Electricity demand from UK data centers will jump sixfold over the next 10 years as a boom in AI requires increased computing power, according to the head of National Grid.

That will ramp up pressure on the country’s electricity network, which must move vast quantities of renewable energy from as far away as Scottish wind farms to data centers around London. And it’s a grid already under strain from the accelerating electrification of home heating, transportation and industries.

“Future growth in foundational technologies like artificial intelligence and quantum computing will mean larger-scale, energy-intensive computing infrastructure,” National Grid Chief Executive Officer John Pettigrew said Tuesday at a conference in Oxford. It’s an outlook replicated in many other countries, which are grappling with how to fund the massive spending required to expand capacity. Global electricity demand from data centers, AI and cryptocurrencies may more than double over the next three years, according to International Energy Agency forecasts.

Screen Time Robs Average Toddler of Hearing 1,000 Words Spoken By Adult a Day, Study Finds

The average toddler is missing out on hearing more than 1,000 words spoken by an adult each day due to screen time, setting back their language skills, a first-of-its-kind study has found. The research, published on Tuesday in the Journal of the American Medical Association (JAMA) Pediatrics, tracked 220 Australian families over two years to measure the relationship between family screen use and children’s language environment. Families recorded all the audio around their child using advanced speech recognition technology over a 16-hour period on an average day at home. They repeated this process every six months between the ages of 12 and 36 months. The lead researcher, Dr Mary Brushe from the Telethon Kids Institute, said: “The technology we use is essentially like a Fitbit, but instead of counting the number of steps, this device counts the number of words spoken by, to and around the child.” The device also picked up electronic noise, which the researchers analyzed to calculate screen time.

The researchers found young children’s exposure to screens including TVs and phones was interfering with their language opportunities, with the association most pronounced at three years of age. For every extra minute of screen time, the three-year-olds in the study were hearing seven fewer words, speaking five fewer words themselves and engaging in one less conversation. The average three-year-old in the study was exposed to two hours and 52 minutes of screen time a day. Researchers estimated this led to those children being exposed to 1,139 fewer adult words, 843 fewer child words and 194 fewer conversations. Because the study couldn’t capture parents’ silent phone use, including reading emails, texting or quietly scrolling through websites or social media, Brushe said they might have underestimated how much screen usage is affecting children.

A language-rich home environment was critical in supporting infants and toddlers’ language development, Brushe said. While some educational children’s shows were designed to help children’s language skills, very young kids in the age group of the study could struggle to translate television shows into their own life, she said. This study did not differentiate between whether children were watching high- or low-quality screen content.

The FBI Is Using Push Notifications To Track Criminals

The Post did a little digging into court records and found evidence of at least 130 search warrants filed by the feds for push notification data in cases spanning 14 states. In those cases, FBI officials asked tech companies like Google, Apple, and Facebook to fork over data related to a suspect’s mobile notifications, then used the data to implicate the suspect in criminal behavior linked to a particular app, even though many of those apps were supposedly anonymous communication platforms, like Wickr.

How exactly is this possible? Push notifications, which are provided by a mobile operating system provider, include embedded metadata that can be examined to understand the use of the mobile apps on a particular phone. Apps come laced with a quiet identifier, a “push token,” which is stored on the corporate servers of a company like Apple or another phone manufacturer after a user signs up to use a particular app. Those tokens can later be used to identify the person using the app, based on the information associated with the device on which the app was downloaded. Even turning off push notifications on your device doesn’t necessarily disable this feature, experts contend. […]

If finding new ways to catch pedophiles and terrorists doesn’t seem like the worst thing in the world, the Post article highlights the voices of critics who fear that this kind of mobile data could be used to track people who have not committed serious crimes — like political activists or women seeking abortions in states where the procedure has been restricted.

How the Pentagon Learned To Use Targeted Ads To Find Its Targets

In 2019, a government contractor and technologist named Mike Yeagley began making the rounds in Washington, DC. He had a blunt warning for anyone in the country’s national security establishment who would listen: The US government had a Grindr problem. A popular dating and hookup app, Grindr relied on the GPS capabilities of modern smartphones to connect potential partners in the same city, neighborhood, or even building. The app can show how far away a potential partner is in real time, down to the foot. But to Yeagley, Grindr was something else: one of the tens of thousands of carelessly designed mobile phone apps that leaked massive amounts of data into the opaque world of online advertisers. That data, Yeagley knew, was easily accessible by anyone with a little technical know-how. So Yeagley — a technology consultant then in his late forties who had worked in and around government projects nearly his entire career — made a PowerPoint presentation and went out to demonstrate precisely how that data was a serious national security risk.

As he would explain in a succession of bland government conference rooms, Yeagley was able to access the geolocation data on Grindr users through a hidden but ubiquitous entry point: the digital advertising exchanges that serve up the little digital banner ads along the top of Grindr and nearly every other ad-supported mobile app and website. This was possible because of the way online ad space is sold, through near-instantaneous auctions in a process called real-time bidding. Those auctions were rife with surveillance potential. You know that ad that seems to follow you around the internet? It’s tracking you in more ways than one. In some cases, it’s making your precise location available in near-real time to both advertisers and people like Mike Yeagley, who specialized in obtaining unique data sets for government agencies.

Working with Grindr data, Yeagley began drawing geofences — creating virtual boundaries in geographical data sets — around buildings belonging to government agencies that do national security work. That allowed Yeagley to see what phones were in certain buildings at certain times, and where they went afterwards. He was looking for phones belonging to Grindr users who spent their daytime hours at government office buildings. If the device spent most workdays at the Pentagon, the FBI headquarters, or the National Geospatial-Intelligence Agency building at Fort Belvoir, for example, there was a good chance its owner worked for one of those agencies. Then he started looking at the movement of those phones through the Grindr data. When they weren’t at their offices, where did they go? A small number of them had lingered at highway rest stops in the DC area at the same time and in proximity to other Grindr users — sometimes during the workday and sometimes while in transit between government facilities. For other Grindr users, he could infer where they lived, see where they traveled, even guess at whom they were dating.
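The geofencing step is conceptually simple; a toy sketch of the idea in Python (the great-circle distance check and the record layout are illustrative assumptions, not Yeagley’s actual tooling):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two lat/lon points, in kilometers
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 6371 * 2 * asin(sqrt(a))

def devices_in_geofence(pings, center, radius_km):
    # pings: iterable of (device_id, lat, lon) location records.
    # Returns the set of device IDs observed inside the virtual boundary.
    lat0, lon0 = center
    return {d for d, lat, lon in pings
            if haversine_km(lat, lon, lat0, lon0) <= radius_km}
```

Run the same check against a building’s coordinates during work hours and against residential areas at night, and the intersection of the two device sets is the kind of home/work inference the article describes — which is why raw ad-exchange location data is so sensitive.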

Intelligence agencies have a long and unfortunate history of trying to root out LGBTQ Americans from their workforce, but this wasn’t Yeagley’s intent. He didn’t want anyone to get in trouble. No disciplinary actions were taken against any employee of the federal government based on Yeagley’s presentation. His aim was to show that buried in the seemingly innocuous technical data that comes off every cell phone in the world is a rich story — one that people might prefer to keep quiet. Or at the very least, not broadcast to the whole world. And that each of these intelligence and national security agencies had employees who were recklessly, if obliviously, broadcasting intimate details of their lives to anyone who knew where to look. As Yeagley showed, all that information was available for sale, for cheap. And it wasn’t just Grindr, but rather any app that had access to a user’s precise location — other dating apps, weather apps, games. Yeagley chose Grindr because it happened to generate a particularly rich set of data and its user base might be uniquely vulnerable.
The account goes into great detail about how these data analysis techniques, notably a program called Locomotive developed by the contractor PlanetRisk, enabled the tracking of mobile devices associated with Russian President Vladimir Putin's entourage. By analyzing commercial adtech data, including precise geolocation information collected from mobile advertising bid requests, analysts were able to monitor the movements of phones that frequently accompanied Putin, revealing the locations and movements of his security personnel, aides, and support staff.

This capability underscored the surveillance potential of commercially available data, providing insights into the activities and security arrangements of high-profile individuals without directly compromising their personal devices.


How AI is Taking Water From the Desert

Microsoft built two datacenters west of Phoenix, with plans for seven more (serving, among other companies, OpenAI). "Microsoft has been adding data centers at a stupendous rate, spending more than $10 billion on cloud-computing capacity in every quarter of late," writes the Atlantic. "One semiconductor analyst called this 'the largest infrastructure buildout that humanity has ever seen.'"

But is this part of a concerning trend?
Microsoft plans to absorb its excess heat with a steady flow of air and, as needed, evaporated drinking water. Use of the latter is projected to reach more than 50 million gallons every year. That might be a burden in the best of times. As of 2023, it seemed absurd. Phoenix had just endured its hottest summer ever, with 55 days of temperatures above 110 degrees. The weather strained electrical grids and compounded the effects of the worst drought the region has faced in more than a millennium. The Colorado River, which provides drinking water and hydropower throughout the region, has been dwindling. Farmers have already had to fallow fields, and a community on the eastern outskirts of Phoenix went without tap water for most of the year… [T]here were dozens of other facilities I could visit in the area, including those run by Apple, Amazon, Meta, and, soon, Google. Not too far from California, and with plenty of cheap land, Greater Phoenix is among the fastest-growing hubs in the U.S. for data centers….

Microsoft, the biggest tech firm on the planet, has made ambitious plans to tackle climate change. In 2020, it pledged to be carbon-negative (removing more carbon than it emits each year) and water-positive (replenishing more clean water than it consumes) by the end of the decade. But the company also made an all-encompassing commitment to OpenAI, the most important maker of large-scale AI models. In so doing, it helped kick off a global race to build and deploy one of the world’s most resource-intensive digital technologies. Microsoft operates more than 300 data centers around the world, and in 2021 declared itself “on pace to build between 50 and 100 new datacenters each year for the foreseeable future….”

Researchers at UC Riverside estimated last year… that global AI demand could cause data centers to suck up 1.1 trillion to 1.7 trillion gallons of freshwater by 2027. A separate study from a university in the Netherlands, this one peer-reviewed, found that AI servers’ electricity demand could grow, over the same period, to be on the order of 100 terawatt hours per year, about as much as the entire annual consumption of Argentina or Sweden… [T]ensions over data centers’ water use are cropping up not just in Arizona but also in Oregon, Uruguay, and England, among other places in the world.

The article points out that Microsoft “is transitioning some data centers, including those in Arizona, to designs that use less or no water, cooling themselves instead with giant fans.” And an analysis (commissioned by Microsoft) on the impact of one building said it would use about 56 million gallons of drinking water each year, equivalent to the amount used by 670 families, according to the article. “In other words, a campus of servers pumping out ChatGPT replies from the Arizona desert is not about to make anyone go thirsty.”
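The analysis's comparison holds up to a back-of-envelope check: spreading 56 million gallons a year across 670 families works out to roughly 230 gallons per household per day, in the ballpark of commonly cited U.S. household water-use figures.

```python
# Sanity check of the commissioned analysis's figure: 56 million gallons
# of drinking water per year, stated as equivalent to 670 families' use.
gallons_per_year = 56_000_000
families = 670

per_family_year = gallons_per_year / families  # ~83,600 gallons/year
per_family_day = per_family_year / 365         # ~229 gallons/day

print(round(per_family_year), round(per_family_day))
```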


Public trust in AI is sinking across the board

Globally, trust in AI companies has dropped to 53%, down from 61% five years ago. In the U.S., trust has fallen 15 percentage points (from 50% to 35%) over the same period. Trust in AI is low across political lines: Democrats' trust in AI companies stands at 38%, independents at 25%, and Republicans at 24%. Tech is also losing its lead as the most trusted sector. Eight years ago, technology was the most trusted industry in 90% of the countries Edelman studies; today, it holds that position in only half of them.

People in developing countries are more likely to embrace AI than those in developed ones. Respondents in France, Canada, Ireland, UK, U.S., Germany, Australia, the Netherlands and Sweden reject the growing use of AI by a three-to-one margin, Edelman said. By contrast, acceptance outpaces resistance by a wide margin in developing markets such as Saudi Arabia, India, China, Kenya, Nigeria and Thailand.
