Resources

A Chess Formula Is Taking Over the World

In October 2003, Mark Zuckerberg created his first viral site: not Facebook, but FaceMash. Then a college freshman, he hacked into Harvard’s online dorm directories, gathered a massive collection of students’ headshots, and used them to create a website on which Harvard students could rate classmates by their attractiveness, literally and figuratively head-to-head. The site, a mean-spirited prank recounted in the opening scene of The Social Network, got so much traction so quickly that Harvard shut down his internet access within hours. The math that powered FaceMash — and, by extension, set Zuckerberg on the path to building the world’s dominant social-media empire — was reportedly, of all things, a formula for ranking chess players: the Elo system.

Fundamentally, what an Elo rating does is predict the outcome of chess matches by assigning every player a number that fluctuates based purely on performance. If you beat a slightly higher-ranked player, your rating goes up a little, but if you beat a much higher-ranked player, your rating goes up a lot (and theirs, conversely, goes down a lot). The higher the rating, the more matches you should win. That is what Elo was designed for, at least. FaceMash and Zuckerberg aside, people have deployed Elo ratings for many sports — soccer, football, basketball — and for domains as varied as dating, finance, and primatology. If something can be turned into a competition, it has probably been Elo-ed. Somehow, a simple chess algorithm has become an all-purpose tool for rating everything. In other words, when it comes to the preferred way to rate things, Elo ratings have the highest Elo rating. […]
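
The mechanics are compact enough to sketch in a few lines. Below is a minimal implementation of the standard Elo update rule, using the 400-point scale and a K-factor of 32 that are conventional in chess; whatever constants FaceMash actually used aren't public, so treat this as an illustration of the idea, not Zuckerberg's code:

```python
def expected_score(rating_a, rating_b):
    """Probability that player A beats player B under the Elo model."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def update(rating_a, rating_b, score_a, k=32):
    """Return both players' new ratings after one game.

    score_a is 1 for an A win, 0.5 for a draw, 0 for an A loss.
    """
    expected_a = expected_score(rating_a, rating_b)
    new_a = rating_a + k * (score_a - expected_a)
    new_b = rating_b + k * ((1 - score_a) - (1 - expected_a))
    return new_a, new_b

# An upset: a 1400-rated player beats a 1600-rated player and gains
# far more than they would for beating an equal opponent.
print(update(1400, 1600, 1))
```

Note that the gains and losses cancel exactly: rating points only move between the two players, which is what makes the system zero-sum.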

Elo ratings don’t inherently have anything to do with chess. They’re based on a simple mathematical formula that works just as well for any one-on-one, zero-sum competition — which is to say, pretty much all sports. In 1997, a statistician named Bob Runyan adapted the formula to rank national soccer teams — a project so successful that FIFA eventually adopted an Elo system for its official rankings. Not long after, the statistician Jeff Sagarin applied Elo to rank NFL teams outside their official league standings. Things really took off when the new ESPN-owned version of Nate Silver’s 538 launched in 2014 and began making Elo ratings for many different sports. Some sports proved trickier than others. NBA basketball in particular exposed some of the system’s shortcomings, Neil Paine, a stats-focused sportswriter who used to work at 538, told me. It consistently underrated heavyweight teams, for example, in large part because it struggled to account for the meaninglessness of much of the regular season and the fact that either team might not be trying all that hard to win a given game. The system assumed uniform motivation across every team and every game. Pretty much anything, it turns out, can be framed as a one-on-one, zero-sum game.
Arpad Emrick Elo, creator of the Elo rating system, understood the limitations of his invention. “It is a measuring tool, not a device of reward or punishment,” he once remarked. “It is a means to compare performances, assess relative strength, not a carrot waved before a rabbit, or a piece of candy given to a child for good behavior.”

Taser Company Axon Is Selling AI That Turns Body Cam Audio Into Police Reports

Axon on Tuesday announced a new tool called Draft One that uses artificial intelligence built on OpenAI’s GPT-4 Turbo model to transcribe audio from body cameras and automatically turn it into a police report. Axon CEO Rick Smith told Forbes that police officers will then be able to review the document to ensure accuracy. From the report:
Axon claims one early tester of the tool, the Fort Collins, Colorado, Police Department, has seen an 82% decrease in time spent writing reports. “If an officer spends half their day reporting, and we can cut that in half, we have an opportunity to potentially free up 25% of an officer’s time to be back out policing,” Smith said. These reports, though, are often used as evidence in criminal trials, and critics are concerned that relying on AI could put people at risk by depending on language models that are known to “hallucinate,” or make things up, as well as display racial bias, either blatantly or unconsciously.

“It’s kind of a nightmare,” said Dave Maass, surveillance technologies investigations director at the Electronic Frontier Foundation. “Police, who aren’t specialists in AI, and aren’t going to be specialists in recognizing the problems with AI, are going to use these systems to generate language that could affect millions of people in their involvement with the criminal justice system. What could go wrong?” Smith acknowledged there are dangers. “When people talk about bias in AI, it really is: Is this going to exacerbate racism by taking training data that’s going to treat people differently?” he told Forbes. “That was the main risk.”

Smith said Axon is recommending police don’t use the AI to write reports for incidents as serious as a police shooting, where vital information could be missed. “An officer-involved shooting is likely a scenario where it would not be used, and I’d probably advise people against it, just because there’s so much complexity, the stakes are so high.” He said some early customers are only using Draft One for misdemeanors, though others are writing up “more significant incidents,” including use-of-force cases. Axon, however, won’t have control over how individual police departments use the tools.

Can I phone a friend? How cops circumvent face recognition bans

Police in multiple major US cities have figured out a trick to circumvent their cities’ bans on facial recognition technology. Just ask a friend in a city without any such restrictions to do it for you.

It’s not immediately clear how widespread such side-stepping of facial recognition restrictions in the US may be. According to the Washington Post’s review of police documents in Austin, Texas, and San Francisco, however, it’s definitely something that’s gone on in at least two major US metropolitan areas.

WaPo reported over the weekend that SFPD and APD have both sought help from neighboring cop shops on occasion, though with different levels of success. San Francisco cops have reportedly sought facial recognition searches from other agencies on five occasions since 2019, but never got a match back; Austin police, on the other hand, have sent at least 13 searches since 2020.

“Some” of the searches from APD returned hits, and multiple suspects in Austin have been arrested and charged as a result of the searches, WaPo said.

AI Will Suck Up 500% More Power in UK in 10 Years, Grid CEO Says

Electricity demand from UK data centers will jump sixfold over the next 10 years as a boom in AI requires increased computing power, according to the head of National Grid.

That will ramp up pressure on the country’s electricity network, which must move vast quantities of renewable energy from as far away as Scottish wind farms to data centers around London. And it’s a grid already under strain from the accelerating electrification of home heating, transportation and industries.

“Future growth in foundational technologies like artificial intelligence and quantum computing will mean larger-scale, energy-intensive computing infrastructure,” National Grid Chief Executive Officer John Pettigrew said Tuesday at a conference in Oxford. It’s an outlook replicated in many other countries, which are grappling with how to fund the massive spending required to expand capacity. Global electricity demand from data centers, AI and cryptocurrencies may more than double over the next three years, according to International Energy Agency forecasts.

Screen Time Robs Average Toddler of Hearing 1,000 Words Spoken By Adult a Day, Study Finds

The average toddler is missing out on hearing more than 1,000 words spoken by an adult each day due to screen time, setting back their language skills, a first-of-its-kind study has found. The research, published on Tuesday in the Journal of the American Medical Association (Jama) Pediatrics, tracked 220 Australian families over two years to measure the relationship between family screen use and children’s language environment. Families recorded all the audio around their child using advanced speech recognition technology over a 16-hour period on an average day at home. They repeated this process every six months between the ages of 12 and 36 months. The lead researcher, Dr Mary Brushe from the Telethon Kids Institute, said: “The technology we use is essentially like a Fitbit, but instead of counting the number of steps, this device counts the number of words spoken by, to and around the child.” The device also picked up electronic noise, which the researchers analyzed to calculate screen time.

The researchers found young children’s exposure to screens including TVs and phones was interfering with their language opportunities, with the association most pronounced at three years of age. For every extra minute of screen time, the three-year-olds in the study were hearing seven fewer words, speaking five fewer words themselves and engaging in one less conversation. The average three-year-old in the study was exposed to two hours and 52 minutes of screen time a day. Researchers estimated this led to those children being exposed to 1,139 fewer adult words, 843 fewer child words and 194 fewer conversations. Because the study couldn’t capture parents’ silent phone use, including reading emails, texting or quietly scrolling through websites or social media, Brushe said they might have underestimated how much screen usage is affecting children.
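
A quick back-of-envelope check ties these figures together. The per-minute estimate is rounded, so multiplying it out slightly overshoots the study's reported total; the implied unrounded coefficient is closer to 6.6 adult words per minute:

```python
# Reported figures from the study (per-minute estimate is rounded to 7)
screen_minutes = 2 * 60 + 52          # average daily screen time: 2h52m = 172 minutes
words_lost_per_minute = 7             # rounded estimate for adult words lost per minute

estimated_loss = screen_minutes * words_lost_per_minute
print(estimated_loss)                 # 1204 -- in the same ballpark as the reported 1,139

implied_rate = 1139 / screen_minutes  # the unrounded coefficient the total implies
print(round(implied_rate, 1))         # about 6.6 words per minute
```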

A language-rich home environment was critical in supporting infants and toddlers’ language development, Brushe said. While some educational children’s shows were designed to help children’s language skills, very young kids in the age group of the study could struggle to translate television shows into their own life, she said. This study did not differentiate between whether children were watching high- or low-quality screen content.

The FBI Is Using Push Notifications To Track Criminals

The Post did a little digging into court records and found evidence of at least 130 search warrants filed by the feds for push notification data in cases spanning 14 states. In those cases, FBI officials asked tech companies like Google, Apple, and Facebook to fork over data related to a suspect’s mobile notifications, then used the data to implicate the suspect in criminal behavior linked to a particular app, even though many of those apps were supposedly anonymous communication platforms, like Wickr.

How exactly is this possible? Push notifications, which are provided by a mobile operating system provider, include embedded metadata that can be examined to understand the use of the mobile apps on a particular phone. Apps come laced with a quiet identifier, a “push token,” which is stored on the corporate servers of a company like Apple or another phone manufacturer after a user signs up to use a particular app. Those tokens can later be used to identify the person using the app, based on the information associated with the device on which the app was downloaded. Even turning off push notifications on your device doesn’t necessarily disable this feature, experts contend. […]

If finding new ways to catch pedophiles and terrorists doesn’t seem like the worst thing in the world, the Post article highlights the voices of critics who fear that this kind of mobile data could be used to track people who have not committed serious crimes — like political activists or women seeking abortions in states where the procedure has been restricted.

How the Pentagon Learned To Use Targeted Ads To Find Its Targets

In 2019, a government contractor and technologist named Mike Yeagley began making the rounds in Washington, DC. He had a blunt warning for anyone in the country’s national security establishment who would listen: The US government had a Grindr problem. A popular dating and hookup app, Grindr relied on the GPS capabilities of modern smartphones to connect potential partners in the same city, neighborhood, or even building. The app can show how far away a potential partner is in real time, down to the foot. But to Yeagley, Grindr was something else: one of the tens of thousands of carelessly designed mobile phone apps that leaked massive amounts of data into the opaque world of online advertisers. That data, Yeagley knew, was easily accessible by anyone with a little technical know-how. So Yeagley — a technology consultant then in his late forties who had worked in and around government projects nearly his entire career — made a PowerPoint presentation and went out to demonstrate precisely how that data was a serious national security risk.

As he would explain in a succession of bland government conference rooms, Yeagley was able to access the geolocation data on Grindr users through a hidden but ubiquitous entry point: the digital advertising exchanges that serve up the little digital banner ads along the top of Grindr and nearly every other ad-supported mobile app and website. This was possible because of the way online ad space is sold, through near-instantaneous auctions in a process called real-time bidding. Those auctions were rife with surveillance potential. You know that ad that seems to follow you around the internet? It’s tracking you in more ways than one. In some cases, it’s making your precise location available in near-real time to both advertisers and people like Mike Yeagley, who specialized in obtaining unique data sets for government agencies.
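
A real-time bid request is just a structured document broadcast to every participant in the auction. The sketch below is a heavily simplified illustration loosely modeled on the OpenRTB schema (which does carry `device.geo` coordinates and an `ifa` advertising identifier); the app name, ID, and coordinates here are invented. The point is that precise location rides along with every ad opportunity, and every bidder sees it, whether or not they win:

```python
# Simplified, illustrative sketch of the data exposed in a real-time bid
# request, loosely based on the OpenRTB schema. All values are invented.
bid_request = {
    "app": {"bundle": "com.example.dating.app"},  # hypothetical app identifier
    "device": {
        "ifa": "6D92078A-8246-4BA4-AE5B-76104861E7DC",  # advertising ID: resettable, rarely reset
        "geo": {
            "lat": 38.8719,   # GPS-derived latitude
            "lon": -77.0563,  # GPS-derived longitude
            "type": 1,        # in OpenRTB, 1 means GPS/location services
        },
    },
}

# Every bidder receives this payload just for participating in the auction --
# which is how a data broker can harvest locations without buying a single ad.
print(bid_request["device"]["geo"])
```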

Working with Grindr data, Yeagley began drawing geofences — creating virtual boundaries in geographical data sets — around buildings belonging to government agencies that do national security work. That allowed Yeagley to see what phones were in certain buildings at certain times, and where they went afterwards. He was looking for phones belonging to Grindr users who spent their daytime hours at government office buildings. If the device spent most workdays at the Pentagon, the FBI headquarters, or the National Geospatial-Intelligence Agency building at Fort Belvoir, for example, there was a good chance its owner worked for one of those agencies. Then he started looking at the movement of those phones through the Grindr data. When they weren’t at their offices, where did they go? A small number of them had lingered at highway rest stops in the DC area at the same time and in proximity to other Grindr users — sometimes during the workday and sometimes while in transit between government facilities. For other Grindr users, he could infer where they lived, see where they traveled, even guess at whom they were dating.
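
Stripped of the tradecraft, the geofencing described above reduces to a point-in-radius test applied to a stream of location pings. A minimal sketch, with coordinates and radius invented for illustration:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def inside_geofence(ping, center, radius_m):
    """True if a (lat, lon) ping falls within radius_m of the fence center."""
    return haversine_m(*ping, *center) <= radius_m

# Hypothetical 500 m fence around a large office complex
fence_center = (38.8719, -77.0563)
print(inside_geofence((38.8721, -77.0560), fence_center, 500))  # a ping just inside
```

Run against millions of timestamped pings, a filter this simple is enough to sort devices into "spends workdays inside the fence" and "doesn't," which is the first step of the analysis Yeagley demonstrated.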

Intelligence agencies have a long and unfortunate history of trying to root out LGBTQ Americans from their workforce, but this wasn’t Yeagley’s intent. He didn’t want anyone to get in trouble. No disciplinary actions were taken against any employee of the federal government based on Yeagley’s presentation. His aim was to show that buried in the seemingly innocuous technical data that comes off every cell phone in the world is a rich story — one that people might prefer to keep quiet. Or at the very least, not broadcast to the whole world. And that each of these intelligence and national security agencies had employees who were recklessly, if obliviously, broadcasting intimate details of their lives to anyone who knew where to look. As Yeagley showed, all that information was available for sale, for cheap. And it wasn’t just Grindr, but rather any app that had access to a user’s precise location — other dating apps, weather apps, games. Yeagley chose Grindr because it happened to generate a particularly rich set of data and its user base might be uniquely vulnerable.
The report goes into great detail about how intelligence and data analysis techniques, notably through a program called Locomotive developed by PlanetRisk, enabled the tracking of mobile devices associated with Russian President Vladimir Putin’s entourage. By analyzing commercial adtech data, including precise geolocation information collected from mobile advertising bid requests, analysts were able to monitor the movements of phones that frequently accompanied Putin, indicating the locations and movements of his security personnel, aides, and support staff.

This capability underscored the surveillance potential of commercially available data, providing insights into the activities and security arrangements of high-profile individuals without directly compromising their personal devices.

How AI is Taking Water From the Desert

Microsoft built two datacenters west of Phoenix, with plans for seven more (serving, among other companies, OpenAI). “Microsoft has been adding data centers at a stupendous rate, spending more than $10 billion on cloud-computing capacity in every quarter of late,” writes the Atlantic. “One semiconductor analyst called this ‘the largest infrastructure buildout that humanity has ever seen.’”

But is this part of a concerning trend?
Microsoft plans to absorb its excess heat with a steady flow of air and, as needed, evaporated drinking water. Use of the latter is projected to reach more than 50 million gallons every year. That might be a burden in the best of times. As of 2023, it seemed absurd. Phoenix had just endured its hottest summer ever, with 55 days of temperatures above 110 degrees. The weather strained electrical grids and compounded the effects of the worst drought the region has faced in more than a millennium. The Colorado River, which provides drinking water and hydropower throughout the region, has been dwindling. Farmers have already had to fallow fields, and a community on the eastern outskirts of Phoenix went without tap water for most of the year… [T]here were dozens of other facilities I could visit in the area, including those run by Apple, Amazon, Meta, and, soon, Google. Not too far from California, and with plenty of cheap land, Greater Phoenix is among the fastest-growing hubs in the U.S. for data centers….

Microsoft, the biggest tech firm on the planet, has made ambitious plans to tackle climate change. In 2020, it pledged to be carbon-negative (removing more carbon than it emits each year) and water-positive (replenishing more clean water than it consumes) by the end of the decade. But the company also made an all-encompassing commitment to OpenAI, the most important maker of large-scale AI models. In so doing, it helped kick off a global race to build and deploy one of the world’s most resource-intensive digital technologies. Microsoft operates more than 300 data centers around the world, and in 2021 declared itself “on pace to build between 50 and 100 new datacenters each year for the foreseeable future….”

Researchers at UC Riverside estimated last year… that global AI demand could cause data centers to suck up 1.1 trillion to 1.7 trillion gallons of freshwater by 2027. A separate study from a university in the Netherlands, this one peer-reviewed, found that AI servers’ electricity demand could grow, over the same period, to be on the order of 100 terawatt hours per year, about as much as the entire annual consumption of Argentina or Sweden… [T]ensions over data centers’ water use are cropping up not just in Arizona but also in Oregon, Uruguay, and England, among other places in the world.

The article points out that Microsoft “is transitioning some data centers, including those in Arizona, to designs that use less or no water, cooling themselves instead with giant fans.” And an analysis (commissioned by Microsoft) on the impact of one building said it would use about 56 million gallons of drinking water each year, equivalent to the amount used by 670 families, according to the article. “In other words, a campus of servers pumping out ChatGPT replies from the Arizona desert is not about to make anyone go thirsty.”

Public trust in AI is sinking across the board

Globally, trust in AI companies has dropped to 53%, down from 61% five years ago. In the U.S., trust has dropped 15 percentage points (from 50% to 35%) over the same period. Trust in AI is low across political lines: Democrats’ trust in AI companies stands at 38%, independents’ at 25% and Republicans’ at 24%. Tech is losing its lead as the most trusted sector. Eight years ago, technology was the most trusted industry in 90% of the countries Edelman studies. Today, it is the most trusted in only half of them.

People in developing countries are more likely to embrace AI than those in developed ones. Respondents in France, Canada, Ireland, UK, U.S., Germany, Australia, the Netherlands and Sweden reject the growing use of AI by a three-to-one margin, Edelman said. By contrast, acceptance outpaces resistance by a wide margin in developing markets such as Saudi Arabia, India, China, Kenya, Nigeria and Thailand.

Modern Workplace Tech Linked To Lower Employee Well-Being, Study Finds

According to a new study from the Institute for the Future of Work, contemporary technology often has a negative impact on workers’ quality of life. The think tank surveyed over 6,000 people to learn how four categories of workplace technologies affected their wellbeing. TechSpot reports the findings:

The study found that increased exposure to three of the categories tended to worsen workers’ mental state and health. The three areas that negatively impact people most are wearable and remote sensing technologies, which covers CCTV cameras and wearable trackers; robotics, consisting of automated machines, self-driving vehicles, and other equipment; and, unsurprisingly, technologies relating to AI and ML, which includes everything from decision management to biometrics. Only one of the categories was found to be beneficial to employees, and it’s one that has been around for decades: information and communication technology (ICT) such as laptops, tablets, phones, and real-time messaging tools.

Undersea Cable Damage Causes Internet Outages Across Africa

The West Africa Cable System, MainOne and ACE sea cables — arteries for telecommunications data — were all affected on Thursday, triggering outages and connectivity issues for mobile operators and internet service providers, according to data from internet analysis firms including NetBlocks, Kentik and Cloudflare. The cause of the cable faults has not yet been determined.

Data show a major disruption to connectivity in eight West African countries, with Ivory Coast, Liberia and Benin being the most affected, NetBlocks, an internet watchdog, said in a post on X. Ghana, Nigeria, and Cameroon are among other countries impacted. Several companies have also reported service disruptions in South Africa. “This is a devastating blow to internet connectivity along the west coast of Africa, which will be operating in a degraded state for weeks to come,” said Doug Madory, director of internet analysis firm Kentik. The cable faults off the Ivory Coast come less than a month after three telecommunications cables were severed in the Red Sea, highlighting the vulnerability of critical communications infrastructure.

Amazon Tells Warehouse Workers To Close Their Eyes and Think Happy Thoughts

Amazon is telling workers to close their eyes and dream of being somewhere else while they’re standing in a warehouse.

A worker in one of Amazon’s fulfillment centers, who we’ve granted anonymity, sent 404 Media a photo they took of a screen imploring them to try “savoring” the idea of something that makes them happy — as in, not being at work, surrounded by robots and packages. “Savoring,” the screen says, in a black font over a green block of color. “Close your eyes and think about something that makes you happy.” Under that text — which I can’t emphasize enough: it looks like something a 6th grader would make in PowerPoint — there’s a bunch of white space, and a stock illustration of a faceless person in an Amazon vest. He’s being urged on by an anthropomorphic stack of Amazon packages with wheels and arms. There’s also a countdown timer that says “repeat until timer ends.” In the image we saw, it said 10 seconds.

CIA covert influence operation against China

Donald Trump authorized the Central Intelligence Agency to launch a clandestine campaign on Chinese social media aimed at turning public opinion in China against its government, according to former U.S. officials with direct knowledge of the highly classified operation.

Three former officials told Reuters that the CIA created a small team of operatives who used bogus internet identities to spread negative narratives about Xi Jinping’s government while leaking disparaging intelligence to overseas news outlets. The effort, which began in 2019, has not been previously reported.

Although the U.S. officials declined to provide specific details of these operations, they said the disparaging narratives were based in fact despite being secretly released by intelligence operatives under false cover. The efforts within China were intended to foment paranoia among top leaders there, forcing its government to expend resources chasing intrusions into Beijing’s tightly controlled internet, two former officials said. “We wanted them chasing ghosts,” one of these former officials said.

Reuters was unable to determine the impact of the secret operations or whether the administration of President Joe Biden has maintained the CIA program.

Covert messaging allows the United States to implant ideas in countries where censorship might prevent that information from coming to light, or in areas where audiences wouldn’t give much credence to U.S. government statements, said Loch Johnson, a University of Georgia political scientist who studies the use of such tactics.

Covert propaganda campaigns were common during the Cold War, when the CIA planted 80 to 90 articles a day in an effort to undermine the Soviet Union, Johnson said. In the 1950s, for example, the CIA created an astrological magazine in East Germany to publish foreboding predictions about communist leaders, according to declassified records.

The covert propaganda campaign against Beijing could backfire, said Heer, the former CIA analyst. China could use evidence of a CIA influence program to bolster its decades-old accusations of shadowy Western subversion, helping Beijing “proselytize” in a developing world already deeply suspicious of Washington.

OpenAI’s Chatbot Store is Filling Up With Spam

When OpenAI CEO Sam Altman announced GPTs, custom chatbots powered by OpenAI’s generative AI models, onstage at the company’s first-ever developer conference in November, he described them as a way to “accomplish all sorts of tasks” — from programming to learning about esoteric scientific subjects to getting workout pointers. “Because [GPTs] combine instructions, expanded knowledge and actions, they can be more helpful to you,” Altman said. “You can build a GPT … for almost anything.” He wasn’t kidding about the anything part.

TechCrunch found that the GPT Store, OpenAI’s official marketplace for GPTs, is flooded with bizarre, potentially copyright-infringing GPTs that suggest a light touch on OpenAI’s part where moderation is concerned. A cursory search pulls up GPTs that purport to generate art in the style of Disney and Marvel properties, serve as little more than funnels to third-party paid services, and advertise themselves as being able to bypass AI content detection tools such as Turnitin and Copyleaks.

Academics warn of facial recognition technology’s ‘chilling effects’ and potential social harm

Concerns about the potential use of facial recognition technology (FRT) by An Garda Síochána have been raised by a number of specialist academics who warned of its “chilling effects” and potential social harms.

An Oireachtas committee hearing expert insights on the technology was also told of likely legal challenges should its use be adopted in the context of current draft legislation, and of the importance of judicial, rather than police, oversight.

“If Ireland goes ahead with this technology it’s just a matter of time before we become a headline,” said Dr Abeba Birhane of the School of Computer Science and Statistics at Trinity College Dublin, who warned of its tendency to err along minority group lines, a common criticism of FRT systems.

Dr Daragh Murray, senior lecturer at the School of Law in Queen Mary University of London, sought to highlight the potential “chilling effects”.

“Facial recognition represents a step-change in police surveillance capability,” he said, telling the committee he advocated for a moratorium on its use.

“Both live and retrospective facial recognition makes possible the ability to monitor, track and profile large amounts of the population with significant private life implications.”

The extent of proposed FRT use by An Garda Síochána was played down by its chief information officer Andrew O’Sullivan who told the committee it would be retrospective, and with human involvement, rather than autonomous, machine-led results in real time.

“We have absolutely no intention of doing real time facial recognition; we also have no time at all for autonomous machine decision making,” he said. “Every decision [during investigations] will be made by a person.”

The Rise of Techno-authoritarianism

In the behavior of tech companies, the Atlantic’s executive editor warns us about “a clear and coherent ideology that is seldom called out for what it is: authoritarian technocracy. As the most powerful companies in Silicon Valley have matured, this ideology has only grown stronger, more self-righteous, more delusional, and — in the face of rising criticism — more aggrieved.”

The new technocrats are ostentatious in their use of language that appeals to Enlightenment values — reason, progress, freedom — but in fact they are leading an antidemocratic, illiberal movement. Many of them profess unconditional support for free speech, but are vindictive toward those who say things that do not flatter them. They tend to hold eccentric beliefs… above all, that their power should be unconstrained. The systems they’ve built or are building — to rewire communications, remake human social networks, insinuate artificial intelligence into daily life, and more — impose these beliefs on the population, which is neither consulted nor, usually, meaningfully informed. All this, and they still attempt to perpetuate the absurd myth that they are the swashbuckling underdogs.

The article calls out Marc Andreessen’s Techno-Optimist Manifesto for saying “We believe in adventure… rebelling against the status quo, mapping uncharted territory, conquering dragons, and bringing home the spoils for our community…” (The Atlantic concludes Andreessen’s position “serves only to absolve him and the other Silicon Valley giants of any moral or civic duty to do anything but make new things that will enrich them, without consideration of the social costs, or of history.”)

The article notes that Andreessen “also identifies a list of enemies and ‘zombie ideas’ that he calls upon his followers to defeat, among them ‘institutions’ and ‘tradition.'” But the Atlantic makes a broader critique not just of Andreessen but of other Silicon Valley elites. “The world that they have brought into being over the past two decades is unquestionably a world of reckless social engineering, without consequence for its architects, who foist their own abstract theories and luxury beliefs on all of us…”

None of this happens without the underlying technocratic philosophy of inevitability — that is, the idea that if you can build something new, you must. “In a properly functioning world, I think this should be a project of governments,” [Sam] Altman told my colleague Ross Andersen last year, referring to OpenAI’s attempts to develop artificial general intelligence. But Altman was going to keep building it himself anyway. Or, as Zuckerberg put it to The New Yorker many years ago: “Isn’t it, like, inevitable that there would be a huge social network of people? … If we didn’t do this someone else would have done it.”

The article includes this damning chat log from a 2004 conversation Zuckerberg had with a friend:

Zuckerberg: If you ever need info about anyone at Harvard.
Zuckerberg: Just ask.
Zuckerberg: I have over 4,000 emails, pictures, addresses, SNS
Friend: What? How’d you manage that one?
Zuckerberg: People just submitted it.
Zuckerberg: I don’t know why.
Zuckerberg: They “trust me”
Zuckerberg: Dumb fucks.

But the article also reminds us that in Facebook’s early days, “Zuckerberg listed ‘revolutions’ among his interests.”

The main dangers of authoritarian technocracy are not at this point political, at least not in the traditional sense. Still, a select few already have authoritarian control, more or less, to establish the digital world’s rules and cultural norms, which can be as potent as political power…

[I]n recent years, it has become clear that regulation is needed, not least because the rise of technocracy proves that Silicon Valley’s leaders simply will not act in the public’s best interest. Much should be done to protect children from the hazards of social media, and to break up monopolies and oligopolies that damage society, and more. At the same time, I believe that regulation alone will not be enough to meaningfully address the cultural rot that the new technocrats are spreading…. We do not have to live in the world the new technocrats are designing for us. We do not have to acquiesce to their growing project of dehumanization and data mining. Each of us has agency.

No more “build it because we can.” No more algorithmic feedbags. No more infrastructure designed to make the people less powerful and the powerful more controlling. Every day we vote with our attention; it is precious, and desperately wanted by those who will use it against us for their own profit and political goals. Don’t let them.

The article specifically recommends “challenging existing norms about the use of apps and YouTube in classrooms, the ubiquity of smartphones in adolescent hands, and widespread disregard for individual privacy. People who believe that we all deserve better will need to step up to lead such efforts.”
“Universities should reclaim their proper standing as leaders in developing world-changing technologies for the good of humankind. (Harvard, Stanford, and MIT could invest in creating a consortium for such an effort — their endowments are worth roughly $110 billion combined.)”


Over 2 percent of the US’s electricity generation now goes to Bitcoin

While its analysis is preliminary, the Energy Information Administration (EIA) estimates that large-scale cryptocurrency operations are now consuming over 2 percent of the US’s electricity. That’s roughly the equivalent of having added an additional state to the grid over just the last three years.
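A rough back-of-the-envelope check shows why the EIA figure translates into the “additional state” framing. The generation total below (~4,100 TWh/year) is an approximate public figure, not taken from the article, so treat this as an illustrative sketch rather than the EIA’s own calculation:

```python
# Sanity check: what does "over 2 percent of US electricity generation"
# amount to in absolute terms?
US_ANNUAL_GENERATION_TWH = 4_100   # approximate total US generation, TWh/year (assumed)
BITCOIN_SHARE = 0.02               # "over 2 percent," per the EIA estimate

bitcoin_twh = US_ANNUAL_GENERATION_TWH * BITCOIN_SHARE
print(f"Estimated Bitcoin draw: ~{bitcoin_twh:.0f} TWh/year")

# For scale: many individual US states consume on the order of 50-100
# TWh/year, so an 80-ish TWh draw is plausibly "an additional state."
```

The point of the comparison is simply that 2 percent of a roughly 4,000 TWh grid is not a rounding error; it is state-sized demand.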


Almost Half of British Teens Feel Addicted To Social Media, Study Says

The latest research, by Dr Amy Orben’s team at the University of Cambridge, used data from the Millennium Cohort Study, which is tracking the lives of about 19,000 people born in 2000-2002 across England, Scotland, Wales and Northern Ireland. When the cohort were aged 16-18 they were asked, for the first time, about social media use. Of the 7,000 people who responded, 48% said they agreed or strongly agreed with the statement “I think I am addicted to social media.” A higher proportion of girls (57%) agreed compared to boys (37%), according to the data shared with the Guardian.

Scientists said this did not mean that these people are actually suffering from a clinical addiction, but that expressing a perceived lack of control suggests a problematic relationship. “We’re not saying the people who say they feel addicted are addicted,” said Georgia Turner, a graduate student leading the analysis. “Self-perceived social media addiction is not [necessarily] the same as drug addiction. But it’s not a nice feeling to feel you don’t have agency over your own behavior. It’s quite striking that so many people feel like that, and it can’t be that good.”

“Social media research has largely assumed that [so-called] social media addiction is going to follow the same framework as drug addiction,” said Turner. Orben’s team and others argue that this is likely to be overly simplistic and are investigating whether the teenagers cluster into groups whose behavior can be predicted by other personality traits. It could be that, for some, their relationship is akin to a behavioral addiction, but for others their use could be driven by compulsive checking; others may be relying on it to cope with negative life experiences, and others may simply be responding to negative social perceptions about “wasting time” on social media.


Robot Crushes Man To Death After Misidentifying Him As a Box

A robot in a South Korean distribution center crushed a man to death after the machine apparently failed to differentiate him from the boxes of produce it was handling. The Guardian reports:
The man, a robotics company worker in his 40s, was inspecting the robot’s sensor operations at a distribution centre for agricultural produce in South Gyeongsang province. The industrial robot, which was lifting boxes filled with bell peppers and placing them on a pallet, appears to have malfunctioned and identified the man as a box, Yonhap reported, citing the police. The robotic arm pushed the man’s upper body down against the conveyor belt, crushing his face and chest, according to Yonhap. He was transferred to the hospital but died later, the report said.
The BBC notes that the man was “checking the robot’s sensor operations ahead of its test run […] scheduled for November 8.” It was originally planned for November 6, “but was pushed back by two days due to problems with the robot’s sensor,” the report adds.


Court Rules Automakers Can Record and Intercept Owner Text Messages

A federal judge on Tuesday declined to revive a class action lawsuit alleging four auto manufacturers had violated Washington state’s privacy laws by using vehicles’ on-board infotainment systems to record and intercept customers’ private text messages and mobile phone call logs. The Seattle-based appellate judge ruled that the practice does not meet the threshold for an illegal privacy violation under state law, handing a big win to automakers Honda, Toyota, Volkswagen and General Motors, which are defendants in five related class action suits focused on the issue. One of those cases, against Ford, had been dismissed on appeal previously.

The plaintiffs in the four live cases had appealed a prior judge’s dismissal. But the appellate judge ruled Tuesday that the interception and recording of mobile phone activity did not meet the Washington Privacy Act’s standard that a plaintiff must prove that “his or her business, his or her person, or his or her reputation” has been threatened. In an example of the issues at stake, plaintiffs in one of the five cases filed suit against Honda in 2021, arguing that beginning in at least 2014 infotainment systems in the company’s vehicles began downloading and storing a copy of all text messages on smartphones when they were connected to the system. An Annapolis, Maryland-based company, Berla Corporation, provides the technology to some car manufacturers but does not offer it to the general public, the lawsuit said. Once messages are downloaded, Berla’s software makes it impossible for vehicle owners to access their communications and call logs but does provide law enforcement with access, the lawsuit said.
