Resources

AI Will Suck Up 500% More Power in UK in 10 Years, Grid CEO Says

Electricity demand from UK data centers will jump sixfold over the next 10 years as a boom in AI requires increased computing power, according to the head of National Grid.

That will ramp up pressure on the country’s electricity network, which must move vast quantities of renewable energy from as far away as Scottish wind farms to data centers around London. And it’s a grid already under strain from the accelerating electrification of home heating, transportation and industries.

“Future growth in foundational technologies like artificial intelligence and quantum computing will mean larger-scale, energy-intensive computing infrastructure,” National Grid Chief Executive Officer John Pettigrew said Tuesday at a conference in Oxford. It’s an outlook replicated in many other countries, which are grappling with how to fund the massive spending required to expand capacity. Global electricity demand from data centers, AI and cryptocurrencies may more than double over the next three years, according to International Energy Agency forecasts.

Screen Time Robs Average Toddler of Hearing 1,000 Words Spoken By Adult a Day, Study Finds

The average toddler is missing out on hearing more than 1,000 words spoken by an adult each day due to screen time, setting back their language skills, a first-of-its-kind study has found. The research, published on Tuesday in the Journal of the American Medical Association (JAMA) Pediatrics, tracked 220 Australian families over two years to measure the relationship between family screen use and children’s language environment. Families recorded all the audio around their child using advanced speech recognition technology over a 16-hour period on an average day at home. They repeated this process every six months between the ages of 12 and 36 months. The lead researcher, Dr Mary Brushe from the Telethon Kids Institute, said: “The technology we use is essentially like a Fitbit, but instead of counting the number of steps, this device counts the number of words spoken by, to and around the child.” The device also picked up electronic noise, which the researchers analyzed to calculate screen time.

The researchers found young children’s exposure to screens including TVs and phones was interfering with their language opportunities, with the association most pronounced at three years of age. For every extra minute of screen time, the three-year-olds in the study were hearing seven fewer words, speaking five fewer words themselves and engaging in one fewer conversation. The average three-year-old in the study was exposed to two hours and 52 minutes of screen time a day. Researchers estimated this led to those children being exposed to 1,139 fewer adult words, 843 fewer child words and 194 fewer conversations. Because the study couldn’t capture parents’ silent phone use, including reading emails, texting or quietly scrolling through websites or social media, Brushe said they might have underestimated how much screen usage is affecting children.

A language-rich home environment was critical in supporting infants and toddlers’ language development, Brushe said. While some educational children’s shows were designed to help children’s language skills, very young kids in the study’s age group could struggle to translate what they saw on television into their own lives, she said. The study did not differentiate between high- and low-quality screen content.

The FBI Is Using Push Notifications To Track Criminals

The Post did a little digging into court records and found evidence of at least 130 search warrants filed by the feds for push notification data in cases spanning 14 states. In those cases, FBI officials asked tech companies like Google, Apple, and Facebook to fork over data related to a suspect’s mobile notifications, then used the data to implicate the suspect in criminal behavior linked to a particular app, even though many of those apps were supposedly anonymous communication platforms, like Wickr.

How exactly is this possible? Push notifications, which are provided by a mobile operating system provider, include embedded metadata that can be examined to understand the use of the mobile apps on a particular phone. Apps come laced with a quiet identifier, a “push token,” which is stored on the corporate servers of a company like Apple or another phone manufacturer after a user signs up to use a particular app. Those tokens can later be used to identify the person using the app, based on the information associated with the device on which the app was downloaded. Even turning off push notifications on your device doesn’t necessarily disable this feature, experts contend. […]
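The mechanics described above can be made concrete with a small sketch. This is purely illustrative, assuming a hypothetical registry schema: the real Apple and Google token stores are internal and not public in this form, and every name and value below is invented. The point is that a push token looks meaningless on its own, but a server-side join turns it back into a person.

```python
# Conceptual sketch only. All records and field names are hypothetical;
# real provider schemas (Apple APNs, Google FCM) are not public in this form.

# The OS provider's servers keep a mapping from push token to the device
# and account that registered for notifications.
token_registry = {
    "a1b2c3d4": {"device_id": "device-123", "account_email": "user@example.com"},
}

# A warrant return (or an app's backend) supplies the tokens that a given
# app instance used when delivering notifications.
app_tokens = ["a1b2c3d4"]

def resolve(tokens, registry):
    """Join app-side push tokens against provider-side account records."""
    return [registry[t] for t in tokens if t in registry]

# The token alone reveals nothing; the join identifies the user.
matches = resolve(app_tokens, token_registry)
print(matches)
```

This is why "anonymous" messaging apps offer little protection here: the anonymity lives in the app, while the token-to-account mapping lives with the OS provider.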

If finding new ways to catch pedophiles and terrorists doesn’t seem like the worst thing in the world, the Post article highlights the voices of critics who fear that this kind of mobile data could be used to track people who have not committed serious crimes — like political activists or women seeking abortions in states where the procedure has been restricted.

How the Pentagon Learned To Use Targeted Ads To Find Its Targets

In 2019, a government contractor and technologist named Mike Yeagley began making the rounds in Washington, DC. He had a blunt warning for anyone in the country’s national security establishment who would listen: The US government had a Grindr problem. A popular dating and hookup app, Grindr relied on the GPS capabilities of modern smartphones to connect potential partners in the same city, neighborhood, or even building. The app can show how far away a potential partner is in real time, down to the foot. But to Yeagley, Grindr was something else: one of the tens of thousands of carelessly designed mobile phone apps that leaked massive amounts of data into the opaque world of online advertisers. That data, Yeagley knew, was easily accessible by anyone with a little technical know-how. So Yeagley — a technology consultant then in his late forties who had worked in and around government projects nearly his entire career — made a PowerPoint presentation and went out to demonstrate precisely how that data was a serious national security risk.

As he would explain in a succession of bland government conference rooms, Yeagley was able to access the geolocation data on Grindr users through a hidden but ubiquitous entry point: the digital advertising exchanges that serve up the little digital banner ads along the top of Grindr and nearly every other ad-supported mobile app and website. This was possible because of the way online ad space is sold, through near-instantaneous auctions in a process called real-time bidding. Those auctions were rife with surveillance potential. You know that ad that seems to follow you around the internet? It’s tracking you in more ways than one. In some cases, it’s making your precise location available in near-real time to both advertisers and people like Mike Yeagley, who specialized in obtaining unique data sets for government agencies.
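The real-time bidding leak described above is easy to picture from the shape of a bid request. The sketch below is a trimmed, hypothetical example: the field names follow the public OpenRTB 2.x specification (`app.bundle`, `device.ifa`, `device.geo`), but the values are invented, and real requests carry far more fields.

```python
import json

# Trimmed, hypothetical OpenRTB-style bid request. Field names follow the
# public OpenRTB 2.x spec; all values are invented for illustration.
bid_request = {
    "id": "auction-0001",
    "app": {"bundle": "com.example.dating"},  # which app is showing the ad
    "device": {
        "ifa": "38400000-8cf0-11bd-b23e-10b96e40000d",  # advertising ID
        "geo": {"lat": 38.8977, "lon": -77.0365, "type": 1},  # GPS-derived
    },
}

# Every bidder in the auction receives this payload -- including bidders
# who never win the impression, which is what makes passive collection
# of precise locations possible at scale.
payload = json.dumps(bid_request)
print(payload)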

Working with Grindr data, Yeagley began drawing geofences — creating virtual boundaries in geographical data sets — around buildings belonging to government agencies that do national security work. That allowed Yeagley to see what phones were in certain buildings at certain times, and where they went afterwards. He was looking for phones belonging to Grindr users who spent their daytime hours at government office buildings. If the device spent most workdays at the Pentagon, the FBI headquarters, or the National Geospatial-Intelligence Agency building at Fort Belvoir, for example, there was a good chance its owner worked for one of those agencies. Then he started looking at the movement of those phones through the Grindr data. When they weren’t at their offices, where did they go? A small number of them had lingered at highway rest stops in the DC area at the same time and in proximity to other Grindr users — sometimes during the workday and sometimes while in transit between government facilities. For other Grindr users, he could infer where they lived, see where they traveled, even guess at whom they were dating.
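The geofencing step Yeagley performed reduces to a simple membership test: define a boundary around a building of interest and check which observed pings fall inside it. The sketch below uses a rectangular fence for simplicity; the coordinates and device names are hypothetical, and real analysis would use polygon fences and repeated observations over time.

```python
# Minimal sketch of the geofencing step: a rectangular boundary around a
# building, tested against observed ad-data pings. All values hypothetical.

PENTAGON_FENCE = {  # rough bounding box, illustrative only
    "lat_min": 38.868, "lat_max": 38.874,
    "lon_min": -77.059, "lon_max": -77.052,
}

def inside(fence, lat, lon):
    """True if the (lat, lon) point falls within the rectangular fence."""
    return (fence["lat_min"] <= lat <= fence["lat_max"]
            and fence["lon_min"] <= lon <= fence["lon_max"])

pings = [
    ("device-A", 38.871, -77.056),  # inside the fence
    ("device-B", 38.900, -77.030),  # elsewhere in DC
]

# Devices repeatedly seen inside the fence on workdays likely belong to staff.
workday_devices = [d for d, lat, lon in pings if inside(PENTAGON_FENCE, lat, lon)]
print(workday_devices)  # → ['device-A']
```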

Intelligence agencies have a long and unfortunate history of trying to root out LGBTQ Americans from their workforce, but this wasn’t Yeagley’s intent. He didn’t want anyone to get in trouble. No disciplinary actions were taken against any employee of the federal government based on Yeagley’s presentation. His aim was to show that buried in the seemingly innocuous technical data that comes off every cell phone in the world is a rich story — one that people might prefer to keep quiet. Or at the very least, not broadcast to the whole world. And that each of these intelligence and national security agencies had employees who were recklessly, if obliviously, broadcasting intimate details of their lives to anyone who knew where to look. As Yeagley showed, all that information was available for sale, for cheap. And it wasn’t just Grindr, but rather any app that had access to a user’s precise location — other dating apps, weather apps, games. Yeagley chose Grindr because it happened to generate a particularly rich set of data and its user base might be uniquely vulnerable.
The report goes into great detail about how intelligence and data analysis techniques, notably through a program called Locomotive developed by PlanetRisk, enabled the tracking of mobile devices associated with Russian President Vladimir Putin’s entourage. By analyzing commercial adtech data, including precise geolocation information collected from mobile advertising bid requests, analysts were able to monitor the movements of phones that frequently accompanied Putin, indicating the locations and movements of his security personnel, aides, and support staff.

This capability underscored the surveillance potential of commercially available data, providing insights into the activities and security arrangements of high-profile individuals without directly compromising their personal devices.

How AI is Taking Water From the Desert

Microsoft built two datacenters west of Phoenix, with plans for seven more (serving, among other companies, OpenAI). “Microsoft has been adding data centers at a stupendous rate, spending more than $10 billion on cloud-computing capacity in every quarter of late,” writes the Atlantic. “One semiconductor analyst called this ‘the largest infrastructure buildout that humanity has ever seen.’”

But is this part of a concerning trend?
Microsoft plans to absorb its excess heat with a steady flow of air and, as needed, evaporated drinking water. Use of the latter is projected to reach more than 50 million gallons every year. That might be a burden in the best of times. As of 2023, it seemed absurd. Phoenix had just endured its hottest summer ever, with 55 days of temperatures above 110 degrees. The weather strained electrical grids and compounded the effects of the worst drought the region has faced in more than a millennium. The Colorado River, which provides drinking water and hydropower throughout the region, has been dwindling. Farmers have already had to fallow fields, and a community on the eastern outskirts of Phoenix went without tap water for most of the year… [T]here were dozens of other facilities I could visit in the area, including those run by Apple, Amazon, Meta, and, soon, Google. Not too far from California, and with plenty of cheap land, Greater Phoenix is among the fastest-growing hubs in the U.S. for data centers….

Microsoft, the biggest tech firm on the planet, has made ambitious plans to tackle climate change. In 2020, it pledged to be carbon-negative (removing more carbon than it emits each year) and water-positive (replenishing more clean water than it consumes) by the end of the decade. But the company also made an all-encompassing commitment to OpenAI, the most important maker of large-scale AI models. In so doing, it helped kick off a global race to build and deploy one of the world’s most resource-intensive digital technologies. Microsoft operates more than 300 data centers around the world, and in 2021 declared itself “on pace to build between 50 and 100 new datacenters each year for the foreseeable future….”

Researchers at UC Riverside estimated last year… that global AI demand could cause data centers to suck up 1.1 trillion to 1.7 trillion gallons of freshwater by 2027. A separate study from a university in the Netherlands, this one peer-reviewed, found that AI servers’ electricity demand could grow, over the same period, to be on the order of 100 terawatt hours per year, about as much as the entire annual consumption of Argentina or Sweden… [T]ensions over data centers’ water use are cropping up not just in Arizona but also in Oregon, Uruguay, and England, among other places in the world.

The article points out that Microsoft “is transitioning some data centers, including those in Arizona, to designs that use less or no water, cooling themselves instead with giant fans.” And an analysis (commissioned by Microsoft) on the impact of one building said it would use about 56 million gallons of drinking water each year, equivalent to the amount used by 670 families, according to the article. “In other words, a campus of servers pumping out ChatGPT replies from the Arizona desert is not about to make anyone go thirsty.”

Public trust in AI is sinking across the board

Globally, trust in AI companies has dropped to 53%, down from 61% five years ago. In the U.S., trust has dropped 15 percentage points (from 50% to 35%) over the same period. Trust in AI is low across political lines: Democrats’ trust in AI companies stands at 38%, independents at 25% and Republicans at 24%. Tech is losing its lead as the most trusted sector. Eight years ago, technology was the most trusted industry in 90% of the countries Edelman studies. Today, it is the most trusted in only half of them.

People in developing countries are more likely to embrace AI than those in developed ones. Respondents in France, Canada, Ireland, UK, U.S., Germany, Australia, the Netherlands and Sweden reject the growing use of AI by a three-to-one margin, Edelman said. By contrast, acceptance outpaces resistance by a wide margin in developing markets such as Saudi Arabia, India, China, Kenya, Nigeria and Thailand.

Modern Workplace Tech Linked To Lower Employee Well-Being, Study Finds

According to a new study from the Institute for the Future of Work, contemporary technology often has a negative impact on workers’ quality of life. The think tank surveyed over 6,000 people to learn how four categories of workplace technologies affected their wellbeing. TechSpot reports the findings:

The study found that increased exposure to three of the categories tended to worsen workers’ mental state and health. The three areas that negatively impact people most are wearable and remote sensing technologies, which covers CCTV cameras and wearable trackers; robotics, consisting of automated machines, self-driving vehicles, and other equipment; and, unsurprisingly, technologies relating to AI and ML, which includes everything from decision management to biometrics. Only one of the categories was found to be beneficial to employees, and it’s one that has been around for decades: ICT tech such as laptops, tablets, phones, and real-time messaging tools.

Undersea Cable Damage Causes Internet Outages Across Africa

The West Africa Cable System, MainOne and ACE sea cables — arteries for telecommunications data — were all affected on Thursday, triggering outages and connectivity issues for mobile operators and internet service providers, according to data from internet analysis firms including NetBlocks, Kentik and Cloudflare. The cause of the cable faults has not yet been determined.

Data show a major disruption to connectivity in eight West African countries, with Ivory Coast, Liberia and Benin being the most affected, NetBlocks, an internet watchdog, said in a post on X. Ghana, Nigeria, and Cameroon are among other countries impacted. Several companies have also reported service disruptions in South Africa. “This is a devastating blow to internet connectivity along the west coast of Africa, which will be operating in a degraded state for weeks to come,” said Doug Madory, director of internet analysis firm Kentik. The cable faults off the Ivory Coast come less than a month after three telecommunications cables were severed in the Red Sea, highlighting the vulnerability of critical communications infrastructure.

Amazon Tells Warehouse Workers To Close Their Eyes and Think Happy Thoughts

Amazon is telling workers to close their eyes and dream of being somewhere else while they’re standing in a warehouse.

A worker in one of Amazon’s fulfillment centers, who we’ve granted anonymity, sent 404 Media a photo they took of a screen imploring them to try “savoring” the idea of something that makes them happy — as in, not being at work, surrounded by robots and packages. “Savoring,” the screen says, in a black font over a green block of color. “Close your eyes and think about something that makes you happy.” Under that text — which I can’t emphasize enough: it looks like something a 6th grader would make in PowerPoint — there’s a bunch of white space, and a stock illustration of a faceless person in an Amazon vest. He’s being urged on by an anthropomorphic stack of Amazon packages with wheels and arms. There’s also a countdown timer that says “repeat until timer ends.” In the image we saw, it said 10 seconds.

CIA covert influence operation against China

Donald Trump authorized the Central Intelligence Agency to launch a clandestine campaign on Chinese social media aimed at turning public opinion in China against its government, according to former U.S. officials with direct knowledge of the highly classified operation.

Three former officials told Reuters that the CIA created a small team of operatives who used bogus internet identities to spread negative narratives about Xi Jinping’s government while leaking disparaging intelligence to overseas news outlets. The effort, which began in 2019, has not been previously reported.

Although the U.S. officials declined to provide specific details of these operations, they said the disparaging narratives were based in fact despite being secretly released by intelligence operatives under false cover. The efforts within China were intended to foment paranoia among top leaders there, forcing its government to expend resources chasing intrusions into Beijing’s tightly controlled internet, two former officials said. “We wanted them chasing ghosts,” one of these former officials said.

Reuters was unable to determine the impact of the secret operations or whether the administration of President Joe Biden has maintained the CIA program.

Covert messaging allows the United States to implant ideas in countries where censorship might prevent that information from coming to light, or in areas where audiences wouldn’t give much credence to U.S. government statements, said Loch Johnson, a University of Georgia political scientist who studies the use of such tactics.

Covert propaganda campaigns were common during the Cold War, when the CIA planted 80 to 90 articles a day in an effort to undermine the Soviet Union, Johnson said. In the 1950s, for example, the CIA created an astrological magazine in East Germany to publish foreboding predictions about communist leaders, according to declassified records.

The covert propaganda campaign against Beijing could backfire, said Heer, a former CIA analyst. China could use evidence of a CIA influence program to bolster its decades-old accusations of shadowy Western subversion, helping Beijing “proselytize” in a developing world already deeply suspicious of Washington.

OpenAI’s Chatbot Store is Filling Up With Spam

When OpenAI CEO Sam Altman announced GPTs, custom chatbots powered by OpenAI’s generative AI models, onstage at the company’s first-ever developer conference in November, he described them as a way to “accomplish all sorts of tasks” — from programming to learning about esoteric scientific subjects to getting workout pointers. “Because [GPTs] combine instructions, expanded knowledge and actions, they can be more helpful to you,” Altman said. “You can build a GPT … for almost anything.” He wasn’t kidding about the anything part.

TechCrunch found that the GPT Store, OpenAI’s official marketplace for GPTs, is flooded with bizarre, potentially copyright-infringing GPTs that imply a light touch where it concerns OpenAI’s moderation efforts. A cursory search pulls up GPTs that purport to generate art in the style of Disney and Marvel properties, serve as little more than funnels to third-party paid services, and advertise themselves as being able to bypass AI content detection tools such as Turnitin and Copyleaks.

Academics warn of facial recognition technology’s ‘chilling effects’ and potential social harm

Concerns about the potential use of facial recognition technology (FRT) by An Garda Síochána have been raised by a number of specialist academics who warned of its “chilling effects” and potential social harms.

An Oireachtas committee hearing expert insights on the technology was also told of likely legal challenges should its use be adopted in the context of current draft legislation, and of the importance of judicial, rather than police, oversight.

“If Ireland goes ahead with this technology it’s just a matter of time before we become a headline,” said Dr Abeba Birhane of the School of Computer Science and Statistics at Trinity College Dublin, who warned of its tendency to err along minority group lines, a common criticism of FRT systems.

Dr Daragh Murray, senior lecturer at the School of Law in Queen Mary University of London, sought to highlight the potential “chilling effects”.

“Facial recognition represents a step-change in police surveillance capability,” he said, telling the committee he advocated for a moratorium on its use.

“Both live and retrospective facial recognition makes possible the ability to monitor, track and profile large amounts of the population with significant private life implications.”

The extent of proposed FRT use by An Garda Síochána was played down by its chief information officer, Andrew O’Sullivan, who told the committee its use would be retrospective and subject to human involvement, rather than producing autonomous, machine-led results in real time.

“We have absolutely no intention of doing real time facial recognition; we also have no time at all for autonomous machine decision making,” he said. “Every decision [during investigations] will be made by a person.”

The Rise of Techno-authoritarianism

In the behavior of tech companies, the Atlantic’s executive editor warns us about “a clear and coherent ideology that is seldom called out for what it is: authoritarian technocracy. As the most powerful companies in Silicon Valley have matured, this ideology has only grown stronger, more self-righteous, more delusional, and — in the face of rising criticism — more aggrieved.”

The new technocrats are ostentatious in their use of language that appeals to Enlightenment values — reason, progress, freedom — but in fact they are leading an antidemocratic, illiberal movement. Many of them profess unconditional support for free speech, but are vindictive toward those who say things that do not flatter them. They tend to hold eccentric beliefs…. above all, that their power should be unconstrained. The systems they’ve built or are building — to rewire communications, remake human social networks, insinuate artificial intelligence into daily life, and more — impose these beliefs on the population, which is neither consulted nor, usually, meaningfully informed. All this, and they still attempt to perpetuate the absurd myth that they are the swashbuckling underdogs.

The article calls out Marc Andreessen’s Techno-Optimist Manifesto for saying “We believe in adventure… rebelling against the status quo, mapping uncharted territory, conquering dragons, and bringing home the spoils for our community…” (The Atlantic concludes Andreessen’s position “serves only to absolve him and the other Silicon Valley giants of any moral or civic duty to do anything but make new things that will enrich them, without consideration of the social costs, or of history.”)

The article notes that Andreessen “also identifies a list of enemies and ‘zombie ideas’ that he calls upon his followers to defeat, among them ‘institutions’ and ‘tradition.'” But the Atlantic makes a broader critique not just of Andreessen but of other Silicon Valley elites. “The world that they have brought into being over the past two decades is unquestionably a world of reckless social engineering, without consequence for its architects, who foist their own abstract theories and luxury beliefs on all of us…”

None of this happens without the underlying technocratic philosophy of inevitability — that is, the idea that if you can build something new, you must. “In a properly functioning world, I think this should be a project of governments,” [Sam] Altman told my colleague Ross Andersen last year, referring to OpenAI’s attempts to develop artificial general intelligence. But Altman was going to keep building it himself anyway. Or, as Zuckerberg put it to The New Yorker many years ago: “Isn’t it, like, inevitable that there would be a huge social network of people? … If we didn’t do this someone else would have done it.”

The article includes this damning chat log from a 2004 conversation Zuckerberg had with a friend:

Zuckerberg: If you ever need info about anyone at Harvard.
Zuckerberg: Just ask.
Zuckerberg: I have over 4,000 emails, pictures, addresses, SNS
Friend: What? How’d you manage that one?
Zuckerberg: People just submitted it.
Zuckerberg: I don’t know why.
Zuckerberg: They “trust me”
Zuckerberg: Dumb fucks.

But the article also reminds us that in Facebook’s early days, “Zuckerberg listed ‘revolutions’ among his interests.”

The main dangers of authoritarian technocracy are not at this point political, at least not in the traditional sense. Still, a select few already have authoritarian control, more or less, to establish the digital world’s rules and cultural norms, which can be as potent as political power…

[I]n recent years, it has become clear that regulation is needed, not least because the rise of technocracy proves that Silicon Valley’s leaders simply will not act in the public’s best interest. Much should be done to protect children from the hazards of social media, and to break up monopolies and oligopolies that damage society, and more. At the same time, I believe that regulation alone will not be enough to meaningfully address the cultural rot that the new technocrats are spreading…. We do not have to live in the world the new technocrats are designing for us. We do not have to acquiesce to their growing project of dehumanization and data mining. Each of us has agency.

No more “build it because we can.” No more algorithmic feedbags. No more infrastructure designed to make the people less powerful and the powerful more controlling. Every day we vote with our attention; it is precious, and desperately wanted by those who will use it against us for their own profit and political goals. Don’t let them.

The article specifically recommends “challenging existing norms about the use of apps and YouTube in classrooms, the ubiquity of smartphones in adolescent hands, and widespread disregard for individual privacy. People who believe that we all deserve better will need to step up to lead such efforts.”
“Universities should reclaim their proper standing as leaders in developing world-changing technologies for the good of humankind. (Harvard, Stanford, and MIT could invest in creating a consortium for such an effort — their endowments are worth roughly $110 billion combined.)”

Over 2 percent of the US’s electricity generation now goes to Bitcoin

While its analysis is preliminary, the Energy Information Agency (EIA) estimates that large-scale cryptocurrency operations are now consuming over 2 percent of the US’s electricity. That’s roughly the equivalent of having added an additional state to the grid over just the last three years.
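A quick back-of-envelope calculation shows what that share means in absolute terms. The total-generation figure below is an assumption (roughly 4,200 TWh per year, the approximate recent level of US net generation), and the EIA’s own preliminary range was about 0.6% to 2.3%, so treat the result as an order-of-magnitude estimate.

```python
# Back-of-envelope for the EIA estimate above. The 4,200 TWh total is an
# assumed round figure for recent annual US net generation; the EIA's
# preliminary range for crypto's share was roughly 0.6% to 2.3%.
us_generation_twh = 4200
share = 0.02  # "over 2 percent"

bitcoin_twh = us_generation_twh * share
print(f"{bitcoin_twh:.0f} TWh/yr")  # on the order of a mid-sized state's annual usage
```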

Almost Half of British Teens Feel Addicted To Social Media, Study Says

The latest research, by Dr Amy Orben’s team at the University of Cambridge, used data from the Millennium Cohort study which is tracking the lives of about 19,000 people born in 2000-2002 across England, Scotland, Wales and Northern Ireland. When the cohort were aged 16-18 they were asked, for the first time, about social media use. Of the 7,000 people who responded, 48% said they agreed or strongly agreed with the statement “I think I am addicted to social media.” A higher proportion of girls (57%) agreed compared to boys (37%), according to the data shared with the Guardian.

Scientists said this did not mean that these people are actually suffering from a clinical addiction, but that expressing a perceived lack of control suggests a problematic relationship. “We’re not saying the people who say they feel addicted are addicted,” said Georgia Turner, a graduate student leading the analysis. “Self-perceived social media addiction is not [necessarily] the same as drug addiction. But it’s not a nice feeling to feel you don’t have agency over your own behavior. It’s quite striking that so many people feel like that and it can’t be that good.”

“Social media research has largely assumed that [so-called] social media addiction is going to follow the same framework as drug addiction,” said Turner. Orben’s team and others argue that this is likely to be overly simplistic and are investigating whether the teenagers cluster into groups whose behavior can be predicted by other personality traits. It could be that, for some, their relationship is akin to a behavioral addiction, but for others their use could be driven by compulsive checking, others may be relying on it to cope with negative life experiences, and others may simply be responding to negative social perceptions about “wasting time” on social media.

Robot Crushes Man To Death After Misidentifying Him As a Box

A robot in a South Korea distribution center crushed a man to death after the machine apparently failed to differentiate him from the boxes of produce it was handling. The Guardian reports:
The man, a robotics company worker in his 40s, was inspecting the robot’s sensor operations at a distribution centre for agricultural produce in South Gyeongsang province. The industrial robot, which was lifting boxes filled with bell peppers and placing them on a pallet, appears to have malfunctioned and identified the man as a box, Yonhap reported, citing the police. The robotic arm pushed the man’s upper body down against the conveyor belt, crushing his face and chest, according to Yonhap. He was transferred to the hospital but died later, the report said.
The BBC notes that the man was “checking the robot’s sensor operations ahead of its test run […] scheduled for November 8.” It was originally planned for November 6th, “but was pushed back by two days due to problems with the robot’s sensor,” the report adds.

Court Rules Automakers Can Record and Intercept Owner Text Messages

A federal judge on Tuesday refused to bring back a class action lawsuit alleging four auto manufacturers had violated Washington state’s privacy laws by using vehicles’ on-board infotainment systems to record and intercept customers’ private text messages and mobile phone call logs. The Seattle-based appellate judge ruled that the practice does not meet the threshold for an illegal privacy violation under state law, handing a big win to automakers Honda, Toyota, Volkswagen and General Motors, which are defendants in five related class action suits focused on the issue. One of those cases, against Ford, had been dismissed on appeal previously.

The plaintiffs in the four live cases had appealed a prior judge’s dismissal. But the appellate judge ruled Tuesday that the interception and recording of mobile phone activity did not meet the Washington Privacy Act’s standard that a plaintiff must prove that “his or her business, his or her person, or his or her reputation” has been threatened. In an example of the issues at stake, plaintiffs in one of the five cases filed suit against Honda in 2021, arguing that beginning in at least 2014 infotainment systems in the company’s vehicles began downloading and storing a copy of all text messages on smartphones when they were connected to the system. An Annapolis, Maryland-based company, Berla Corporation, provides the technology to some car manufacturers but does not offer it to the general public, the lawsuit said. Once messages are downloaded, Berla’s software makes it impossible for vehicle owners to access their communications and call logs but does provide law enforcement with access, the lawsuit said.

64

Generative AI Already Taking White Collar Jobs and Wages in Online Freelancing World

In an ingenious study published this summer, US researchers showed that within a few months of the launch of ChatGPT, copywriters and graphic designers on major online freelancing platforms saw a significant drop in the number of jobs they got, and even steeper declines in earnings. This suggested not only that generative AI was taking their work, but also that it was devaluing the work they still carried out.

Most strikingly, the study found that freelancers who previously had the highest earnings and completed the most jobs were no less likely to see their employment and earnings decline than other workers. If anything, they had worse outcomes. In other words, being more skilled was no shield against loss of work or earnings. But the online freelancing market covers a very particular form of white-collar work and of labour market. What about looking higher up the ranks of the knowledge worker class? For that, we can turn to a recent, fascinating Harvard Business School study, which monitored the impact of giving GPT-4, OpenAI’s latest and most advanced offering, to employees at Boston Consulting Group.

30

Is India Setting a ‘Global Standard’ for Online Censorship of Social Media?

With 1.4 billion people, India is the second most-populous country in the world.

But a new article in the Washington Post alleges that India has “set a global standard for online censorship.” For years, a committee of executives from U.S. technology companies and Indian officials convened every two weeks in a government office to negotiate what could — and could not — be said on Twitter, Facebook and YouTube. At the “69A meetings,” as the secretive gatherings were informally called, officials from India’s information, technology, security and intelligence agencies presented social media posts they wanted removed, citing threats to India’s sovereignty and national security, executives and officials who were present recalled. The tech representatives sometimes pushed back in the name of free speech…

But two years ago, these interactions took a fateful turn. Where officials had once asked for a handful of tweets to be removed at each meeting, they now insisted that entire accounts be taken down, and numbers were running in the hundreds. Executives who refused the government’s demands could now be jailed, their companies expelled from the Indian market. New regulations had been adopted that year to hold tech employees in India criminally liable for failing to comply with takedown requests, a provision that executives referred to as a “hostage provision.” After authorities dispatched anti-terrorism police to Twitter’s New Delhi office, Twitter whisked its top India executive out of the country, fearing his arrest, former company employees recounted.

Indian officials say they have accomplished something long overdue: strengthening national laws to bring disobedient foreign companies to heel… Digital and human rights advocates warn that India has perfected the use of regulations to stifle online dissent and already inspired governments in countries as varied as Nigeria and Myanmar to craft similar legal frameworks, at times with near-identical language. India’s success in taming internet companies has set off “regulatory contagion” across the world, according to Prateek Waghre, a policy director at India’s Internet Freedom Foundation…

Despite the huge size of China’s market, companies like Twitter and Facebook were forced to steer clear of the country because Beijing’s rules would have required them to spy on users. That left India as the largest potential growth market. Silicon Valley companies were already committed to doing business in India before the government began to tighten its regulations, and today say they have little choice but to obey if they want to remain there.
The Post spoke to Rajeev Chandrasekhar, the deputy technology minister in the BJP government who oversees many of the new regulations. He argued: “The shift was really simple: We’ve defined the laws, defined the rules, and we have said there is zero tolerance to any noncompliance with the Indian law…”

“You don’t like the law? Don’t operate in India,” Chandrasekhar added. “There is very little wiggle room.”

64

Will ‘News Influencers’ Replace Traditional Media?

The Washington Post looks at the “millions of independent creators reshaping how people get their news, especially the youngest viewers.” News consumption hit a tipping point around the globe during the early days of the coronavirus pandemic, with more people turning to social media platforms such as TikTok, YouTube and Instagram than to websites maintained by traditional news outlets, according to the latest Digital News Report by the Reuters Institute for the Study of Journalism. One in 5 adults under 24 use TikTok as a source for news, the report said, up five percentage points from last year. According to Britain’s Office of Communications, young adults in the United Kingdom now spend more time watching TikTok than broadcast television. This shift has been driven in part by a desire for “more accessible, informal, and entertaining news formats, often delivered by influencers rather than journalists,” the Reuters Institute report says, adding that consumers are looking for news that “feels more relevant….”

While a few national publications such as the New York Times and The Washington Post have seen their digital audiences grow, allowing them to reach hundreds of thousands more readers than they did a decade ago, the economics of journalism have shifted. Well-known news outlets have seen a decline in the amount of traffic flowing to them from social media sites, and some of the money that advertisers previously might have spent with them is now flowing to creators. Even some outlets that began life on the internet have struggled, with BuzzFeed News shuttering in April, Vice entering into bankruptcy and Gawker shutting down for a second time in February. The trend is likely to continue. “There are no reasonable grounds for expecting that those born in the 2000s will suddenly come to prefer old-fashioned websites, let alone broadcast and print, simply because they grow older,” Reuters Institute Director Rasmus Kleis Nielsen said in the report, which is based on an online survey of roughly 94,000 adults in 46 national markets, including the United States…

While many online news creators are, like Al-Khatahtbeh, trained journalists collecting new information, others are aggregators and partisan commentators sometimes masquerading as journalists. The transformation has made the public sphere much more “chaotic and contradictory,” said Jay Rosen, an associate professor of journalism at New York University and author of the PressThink blog, adding that it has never been easier to be both informed and misinformed about world events. “The internet makes possible much more content, and reaching all kinds of people,” Rosen said. “But it also makes disinformation spread.”
The article notes that “some content creators don’t follow the same ethical guidelines that are guideposts in more traditional newsrooms, especially creators who seek to build audiences based on outrage.”

The article also points out that “The ramifications for society are still coming into focus.”

78