Teen Dies After Intense Bond with Character.AI Chatbot

A Florida teenager who formed a deep emotional bond with an AI chatbot took his own life after months of intense daily interactions on Character.AI, a leading AI companion platform. Sewell Setzer III, 14, exchanged his final messages with “Dany,” an AI character based on a Game of Thrones figure, before dying by suicide on February 28. His mother, The New York Times reports, plans to file a lawsuit against Character.AI, alleging the platform’s “dangerous and untested” technology led to his death.

Character.AI, valued at $1 billion and claiming 20 million users, said in response that it would implement new safety features for minors, including time limits and expanded trigger warnings for self-harm discussions. The company’s head of trust and safety, Jerry Ruoti, said they “take user safety very seriously.”


Polish Radio Station Replaces Journalists With AI ‘Presenters’

OFF Radio Krakow sparked controversy by replacing its journalists with AI-generated presenters in an experiment to attract younger audiences. CNN Business reports:
Weeks after letting its journalists go, OFF Radio Krakow relaunched this week, with what it said was “the first experiment in Poland in which journalists … are virtual characters created by AI.” The station in the southern city of Krakow said its three avatars are designed to reach younger listeners by speaking about cultural, art and social issues including the concerns of LGBTQ+ people. “Is artificial intelligence more of an opportunity or a threat to media, radio and journalism? We will seek answers to this question,” the station head, Marcin Pulit, wrote in a statement.


Generative AI Doesn’t Have a Coherent Understanding of the World, MIT Researchers Find

Despite their impressive output, generative AI models don’t have a coherent understanding of the world, a recent MIT study suggests. While the best-performing large language models have surprising capabilities that make it seem as if they are implicitly learning general truths about the world, that isn’t necessarily the case. The paper showed that large language models and game-playing AIs do form implicit models of the world, but those models are flawed and incomplete.

In one experiment, the researchers found that a popular type of generative AI model provided accurate turn-by-turn driving directions in New York City without having formed an accurate internal map of the city. The model navigated effectively under normal conditions, but when the researchers closed some streets and added detours, its performance plummeted. Digging deeper, they found that the maps of New York the model implicitly generated contained many nonexistent streets curving between the grid and connecting faraway intersections.
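The paper’s probing methods are more involved, but the core idea can be sketched in a few lines: collect the model’s claimed routes, rebuild the street graph those routes imply, and count the edges that don’t exist in the real map. In this sketch (ours, not the MIT authors’ code), query_model_route is a hypothetical stand-in for the model under test and a grid graph stands in for Manhattan’s streets.

```python
# A minimal sketch of probing a model's implicit street map (our
# illustration, not the MIT authors' code).
import networkx as nx

def query_model_route(start, goal):
    """Hypothetical stand-in: ask the model under test for a route,
    returned as a list of intersections from start to goal."""
    raise NotImplementedError  # replace with a real model call

def implied_graph(routes):
    """Rebuild the street graph the model's routes imply."""
    g = nx.Graph()
    for route in routes:
        for a, b in zip(route, route[1:]):
            g.add_edge(a, b)  # the model claims you can drive from a to b
    return g

def hallucinated_edges(implied, real):
    """Edges the model relies on that do not exist in the real city."""
    return [e for e in implied.edges if not real.has_edge(*e)]

# Usage sketch:
#   real_map = nx.grid_2d_graph(10, 10)  # idealized street grid
#   routes = [query_model_route(s, g) for s, g in sampled_pairs]
#   print(hallucinated_edges(implied_graph(routes), real_map))
```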


Power Shortages Will Restrict 40% of AI Data Centers By 2027

Research firm Gartner said on Tuesday that 40% of AI data centers will face operational constraints due to power shortages by 2027, as AI drives unprecedented energy consumption.

Data center power requirements for AI-optimized servers are projected to reach 500 terawatt-hours annually by 2027, more than double 2023 levels, as companies rapidly expand facilities to handle large language model training and implementation.
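As a back-of-the-envelope check on those figures (our arithmetic, not Gartner’s): if 2027 demand is 500 TWh and that is “more than double” 2023 levels, then 2023 demand was below 250 TWh, implying compound annual growth of at least roughly 19%.

```python
# Implied growth rate from Gartner's figures (our arithmetic).
level_2027 = 500.0      # TWh, Gartner's 2027 projection
level_2023_max = 250.0  # TWh, upper bound implied by "more than double"
years = 2027 - 2023
cagr = (level_2027 / level_2023_max) ** (1 / years) - 1
print(f"implied annual growth >= {cagr:.1%}")  # >= 18.9%
```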

The surge in power demand will outpace utility providers’ ability to expand capacity, Gartner analyst Bob Johnson said, leading to higher electricity costs that will cascade through the AI industry. Some operators are already seeking direct agreements with power producers to secure guaranteed supply.


Are We Entering an AI Price-Fixing Dystopia?

“Algorithmic price-fixing appears to be spreading to more and more industries,” warns the Atlantic. “And existing laws may not be equipped to stop it.”

The article starts with RealPage’s rental-property software, pointing out that “a series of lawsuits says it’s something else: an AI-enabled price-fixing conspiracy” and that the lawsuits “argue that RealPage pressures landlords to comply with its pricing suggestions.” But the most important point is that RealPage isn’t the only company doing this:
Its main competitor, Yardi, is involved in a similar lawsuit. One of RealPage’s subsidiaries, a service called Rainmaker, faces multiple legal challenges for allegedly facilitating price-fixing in the hotel industry. (Yardi and Rainmaker deny wrongdoing.) Similar complaints have been brought against companies in industries as varied as health insurance, tire manufacturing, and meat processing. But winning these cases is proving difficult.

The article notes that “Agreeing to fix prices is punishable with up to 10 years in prison and a $100 million fine.” But it also notes concerns that algorithms could produce price-fixing-like behavior that’s “almost impossible to prosecute under existing antitrust laws. Price-fixing, in other words, has entered the algorithmic age, but the laws designed to prevent it have not kept up.”
Last week, San Francisco passed a first-of-its-kind ordinance banning “both the sale and use of software which combines non-public competitor data to set, recommend or advise on rents and occupancy levels.”

Whether other jurisdictions follow suit remains to be seen.

In the meantime, more and more companies are figuring out ways to use algorithms to set prices. If these really do enable de facto price-fixing, and manage to escape legal scrutiny, the result could be a kind of pricing dystopia in which competition to create better products and lower prices would be replaced by coordination to keep prices high and profits flowing. That would mean permanently higher costs for consumers — like an inflation nightmare that never ends.
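To see how rule design alone, with no explicit pact, can determine whether prices fall or stay high, consider a toy simulation (our illustration, not RealPage’s or any vendor’s actual system). Two pricing bots face each other in the same market; the only difference between the two runs is the rule each bot follows.

```python
# Toy duopoly pricing simulation (our illustration, not any vendor's
# actual algorithm). Classic undercutting competes prices down to cost;
# a "match the market high" rule keeps them elevated indefinitely.
COST, START = 50.0, 120.0

def simulate(rule, periods=100):
    """Run two bots against each other; rule(rival, mine) -> next price."""
    p_a = p_b = START
    for _ in range(periods):
        p_a, p_b = rule(p_b, p_a), rule(p_a, p_b)  # simultaneous updates
    return p_a

undercut = lambda rival, mine: max(COST, rival - 1.0)  # shave the rival's price
match_high = lambda rival, mine: max(rival, mine)      # never drop below the market high

print(f"undercutting settles at ${simulate(undercut):.2f}")     # $50.00 (cost)
print(f"match-the-high settles at ${simulate(match_high):.2f}") # $120.00
```

Real systems are far more sophisticated, but the point stands: collusion-like outcomes can emerge from rule design alone, which is exactly what makes them hard to prosecute under agreement-based antitrust law.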


Where Facebook’s AI Slop Comes From

“The post you are seeing now is of a poor man that is being used to generate revenue,” says an influencer in one video, speaking in Hindi and pointing with his pen to an image of a skeletal elderly man hunched over, being eaten by hundreds of bugs. “The Indian audience is very emotional. After seeing photos like this, they Like, Comment and share them. So you too should create a page like this, upload photos and make money through Performance bonus.”


Data Centers Are Consuming Electricity Supplies and Hurting the Environment

Data center construction “could delay California’s transition away from fossil fuels and raise electric bills for everyone else,” warns the Los Angeles Times — and also increase the risk of blackouts:
Even now, California is on the verge of not having enough power. An analysis of public data by the nonprofit GridClue ranks California 49th of the 50 states in resilience — or the ability to avoid blackouts by having more electricity available than homes and businesses need at peak hours… The state has already extended the lives of Pacific Gas & Electric Co.’s Diablo Canyon nuclear plant as well as some natural gas-fueled plants in an attempt to avoid blackouts on sweltering days when power use surges… “I’m just surprised that the state isn’t tracking this, with so much attention on power and water use here in California,” said Shaolei Ren, associate professor of electrical and computer engineering at UC Riverside. Ren and his colleagues calculated that the global use of AI could require as much fresh water in 2027 as that now used by four to six countries the size of Denmark.

Driving the data center construction is money. Today’s stock market rewards companies that say they are investing in AI. Electric utilities profit as power use rises. And local governments benefit from the property taxes paid by data centers.

The article notes a Goldman Sachs estimate that by 2030, data centers could consume up to 11% of all U.S. power demand — up from 3% now. And it shows how the sprawling build-out of data centers across America is impacting surrounding communities:

California’s biggest concentration of data centers — more than 50 near the Silicon Valley city of Santa Clara — is powered by a utility emitting “more greenhouse gas than the average California electric utility because 23% of its power for commercial customers comes from gas-fired plants. Another 35% is purchased on the open market where the electricity’s origin can’t be traced.” Consumer electric rates are rising “as the municipal utility spends heavily on transmission lines and other infrastructure,” while the data centers now consume 60% of the city’s electricity.

Energy officials in northern Virginia “have proposed a transmission line to shore up the grid that would depend on coal plants that had been expected to be shuttered.”

In 2022 an Oregon newspaper discovered Google data centers were consuming 29% of one city’s water supply.

“Earlier this year, Pacific Gas & Electric told investors that its customers have proposed more than two dozen data centers, requiring 3.5 gigawatts of power — the output of three new nuclear reactors.”


Google Funds Newsrooms, AI

Google has reached a groundbreaking deal with California lawmakers to contribute millions to local newsrooms, aiming to support journalism amid its decline as readers migrate online and advertising dollars evaporate. The agreement also includes a controversial provision for artificial intelligence funding. Politico reports:
California emulated a strategy that other countries like Canada have used to try and reverse the journalism industry’s decline as readership migrated online and advertising dollars evaporated. […] Under the deal, the details of which were first reported by POLITICO on Monday, Google and the state of California would jointly contribute a minimum of $125 million over five years to support local newsrooms through a nonprofit public charity housed at UC Berkeley’s journalism school. Google would contribute at least $55 million, and state officials would kick in at least $70 million. The search giant would also commit $50 million over five years to unspecified “existing journalism programs.”

The deal would also steer millions in tax-exempt private dollars toward an artificial intelligence initiative that people familiar with the negotiations described as an effort to cultivate tech industry buy-in. Funding for artificial intelligence was not included in the bill at the core of negotiations, authored by Assemblymember Buffy Wicks. The agreement has drawn criticism from a journalists’ union that had so far championed Wicks’ effort. Media Guild of the West President Matt Pearce in an email to union members Sunday evening said such a deal would entrench “Google’s monopoly power over our newsrooms.”
“This public-private partnership builds on our long history of working with journalism and the local news ecosystem in our home state, while developing a national center of excellence on AI policy,” said Kent Walker, chief legal officer for Alphabet, the parent company of Google.

Pearce wasn’t so chipper. In his emails to union members, he also called the plan a “total rout of the state’s attempts to check Google’s stranglehold over our newsrooms.”


AI To Go Nuclear? Data Center Deals Say It’s Inevitable

To build the massive data centers generative AI requires, major companies like Amazon and Microsoft “are going nuclear,” reports CIO magazine. AWS:
Earlier this year, AWS paid $650 million to purchase Talen Energy’s Cumulus Data Assets, a 960-megawatt nuclear-powered data center on site at Talen’s Susquehanna, Pennsylvania, nuclear plant, with additional data centers planned — pending approval by the Nuclear Regulatory Commission… In addition to its purchase of the Cumulus data center, AWS will have access to nuclear energy as part of a 10-year Power Purchase Agreement (PPA) from the Susquehanna site.

Microsoft:
Last year, Constellation signed a deal giving Microsoft the rights to receive up to 35% of its power from nuclear sources in addition to its existing solar and wind purchases from Constellation for Microsoft’s Boydton, Va., data center. Microsoft has also signed a nuclear carbon credits deal with Ontario Power Generation for its operations in Canada.

The broader industry:
Many of the deals under discussion are with existing nuclear power providers for hyperscalers [operators of large-scale data centers] to access energy or to employ small modular nuclear reactors (SMRs) with smaller carbon footprints that will be annexed to existing nuclear power plants. Nucor, Oklo, Rolls-Royce SMR, Westinghouse Electric, Moltex Energy, Terrestrial Energy, General Electric, Hitachi Nuclear Energy, and X-energy are among the roster of companies with SMRs under development to meet the growing needs of AI data centers…

One energy analyst does not expect nuclear SMRs to be operational until 2030, yet he and many others acknowledge that the need for sustainable, carbon-free sources of electricity beyond wind and solar is very pressing. “Today’s electric grids are struggling to keep up with demand, even as datacenter companies are planning huge new additions to their fleets to power generative AI applications. As a result, companies like Google, Amazon, and Microsoft are increasingly taking matters into their own hands and getting creative. They are now looking at on-site nuclear-based SMRs, and even fusion reactors,” says Peter Kelly-Detwiler, principal of Northbridge Energy Partners. “This global arms race for power arose pretty quickly, and it’s like nothing we have ever seen before.”


Taser Company Axon Is Selling AI That Turns Body Cam Audio Into Police Reports

Axon on Tuesday announced a new tool called Draft One that uses artificial intelligence built on OpenAI’s GPT-4 Turbo model to transcribe audio from body cameras and automatically turn it into a police report. Axon CEO Rick Smith told Forbes that police officers will then be able to review the document to ensure accuracy. From the report:
Axon claims one early tester of the tool, Fort Collins Colorado Police Department, has seen an 82% decrease in time spent writing reports. “If an officer spends half their day reporting, and we can cut that in half, we have an opportunity to potentially free up 25% of an officer’s time to be back out policing,” Smith said. These reports, though, are often used as evidence in criminal trials, and critics are concerned that relying on AI could put people at risk by depending on language models that are known to “hallucinate,” or make things up, as well as display racial bias, either blatantly or unconsciously.

“It’s kind of a nightmare,” said Dave Maass, surveillance technologies investigations director at the Electronic Frontier Foundation. “Police, who aren’t specialists in AI, and aren’t going to be specialists in recognizing the problems with AI, are going to use these systems to generate language that could affect millions of people in their involvement with the criminal justice system. What could go wrong?” Smith acknowledged there are dangers. “When people talk about bias in AI, it really is: Is this going to exacerbate racism by taking training data that’s going to treat people differently?” he told Forbes. “That was the main risk.”

Smith said Axon is recommending police don’t use the AI to write reports for incidents as serious as a police shooting, where vital information could be missed. “An officer-involved shooting is likely a scenario where it would not be used, and I’d probably advise people against it, just because there’s so much complexity, the stakes are so high.” He said some early customers are only using Draft One for misdemeanors, though others are writing up “more significant incidents,” including use-of-force cases. Axon, however, won’t have control over how individual police departments use the tools.
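Axon hasn’t published Draft One’s internals, but the transcribe-then-draft pipeline described above can be sketched against OpenAI’s public API. This is a minimal sketch only: the file name and prompt are illustrative, and Axon’s production system doubtless adds redaction, evidence handling, and review workflows.

```python
# Minimal sketch of a transcribe-then-draft pipeline (our illustration,
# not Axon's actual Draft One implementation).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# 1. Transcribe the body-camera audio.
with open("bodycam_audio.wav", "rb") as audio:  # illustrative file name
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)

# 2. Draft a report from the transcript, flagging gaps so a human
#    officer still has to review and complete it.
response = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[
        {"role": "system",
         "content": "Draft a factual police incident report from this "
                    "transcript. Mark anything uncertain as [NEEDS OFFICER REVIEW]."},
        {"role": "user", "content": transcript.text},
    ],
)
print(response.choices[0].message.content)
```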


AI Will Suck Up 500% More Power in UK in 10 Years, Grid CEO Says

Electricity demand from UK data centers will jump sixfold over the next 10 years as a boom in AI requires increased computing power, according to the head of National Grid.

That will ramp up pressure on the country’s electricity network, which must move vast quantities of renewable energy from as far away as Scottish wind farms to data centers around London. And it’s a grid already under strain from the accelerating electrification of home heating, transportation and industries.

“Future growth in foundational technologies like artificial intelligence and quantum computing will mean larger-scale, energy-intensive computing infrastructure,” National Grid Chief Executive Officer John Pettigrew said Tuesday at a conference in Oxford. It’s an outlook replicated in many other countries, which are grappling with how to fund the massive spending required to expand capacity. Global electricity demand from data centers, AI and cryptocurrencies may more than double over the next three years, according to International Energy Agency forecasts.
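For scale, both growth figures work out to roughly 20–26% compound annual growth (our arithmetic, not National Grid’s or the IEA’s):

```python
# Implied compound annual growth rates (our arithmetic).
uk_sixfold_10yr = 6 ** (1 / 10) - 1    # National Grid: sixfold in 10 years
global_double_3yr = 2 ** (1 / 3) - 1   # IEA: more than double in 3 years
print(f"UK data centers: {uk_sixfold_10yr:.1%} per year")    # ~19.6%
print(f"Global demand:   {global_double_3yr:.1%} per year")  # ~26.0%
```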


How AI is Taking Water From the Desert

Microsoft built two data centers west of Phoenix, with plans for seven more (serving, among other companies, OpenAI). “Microsoft has been adding data centers at a stupendous rate, spending more than $10 billion on cloud-computing capacity in every quarter of late,” writes the Atlantic. “One semiconductor analyst called this ‘the largest infrastructure buildout that humanity has ever seen.’”

But is this part of a concerning trend?
Microsoft plans to absorb its excess heat with a steady flow of air and, as needed, evaporated drinking water. Use of the latter is projected to reach more than 50 million gallons every year. That might be a burden in the best of times. As of 2023, it seemed absurd. Phoenix had just endured its hottest summer ever, with 55 days of temperatures above 110 degrees. The weather strained electrical grids and compounded the effects of the worst drought the region has faced in more than a millennium. The Colorado River, which provides drinking water and hydropower throughout the region, has been dwindling. Farmers have already had to fallow fields, and a community on the eastern outskirts of Phoenix went without tap water for most of the year… [T]here were dozens of other facilities I could visit in the area, including those run by Apple, Amazon, Meta, and, soon, Google. Not too far from California, and with plenty of cheap land, Greater Phoenix is among the fastest-growing hubs in the U.S. for data centers….

Microsoft, the biggest tech firm on the planet, has made ambitious plans to tackle climate change. In 2020, it pledged to be carbon-negative (removing more carbon than it emits each year) and water-positive (replenishing more clean water than it consumes) by the end of the decade. But the company also made an all-encompassing commitment to OpenAI, the most important maker of large-scale AI models. In so doing, it helped kick off a global race to build and deploy one of the world’s most resource-intensive digital technologies. Microsoft operates more than 300 data centers around the world, and in 2021 declared itself “on pace to build between 50 and 100 new datacenters each year for the foreseeable future….”

Researchers at UC Riverside estimated last year… that global AI demand could cause data centers to suck up 1.1 trillion to 1.7 trillion gallons of freshwater by 2027. A separate study from a university in the Netherlands, this one peer-reviewed, found that AI servers’ electricity demand could grow, over the same period, to be on the order of 100 terawatt hours per year, about as much as the entire annual consumption of Argentina or Sweden… [T]ensions over data centers’ water use are cropping up not just in Arizona but also in Oregon, Uruguay, and England, among other places in the world.
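To put the Dutch study’s electricity figure in more familiar units (our conversion, not the study’s): 100 TWh per year is an average continuous draw of about 11 GW, roughly a dozen large power plants running around the clock.

```python
# Converting annual energy to average power (our arithmetic).
twh_per_year = 100
hours_per_year = 8760
avg_gw = twh_per_year * 1000 / hours_per_year  # TWh -> GWh, then per hour
print(f"{avg_gw:.1f} GW continuous")  # ~11.4 GW
```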

The article points out that Microsoft “is transitioning some data centers, including those in Arizona, to designs that use less or no water, cooling themselves instead with giant fans.” And an analysis (commissioned by Microsoft) of the impact of one building said it would use about 56 million gallons of drinking water each year, equivalent to the amount used by 670 families, according to the article. “In other words, a campus of servers pumping out ChatGPT replies from the Arizona desert is not about to make anyone go thirsty.”
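That closing comparison checks out arithmetically (our calculation): 56 million gallons across 670 families is roughly 230 gallons per household per day, in the same range as the roughly 300 gallons a day the EPA attributes to an average American family.

```python
# Sanity-checking the 670-family equivalence (our arithmetic).
gallons_per_year = 56_000_000  # one building, per the Microsoft-commissioned analysis
families = 670                 # stated equivalent
per_family_daily = gallons_per_year / families / 365
print(f"{per_family_daily:.0f} gallons per family per day")  # ~229
```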


Public Trust in AI Is Sinking Across the Board

Globally, trust in AI companies has dropped to 53%, down from 61% five years ago. In the U.S., trust has dropped 15 percentage points (from 50% to 35%) over the same period. Trust in AI is low across political lines: Democrats’ trust in AI companies stands at 38%, independents at 25%, and Republicans at 24%. Tech is losing its lead as the most trusted sector. Eight years ago, technology was the leading industry in trust in 90% of the countries Edelman studies. Today, it is the most trusted in only half of them.

People in developing countries are more likely to embrace AI than those in developed ones. Respondents in France, Canada, Ireland, UK, U.S., Germany, Australia, the Netherlands and Sweden reject the growing use of AI by a three-to-one margin, Edelman said. By contrast, acceptance outpaces resistance by a wide margin in developing markets such as Saudi Arabia, India, China, Kenya, Nigeria and Thailand.


OpenAI’s Chatbot Store is Filling Up With Spam

When OpenAI CEO Sam Altman announced GPTs, custom chatbots powered by OpenAI’s generative AI models, onstage at the company’s first-ever developer conference in November, he described them as a way to “accomplish all sorts of tasks” — from programming to learning about esoteric scientific subjects to getting workout pointers. “Because [GPTs] combine instructions, expanded knowledge and actions, they can be more helpful to you,” Altman said. “You can build a GPT … for almost anything.” He wasn’t kidding about the anything part.

TechCrunch found that the GPT Store, OpenAI’s official marketplace for GPTs, is flooded with bizarre, potentially copyright-infringing GPTs that imply a light touch where it concerns OpenAI’s moderation efforts. A cursory search pulls up GPTs that purport to generate art in the style of Disney and Marvel properties, serve as little more than funnels to third-party paid services, and advertise themselves as being able to bypass AI content detection tools such as Turnitin and Copyleaks.


Generative AI Already Taking White Collar Jobs and Wages in Online Freelancing World

In an ingenious study published this summer, US researchers showed that within a few months of the launch of ChatGPT, copywriters and graphic designers on major online freelancing platforms saw a significant drop in the number of jobs they got, and even steeper declines in earnings. This suggested not only that generative AI was taking their work, but also that it devalues the work they do still carry out.

Most strikingly, the study found that freelancers who previously had the highest earnings and completed the most jobs were no less likely to see their employment and earnings decline than other workers. If anything, they had worse outcomes. In other words, being more skilled was no shield against loss of work or earnings. But the online freelancing market covers a very particular form of white-collar work and of labour market. What about looking higher up the ranks of the knowledge worker class? For that, we can turn to a recent, fascinating Harvard Business School study, which monitored the impact of giving GPT-4, OpenAI’s latest and most advanced offering, to employees at Boston Consulting Group.


Microsoft Needs So Much Power to Train AI That It’s Considering Small Nuclear Reactors

Training large language models is an incredibly power-intensive process that has an immense carbon footprint. Keeping data centers running requires a ludicrous amount of electricity that could generate substantial amounts of greenhouse emissions — depending, of course, on the energy’s source. Now, the Verge reports, Microsoft is betting so big on AI that it’s pushing forward with a plan to power its data centers using nuclear reactors. Yes, you read that right; a recent job listing suggests the company is planning to grow its energy infrastructure with the use of small modular reactors (SMRs)…

But before Microsoft can start relying on nuclear power to train its AIs, it’ll have plenty of other hurdles to overcome. For one, it’ll have to source a working SMR design. Then, it’ll have to figure out how to get its hands on a highly enriched uranium fuel that these small reactors typically require, as The Verge points out. Finally, it’ll need to figure out a way to store all of that nuclear waste long term…

Other than nuclear fission, Microsoft is also investing in nuclear fusion, a far more ambitious endeavor, given the many decades of research that have yet to lead to a practical power system. Nevertheless, earlier this year the company signed a power purchase agreement with Helion, a fusion startup backed by OpenAI CEO Sam Altman, with the hopes of buying electricity from it as soon as 2028.


Internet-Connected Cars Fail Privacy and Security Tests

Mozilla found brands including BMW, Ford, Toyota, Tesla, and Subaru collect data about drivers including race, facial expressions, weight, health information, and where you drive. Some of the cars tested collected data you wouldn’t expect your car to know about, including details about sexual activity, race, and immigration status, according to Mozilla. […] The worst offender was Nissan, Mozilla said. The carmaker’s privacy policy suggests the manufacturer collects information including sexual activity, health diagnosis data, and genetic data, though there are no details about how exactly that data is gathered. Nissan reserves the right to share and sell “preferences, characteristics, psychological trends, predispositions, behavior, attitudes, intelligence, abilities, and aptitudes” to data brokers, law enforcement, and other third parties.

Other brands didn’t fare much better. Volkswagen, for example, collects your driving behaviors such as your seatbelt and braking habits and pairs that with details such as age and gender for targeted advertising. Kia’s privacy policy reserves the right to monitor your “sex life,” and Mercedes-Benz ships cars with TikTok pre-installed on the infotainment system, an app that has its own thicket of privacy problems. The privacy and security problems extend beyond the nature of the data car companies siphon off about you. Mozilla said it was unable to determine whether the brands encrypt any of the data they collect, and only Mercedes-Benz responded to the organization’s questions.

Mozilla also found that many car brands engage in “privacy washing,” or presenting consumers with information that suggests they don’t have to worry about privacy issues when the exact opposite is true. Many leading manufacturers are signatories to the Alliance for Automotive Innovation’s “Consumer Privacy Protection Principles (PDF).” According to Mozilla, these are a non-binding set of vague promises organized by the car manufacturers themselves. Questions around consent are essentially a joke as well. Subaru, for example, says that by being a passenger in the car, you are considered a “user” who has given the company consent to harvest information about you. Mozilla said a number of car brands say it’s the driver’s responsibility to let passengers know about their car’s privacy policies — as if the privacy policies are comprehensible to drivers in the first place. Toyota, for example, has a constellation of 12 different privacy policies for your reading pleasure.


‘Extremely Remorseful’ Lawyers Confronted by Judge Over ‘Legal Gibberish’ Citations from ChatGPT

Two apologetic lawyers responding to an angry judge in Manhattan federal court blamed ChatGPT Thursday for tricking them into including fictitious legal research in a court filing… [Attorney Steven A. Schwartz] told U.S. District Judge P. Kevin Castel he was “operating under a misconception … that this website was obtaining these cases from some source I did not have access to.” He said he “failed miserably” at doing follow-up research to ensure the citations were correct.

“I did not comprehend that ChatGPT could fabricate cases,” Schwartz said…

The judge confronted Schwartz with one legal case invented by the computer program. It was initially described as a wrongful death case brought by a woman against an airline only to morph into a legal claim about a man who missed a flight to New York and was forced to incur additional expenses. “Can we agree that’s legal gibberish?” Castel asked.

Schwartz said he erroneously thought that the confusing presentation resulted from excerpts being drawn from different parts of the case. When Castel finished his questioning, he asked Schwartz if he had anything else to say. “I would like to sincerely apologize,” Schwartz said. He added that he had suffered personally and professionally as a result of the blunder and felt “embarrassed, humiliated and extremely remorseful.”

He said that he and the firm where he worked — Levidow, Levidow & Oberman — had put safeguards in place to ensure nothing similar happens again.

An attorney for the law firm also told the judge that lawyers have historically had a hard time with technology, particularly new technology. “And it’s not getting easier.”


The AI Firm That Conducted ‘State Surveillance’ of Social Media Posts

An industrial estate in Yorkshire is an unlikely location for a state surveillance operation. But these blank-looking warehouses are home to an artificial intelligence (AI) company used by the Government to monitor people’s posts on social media.

Logically has been paid more than £1.2 million of taxpayers’ money to analyse what the Government terms “disinformation” – false information deliberately seeded online – and “misinformation”, which is false information that has been spread inadvertently.

It does this by “ingesting” material from hundreds of thousands of media sources and “all public posts on major social media platforms,” using AI to identify those that are potentially problematic.

It also has a “partnership” with Facebook, which appears to grant Logically’s fact-checkers huge influence over the content other people see.

A joint press release issued in July 2021 suggests that Facebook will limit the reach of certain posts if Logically says they are untrue.

“When Logically rates a piece of content as false, Facebook will significantly reduce its distribution so that fewer people see it, apply a warning label to let people know that the content has been rated false, and notify people who try to share it,” states the press release.


Microsoft Had to Suspend Its AI Chatbot After It Veered Into White Supremacy

Less than a day after Microsoft launched its new artificial intelligence bot Tay, she has already learned the most important lesson of the internet: Never tweet.

Microsoft reportedly had to suspend Tay from tweeting after she tweeted a series of racist statements, including “Hitler was right I hate the jews.” The company had launched the AI on Wednesday; it was designed to communicate with “18 to 24 year olds in the U.S.” and to “experiment with and conduct research on conversational understanding.” It appears some of her racist replies were simply regurgitating the statements trolls tweeted at her.
