Resources

Ring Cameras Are Being Used To Control and Surveil Overworked Delivery Workers

Networked doorbell surveillance cameras like Amazon’s Ring are everywhere, and have changed the nature of delivery work by letting customers take on the role of bosses to monitor, control, and discipline workers, according to a recent report (PDF) by the Data & Society tech research institute. “The growing popularity of Ring and other networked doorbell cameras has normalized home and neighborhood surveillance in the name of safety and security,” Data & Society’s Labor Futures program director Aiha Nguyen and research analyst Eve Zelickson write. “But for delivery drivers, this has meant their work is increasingly surveilled by the doorbell cameras and supervised by customers. The result is a collision between the American ideas of private property and the business imperatives of doing a job.”

Drawing on interviews with surveillance camera users and delivery drivers, the researchers examine the major developments that have brought this to a head. The first is the widespread adoption of doorbell surveillance cameras like Ring. Just as important, however, is the rise of delivery work and its transformation into gig labor. […] As the report lays out, Ring cameras allow customers to surveil delivery workers and discipline their labor by, for example, sharing shaming footage online. This dovetails with the “gigification” of Amazon’s delivery workers in two ways: labor dynamics and customer behavior.

“Gig workers, including Flex drivers, are sold on the promise of flexibility, independence and freedom. Amazon tells Flex drivers that they have complete control over their schedule, and can work on their terms and in their space,” Nguyen and Zelickson write. “Through interviews with Flex drivers, it became apparent that these marketed perks have hidden costs: drivers often have to compete for shifts, spend hours trying to get reimbursed for lost wages, pay for wear and tear on their vehicle, and have no control over where they work.” That competition between workers manifests in other ways too, namely in acquiescing to customer demands when delivering purchases to their homes. Even without cameras, customers have made onerous demands of Flex drivers, who are already pressed to complete unrealistic and dangerous routes while meeting unsafe and demanding productivity quotas. The introduction of surveillance cameras at the delivery destination adds yet another layer of monitoring to this gigified work. […] The report’s conclusion is clear: Amazon has deputized its customers and made them partners in a scheme that encourages antagonistic social relations, undermines labor rights, and provides cover for a march towards increasingly ambitious monopolistic exploits.

Company That Makes Rent-Setting Software For Apartments Accused of Collusion, Lawsuit Says

Renters filed a lawsuit (PDF) this week alleging that a company that makes price-setting software for apartments and nine of the nation’s biggest property managers formed a cartel to artificially inflate rents in violation of federal law. The lawsuit was filed days after ProPublica published an investigation raising concerns that the software, sold by Texas-based RealPage, is potentially pushing rent prices above competitive levels, facilitating price fixing or both. […] RealPage’s software uses an algorithm to churn through a trove of data each night to suggest daily prices for available rental units. The software uses not only information about the apartment being priced and the property where it is located, but also private data on what nearby competitors are charging in rents. The software considers actual rents paid to those rivals — not just what they are advertising, the company told ProPublica.

ProPublica’s investigation found that the software’s design and reach have raised questions among experts about whether it is helping the country’s biggest landlords indirectly coordinate pricing — potentially in violation of federal law. In one neighborhood in downtown Seattle, ProPublica found, 70% of more than 9,000 apartments were controlled by just 10 property managers, who all used RealPage pricing software in at least some of their buildings. RealPage told ProPublica that the company “uses aggregated market data from a variety of sources in a legally compliant manner.” The company also said that landlords who use employees to manually set prices “typically” conduct phone surveys to check competitors’ rents, which the company says could result in anti-competitive behavior. “RealPage’s revenue management solutions prioritize a property’s own internal supply/demand dynamics over external factors such as competitors’ rents,” a company statement said, “and therefore help eliminate the risk of collusion that could occur with manual pricing.”

The lawsuit said that RealPage’s software helps stagger lease renewals to artificially smooth out natural imbalances in supply and demand, which discourages landlords from undercutting pricing achieved by the cartel. Property managers “thus held vacant rental units unoccupied for periods of time (rejecting the historical adage to keep the ‘heads in the beds’) to ensure that, collectively, there is not one period in which the market faces an oversupply of residential real estate properties for lease, keeping prices higher,” it said. Such staggering helped the group avoid “a race to the bottom” on rents, the lawsuit said. RealPage brags that clients — who agree to provide RealPage real-time access to sensitive and nonpublic data — experience “rental rate improvements, year over year, between 5% and 12% in every market,” the lawsuit said. RealPage encourages property companies to have daily calls with a RealPage pricing adviser and discourages deviating from the rent price suggested by the software, the lawsuit said.
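To make the mechanics described above concrete, here is a minimal, purely illustrative sketch of a nightly price-suggestion routine that blends a property’s own supply-and-demand signal with nonpublic competitor rents. Every name, weight, and threshold in it is hypothetical; RealPage’s actual algorithm is proprietary and not spelled out in the lawsuit or the ProPublica report.

    # Illustrative sketch only: a toy nightly price-suggestion routine of the
    # kind described above. All names, weights, and data structures are
    # hypothetical; RealPage's actual algorithm is proprietary.
    from dataclasses import dataclass
    from statistics import median

    @dataclass
    class Unit:
        current_rent: float    # what the unit rents for today
        occupancy_rate: float  # building occupancy, 0.0-1.0

    def suggest_rent(unit: Unit, competitor_rents: list[float],
                     competitor_weight: float = 0.5) -> float:
        """Blend a property's own supply/demand signal with nonpublic
        competitor rents to produce a suggested daily price."""
        # Internal signal: nudge the price up when the building is nearly
        # full, down when vacancies pile up.
        demand_adjustment = 1.0 + (unit.occupancy_rate - 0.95) * 0.5
        internal_price = unit.current_rent * demand_adjustment

        # External signal: the median of actual rents paid at nearby rivals,
        # the kind of nonpublic data the lawsuit says clients share.
        market_price = median(competitor_rents) if competitor_rents else internal_price

        return round((1 - competitor_weight) * internal_price
                     + competitor_weight * market_price, 2)

    # Example: a 97%-occupied building whose rivals charge more.
    print(suggest_rent(Unit(current_rent=1800, occupancy_rate=0.97),
                       competitor_rents=[1850, 1900, 1950]))

The contested ingredient is the second input: the plaintiffs allege that pooling rivals’ actual rents through a single vendor, and deferring to the prices it suggests, amounts to coordination that keeps rents above competitive levels.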

Has Online Disinformation Splintered and Become More Intractable?

Not long ago, the fight against disinformation focused on the major social media platforms, like Facebook and Twitter. When pressed, they often removed troubling content, including misinformation and intentional disinformation about the Covid-19 pandemic. Today, however, there are dozens of new platforms, including some that pride themselves on not moderating — censoring, as they put it — untrue statements in the name of free speech….

The purveyors of disinformation have also become increasingly sophisticated at sidestepping the major platforms’ rules, while the use of video to spread false claims on YouTube, TikTok and Instagram has made them harder for automated systems to track than text…. A report last month by NewsGuard, an organization that tracks the problem online, showed that nearly 20 percent of videos presented as search results on TikTok contained false or misleading information on topics such as school shootings and Russia’s war in Ukraine. “People who do this know how to exploit the loopholes,” said Katie Harbath, a former director of public policy at Facebook who now leads Anchor Change, a strategic consultancy.

With the [U.S.] midterm elections only weeks away, the major platforms have all pledged to block, label or marginalize anything that violates company policies, including disinformation, hate speech or calls to violence. Still, the cottage industry of experts dedicated to countering disinformation — think tanks, universities and nongovernment organizations — say the industry is not doing enough. The Stern Center for Business and Human Rights at New York University warned last month, for example, that the major platforms continued to amplify “election denialism” in ways that undermined trust in the democratic system.

ByteDance Planned to Use TikTok to Monitor Locations of Specific American Citizens

A China-based team at TikTok’s parent company, ByteDance, planned to use the TikTok app to monitor the personal location of some specific American citizens, according to materials reviewed by Forbes.

The team behind the monitoring project — ByteDance’s Internal Audit and Risk Control department — is led by Beijing-based executive Song Ye, who reports to ByteDance cofounder and CEO Rubo Liang. The team primarily conducts investigations into potential misconduct by current and former ByteDance employees. But in at least two cases, the Internal Audit team also planned to collect TikTok data about the location of a U.S. citizen who had never had an employment relationship with the company, the materials show.

It is unclear from the materials whether data about these Americans was actually collected; however, the plan was for a Beijing-based ByteDance team to obtain location data from U.S. users’ devices.

Google’s Eric Schmidt Helped Write AI Laws Without Disclosing Investments In AI Startups

About four years ago, former Google CEO Eric Schmidt was appointed to the National Security Commission on Artificial Intelligence by the chairman of the House Armed Services Committee. It was a powerful perch. Congress tasked the new group with a broad mandate: to advise the U.S. government on how to advance the development of artificial intelligence, machine learning and other technologies to enhance the national security of the United States. Specifically, Congress directed the new body to advise on how to enhance American competitiveness in AI against its adversaries, build the AI workforce of the future, and develop data and ethical procedures.

In short, the commission, which Schmidt soon took charge of as chairman, was tasked with coming up with recommendations for almost every aspect of a vital and emerging industry. The panel did far more under his leadership. It wrote proposed legislation that later became law and steered billions of dollars of taxpayer funds to an industry he helped build — and that he was actively investing in while running the group. If you’re going to lead a commission that is steering the direction of government AI policy and making recommendations for how to promote this sector and scientific exploration in this area, you really shouldn’t also be dipping your hand in the pot and helping yourself to AI investments. His credentials, however, were impeccable given his deep experience in Silicon Valley, his years advising the Defense Department, and a vast personal fortune estimated at about $20 billion.

Five months after his appointment, Schmidt made a little-noticed private investment in an initial seed round of financing for a startup company called Beacon, which uses AI in the company’s supply chain products for shippers who manage freight logistics, according to CNBC’s review of investment information in the Crunchbase database. There is no indication that Schmidt broke any ethics rules or did anything unlawful while chairing the commission. The commission was, by design, an outside advisory group of industry participants, and its other members included well-known tech executives such as Oracle CEO Safra Catz, Amazon Web Services CEO Andy Jassy and Microsoft Chief Scientific Officer Dr. Eric Horvitz, among others. Schmidt’s investment was just the first of a handful of direct investments he would make in AI startup companies during his tenure as chairman of the AI commission.

“Venture capital firms financed, in part, by Schmidt and his private family foundation also made dozens of additional investments in AI companies during Schmidt’s tenure, giving Schmidt an economic stake in the industry even as he developed new regulations and encouraged taxpayer financing for it,” adds CNBC. “Altogether, Schmidt and entities connected to him made more than 50 investments in AI companies while he was chairman of the federal commission on AI. Information on his investments isn’t publicly available.”

“All that activity meant that, at the same time Schmidt was wielding enormous influence over the future of federal AI policy, he was also potentially positioning himself to profit personally from the most promising young AI companies.” Citing people close to Schmidt, the report says his investments were disclosed in a private filing to the U.S. government at the time and the public and news media had no access to that document.

A spokesperson for Schmidt told CNBC that he followed all rules and procedures in his tenure on the commission. “Eric has given full compliance on everything,” the spokesperson said.

Behind TikTok’s Boom: A Legion of Traumatized, $10-A-Day Content Moderators

“Horrific” videos “are part and parcel of everyday work for TikTok moderators in Colombia,” according to a new investigation.

They told the Bureau of Investigative Journalism about widespread occupational trauma and inadequate psychological support, demanding or impossible performance targets, punitive salary deductions and extensive surveillance. Their attempts to unionize to secure better conditions have been opposed repeatedly. TikTok’s rapid growth in Latin America — it has an estimated 100 million users in the region — has led to the hiring of hundreds of moderators in Colombia to fight a never-ending battle against disturbing content. They work six days a week on day and night shifts, with some paid as little as 1.2 million pesos ($254) a month, compared to around $2,900 for content moderators based in the U.S….

The nine moderators could only speak anonymously for fear they might lose their jobs, or undermine their future employment prospects…. The TikTok moderation system described by these moderators is built on exacting performance targets. If workers do not get through a huge number of videos, or return late from a break, they can lose out on a monthly bonus worth up to a quarter of their salary. It is easy to lose out on the much-needed extra cash. Álvaro, a current TikTok moderator, has a target of 900 videos per day, with about 15 seconds to view each video. He works from 6am to 3pm, with two hours of break time, and his base salary is 1.2 million pesos ($254) a month, only slightly higher than Colombia’s minimum salary…. He once received a disciplinary notice known internally as an “action form” for only managing to watch 700 videos in a shift, which was considered “work avoidance”. Once a worker has an action form, he says, they cannot receive a bonus that month….

Outsourcing moderation to countries in the global south like Colombia works for businesses because it is cheap, and workers are poorly protected…. For now… TikTok’s low-paid moderators will keep working to their grueling targets, sifting through some of the internet’s most nightmarish content.

AI-Assisted Guns Deployed By Israel To Track Targets In the West Bank

A Euronews video features footage of Sharone Aloni, Research and Development VP of Smart Shooter, demonstrating one of the company’s devices with an automatic Fire Control System. “Inside here, you have the computer running all the artificial intelligence, computer vision algorithms, which makes this what we call a true fire control system,” Aloni says. “It’s not only just relying on static information. It actually considers the human, the soldier, which is not stable. He’s under pressure. He’s tired. Sometimes he didn’t get enough training. And also, the target is usually dynamic, and it’s moving all the time.”

The company’s web site promises the systems “significantly increase weapon accuracy.” And according to Euronews, Israel’s army “says the tech protects soldiers, who can fire more accurately at a distance.” But Omar Shakir, Human Rights Watch’s director for Israel and Palestine, counters that when he hears claims of a reduction in risks, “that’s often a one-sided kind of risk. It might minimize the risk for the occupying force or the army deploying it, but often it invariably increases the risk to affected communities.” Sophisticated weapons systems “will lack elements of human control and agency that are often the difference between life and death.” Euronews adds that “Palestinians and human rights experts say the weapons are dehumanizing, dangerous and unaccountable.”

Smart Shooter has a response to that, according to Euronews: the robotic guns are not fully automated, so a soldier must always pull the trigger, with the system only firing “after algorithms assess factors like wind speed, distance and velocity.” Michal Mor, Smart Shooter’s CEO and founder, also describes its utility in fighting a terrorist. “Usually the terrorist will be inside a civilian environment with many people that we do not want to hurt.

“We’re enabling the soldier to look through his fire control system, to make sure that the target that he wants to hit is the legitimate target. Once he locks on the target, the system will make sure that the round will be released when he presses the trigger, only on the legitimate target, and none of the bystanders can be hit by the weapon.”

‘Deepfakes’ of Celebrities Have Begun Appearing in Ads, With or Without Their Permission

Celebrity deepfakes are coming to advertising. Among the recent entries: Last year, Russian telecommunications company MegaFon released a commercial in which a simulacrum of Hollywood legend Bruce Willis helps defuse a bomb. Just last week, Elon Musk seemed to star in a marketing video from real-estate investment startup reAlpha Tech. And last month a promotional video for machine-learning firm Paperspace showed talking semblances of the actors Tom Cruise and Leonardo DiCaprio. None of these celebrities ever spent a moment filming these campaigns. In the cases of Messrs. Musk, Cruise and DiCaprio, they never even agreed to endorse the companies in question. All the videos of digital simulations were created with so-called deepfake technology, which uses computer-generated renditions to make the Hollywood and business notables say and do things they never actually said or did.

Some of the ads are broad parodies, and even in the best of cases the meshing of the digital with the analog might not fool an alert viewer. Even so, the growing adoption of deepfake software could eventually shape the industry in profound ways while creating new legal and ethical questions, experts said. Authorized deepfakes could allow marketers to feature huge stars in ads without requiring them to actually appear on-set or before cameras, bringing down costs and opening new creative possibilities. But unauthorized, they create a legal gray area: Celebrities could struggle to contain a proliferation of unauthorized digital reproductions of themselves and the manipulation of their brand and reputation, experts said.

NYPD Deploying Encryption To Block Public From Radio Scanner Broadcasts

While law enforcement has spent years fighting to make encryption less accessible for everyday people, police think they need a little more privacy. Critics worry a turn towards encryption by law enforcement could reduce transparency, hamstring the news media, and potentially jeopardize the safety of protesters looking to stay a step ahead.

According to amNY, the NYPD’s new plan would allow law enforcement officers discretion on whether or not to publicly disclose newsworthy incidents. That means the NYPD essentially would get to dictate the truth unchallenged in a number of potentially sensitive local stories. The report suggests police are floating the idea of letting members of the news media monitor certain radio transmissions through an NYPD-controlled mobile app. There’s a catch though. According to the report, the app would send radio information with a delay. Users may also have to pay a subscription fee to use the service, the paper said.

In an email to Gizmodo, the NYPD confirmed it is planning a “systems upgrade” in the coming years. “The NYPD is undergoing a systems upgrade that is underway and that will be complete after 2024,” a spokesperson for the Deputy Commissioner of Public Information said. “This infrastructure upgrade allows the NYPD to transmit in either an encrypted or non-encrypted format,” the NYPD said. “Some parts of the city have had the necessary equipment installed and the Department will begin testing the technology in these areas later this year. We are currently evaluating encryption best practices and will communicate new policies and procedures as we roll out this upgraded technology.” The spokesperson claimed the department intends to listen to and consider the needs of the news media during the transition process.

Meta’s New Text-to-Video AI Generator is Like DALL-E for Video

A team of machine learning engineers from Facebook’s parent company Meta has unveiled a new system called Make-A-Video. As the name suggests, this AI model allows users to type in a rough description of a scene, and it will generate a short video matching their text. The videos are clearly artificial, with blurred subjects and distorted animation, but still represent a significant development in the field of AI content generation.

“Generative AI research is pushing creative expression forward by giving people tools to quickly and easily create new content,” said Meta in a blog post announcing the work. “With just a few words or lines of text, Make-A-Video can bring imagination to life and create one-of-a-kind videos full of vivid colors and landscapes.” In a Facebook post, Meta CEO Mark Zuckerberg described the work as “amazing progress,” adding: “It’s much harder to generate video than photos because beyond correctly generating each pixel, the system also has to predict how they’ll change over time.”

Europe Braces For Mobile Network Blackouts

Russia’s decision to halt gas supplies via Europe’s key supply route in the wake of the Ukraine conflict has increased the chances of power shortages. In France, the situation is made worse by several nuclear power plants shutting down for maintenance. Telecoms industry officials say they fear a severe winter will put Europe’s telecoms infrastructure to the test, forcing companies and governments to try to mitigate the impact. Currently there are not enough back-up systems in many European countries to handle widespread power cuts, four telecoms executives said, raising the prospect of mobile phone outages.

European Union countries, including France, Sweden and Germany, are trying to ensure communications can continue even if power cuts end up exhausting back-up batteries installed on the thousands of cellular antennas spread across their territory. Europe has nearly half a million telecom towers and most of them have battery backups that last around 30 minutes to run the mobile antennas. […] Telecom gear makers Nokia and Ericsson are working with mobile operators to mitigate the impact of a power shortage. The European telecom operators must review their networks to reduce extra power usage and modernize their equipment by using more power efficient radio designs, the four telecom executives said. To save power, telecom companies are using software to optimize traffic flow, make towers “sleep” when not in use and switch off different spectrum bands. The telecom operators are also working with national governments to check if plans are in place to maintain critical services.
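As a rough illustration of the band-sleeping measure mentioned above, the sketch below shows a toy policy that powers down capacity-only spectrum bands at a lightly loaded site and restores them as traffic returns. The thresholds, band names, and interface are hypothetical, not any operator’s or vendor’s actual implementation.

    # Purely illustrative: a toy energy-saving policy of the kind described
    # above, which puts a site's capacity-only spectrum bands to sleep when
    # traffic is low and wakes them as load returns. Thresholds, band names,
    # and the interface are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Band:
        name: str
        capacity_only: bool  # True if another band still provides coverage
        active: bool = True

    def apply_sleep_policy(bands: list[Band], load: float,
                           sleep_below: float = 0.15,
                           wake_above: float = 0.40) -> list[Band]:
        """Switch capacity-only bands off at night-time loads, back on at
        peak. `load` is current traffic as a fraction of site capacity."""
        for band in bands:
            if not band.capacity_only:
                continue             # never touch the coverage layer
            if load < sleep_below:
                band.active = False  # put the capacity layer to sleep
            elif load > wake_above:
                band.active = True   # restore capacity for busy hours
        return bands

    site = [Band("700 MHz coverage", capacity_only=False),
            Band("1800 MHz capacity", capacity_only=True),
            Band("3.5 GHz capacity", capacity_only=True)]
    print([b.name for b in apply_sleep_policy(site, load=0.08) if b.active])

Keeping one low-band coverage layer always on is what lets policies like this cut overnight energy use without dropping service altogether.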

In Germany, Deutsche Telekom has 33,000 mobile radio sites (towers) and its mobile emergency power systems can only support a small number of them at the same time, a company spokesperson said. Deutsche Telekom will use mobile emergency power systems which mainly rely on diesel in the event of prolonged power failures, it said. France has about 62,000 mobile towers, and the industry will not be able to equip all antennas with new batteries, the FFT’s president Liza Bellulo said. Accustomed to uninterrupted power supply for decades, European countries usually do not have generators backing up power for longer durations.

FTC Should Probe Payroll Data Deals By Brokers Like Equifax

For decades, companies such as Equifax have acquired employee work histories and compensation data from employers to help lenders, landlords, hiring managers and other customers perform background checks of individuals. But these big databases are vulnerable to theft and error, and workers are sometimes surprised their records are included, according to privacy activists. Equifax said it follows all laws and welcomes additional voices in the industry. In a letter to the FTC, San Mateo, California-based startup Certree said that Equifax and Experian are providing financial incentives like a slice of their revenue to employers to gain exclusive access to payroll data. Equifax also has deals with payroll software vendors that help employers process paychecks. The letter describes the agreements as anticompetitive and potentially unlawful.

Beijing Bus Drivers Have Been Told To Wear Wristbands To Monitor Their Emotions

The move was initiated by the state-run Beijing Public Transport Holding Group, which says it is aimed at protecting public safety. But legal experts have raised privacy concerns and say the wristbands could cause bus drivers undue distress and potentially lead to discrimination. Some 1,800 wristbands were distributed to bus drivers on cross-province and highway routes on Wednesday, the official Beijing Daily reported. It is unclear how many drivers will be required to wear the devices. The report said they would be used to monitor the drivers’ vital signs and emotional state in real time to improve safety.

TikTok Tracks You Across the Web, Even If You Don’t Use the App

A Consumer Reports investigation finds that TikTok, one of the country’s most popular apps, is partnering with a growing number of other companies to hoover up data about people as they travel across the internet. That includes people who don’t have TikTok accounts. These companies embed tiny TikTok trackers called “pixels” in their websites. Then TikTok uses the information gathered by all those pixels to help the companies target ads at potential customers, and to measure how well their ads work. To look into TikTok’s use of online tracking, CR asked the security firm Disconnect to scan about 20,000 websites for the company’s pixels. In our list, we included the 1,000 most popular websites overall, as well as some of the biggest sites with domains ending in “.org,” “.edu,” and “.gov.” We wanted to look at those sites because they often deal with sensitive subjects. We found hundreds of organizations sharing data with TikTok.

If you go to the United Methodist Church’s main website, TikTok hears about it. Interested in joining Weight Watchers? TikTok finds that out, too. The Arizona Department of Economic Security tells TikTok when you view pages concerned with domestic violence or food assistance. Even Planned Parenthood uses the trackers, automatically notifying TikTok about every person who goes to its website, though it doesn’t share information from the pages where you can book an appointment. (None of those groups responded to requests for comment.) The number of TikTok trackers we saw was just a fraction of those we observed from Google and Meta. However, TikTok’s advertising business is exploding, and experts say the data collection will probably grow along with it.

After Disconnect researchers conducted a broad search for TikTok trackers, we asked them to take a close look at what kind of information was being shared by 15 specific websites. We focused on sites where we thought people would have a particular expectation of privacy, such as advocacy organizations and hospitals, along with retailers and other kinds of companies. Disconnect found that data being transmitted to TikTok can include your IP address, a unique ID number, what page you’re on, and what you’re clicking, typing, or searching for, depending on how the website has been set up. What does TikTok do with all that information? “Like other platforms, the data we receive from advertisers is used to improve the effectiveness of our advertising services,” says Melanie Bosselait, a TikTok spokesperson. The data “is not used to group individuals into particular interest categories for other advertisers to target.” If TikTok receives data about someone who doesn’t have a TikTok account, the company only uses that data for aggregated reports that it sends to advertisers about their websites, she says. There’s no independent way for consumers or privacy researchers to verify such statements. But TikTok’s terms of service say its advertising customers aren’t allowed to send the company certain kinds of sensitive information, such as data about children, health conditions, or finances. “We continuously work with our partners to avoid inadvertent transmission of such data,” TikTok’s Bosselait says.
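For readers unfamiliar with how such pixels report data, the sketch below shows the general shape of the event a third-party tracker sends when a visitor loads or interacts with a page. The field names and payload are hypothetical; this is not TikTok’s actual pixel API, only an illustration of the data flow Consumer Reports describes.

    # Illustrative only: the general shape of the event a third-party tracking
    # pixel reports when a visitor loads or interacts with a page. The field
    # names and payload are hypothetical, not TikTok's actual pixel API.
    import json
    import uuid

    def build_page_event(pixel_id: str, visitor_id: str, page_url: str,
                         event: str, detail: str = "") -> dict:
        """Assemble one visitor event for the analytics provider.

        In a real deployment, JavaScript embedded in the page sends this
        payload to the provider, which also sees the visitor's IP address
        and user agent simply by receiving the HTTP request.
        """
        return {
            "pixel_id": pixel_id,      # identifies the site owner's account
            "visitor_id": visitor_id,  # unique ID, typically set in a cookie
            "page_url": page_url,      # which page was viewed
            "event": event,            # e.g. "PageView", "Click", "Search"
            "detail": detail,          # e.g. text typed into a search box
        }

    # Example: the kind of event generated when a visitor searches a site
    # that embeds the pixel.
    print(json.dumps(build_page_event(pixel_id="PIXEL-123",
                                      visitor_id=str(uuid.uuid4()),
                                      page_url="https://example.org/search",
                                      event="Search",
                                      detail="food assistance"),
                     indent=2))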

America’s Funniest Home Surveillance Network Isn’t Funny

Amazon is normalizing neighborhood panopticons by turning its doorbell videos into a TV show. Orwell wouldn’t be laughing.

When smartphones first came on the scene, their built-in cameras were limited to personal use. Then social media sites like Facebook and Instagram created a beast that millions wanted to feed, and photos became a public spectacle. The same phenomenon is happening to doorbell cameras. Initially marketed to make customers feel safer in their homes, their footage is now being uploaded for entertainment. On TikTok, the hashtag Ringdoorbell has more than 2.7 billion views.

Amazon.com Inc., which owns market-dominating Ring, has seen and grabbed a lucrative opportunity, and is contributing to the gradual erosion of our privacy in the process.

On Monday, the company premiered Ring Nation, a television show syndicated across more than 70 American cities. Hosted by the comedian Wanda Sykes and produced by Metro-Goldwyn-Mayer, which Amazon finished buying in March, the 20-minute program features videos captured on smartphones and Amazon’s Ring doorbell cameras, which the company sells for about $105.

US Courts Must Stop Shielding Government Surveillance Programs From Accountability

Imagine the government has searched your home without a warrant or probable cause, rifling through your files, your bedroom dresser, your diary. You sue, arguing that the public record shows it violated your fourth amendment rights. The government claims that it has a defense, but that its defense is secret. The court dismisses the case.

That’s precisely what the federal government has increasingly said it can do in cases related to national security – under the so-called “state secrets privilege”. It can violate constitutional rights, and then defeat any effort at accountability by claiming that its defense is secret – without even showing its evidence to a court behind closed doors.

Children May Be Losing the Equivalent of One Night’s Sleep a Week From Social Media Use, Study Suggests

Children under 12 may be losing the equivalent of one night’s sleep every week due to excessive social media use, a new study suggests. Insider reports:
Almost 70% of the 60 children under 12 surveyed by De Montfort University in Leicester, UK, said they used social media for four hours a day or more. Two thirds said they used social media apps in the two hours before going to bed. The study also found that 12.5% of the children surveyed were waking up in the night to check their notifications.

Psychology lecturer John Shaw, who headed up the study, said children were supposed to sleep for between nine and 11 hours a night, per NHS guidelines, but those surveyed reported sleeping an average of 8.7 hours nightly. He said: “The fear of missing out, which is driven by social media, is directly affecting their sleep. They want to know what their friends are doing, and if you’re not online when something is happening, it means you’re not taking part in it. And it can be a feedback loop. If you are anxious you are more likely to be on social media, you are more anxious as a result of that. And you’re looking at something that’s stimulating and delaying sleep.”

“TikTok had the most engagement from the children, with 90% of those surveyed saying they used the app,” notes Insider. “Snapchat was used by 84%, while just over half those surveyed said they used Instagram.”

Sleepless Nights Make People More Selfish and Asocial, Study Finds

A study found losing just one hour of rest could kill people’s desire to help others, even relatives and close friends. The team noted that a bad night appeared to dampen activity in the part of the brain that encouraged social behavior. “We discovered that sleep loss acts as a trigger of asocial behavior, reducing the innate desire of humans to help one another,” said Prof Matthew Walker, co-author of the study at the University of California, Berkeley. “In a way, the less sleep you get, the less social and more selfish you become.” Writing in the PLoS Biology journal, the team suggest that a chronic sleep deficit could harm social bonds and compromise the altruistic instincts that shape society. “Considering the essentiality of humans helping in maintaining cooperative, civilized societies, together with the robust erosion of sleep time over the last 50 years, the ramifications of these discoveries are highly relevant to how we shape the societies we wish to live in,” said Walker.

The team examined the willingness of 160 participants to help others with a “self-reported altruism questionnaire”, which they completed after a night’s sleep. Participants responded to different social scenarios on a scale from “I would stop to help” to “I would ignore them.” In one experiment involving 24 participants, the researchers compared answers from the same person after a restful night and after 24 hours without sleep. The results revealed a 78% decline in self-reported eagerness to help others when tired. The team then performed brain scans of those participants and found a short night was associated with reduced activity in the social cognitive brain network, a region involved in social behavior. Participants were as reluctant to assist friends and family as strangers, the researchers said. “A lack of sleep impaired the drive to help others regardless of whether they were asked to help strangers or close relatives. That is, sleep loss triggers asocial, anti-helping behavior of a broad and indiscriminate impact,” said Walker.

To determine whether altruism takes a hit in the real world, the team then tracked more than 3 million charitable donations in the US before and after clocks were shifted an hour forward to daylight saving time, a transition that costs people an hour of sleep. They found a 10% drop in donations after the transition. “Our study adds to a growing body of evidence demonstrating that inadequate sleep not only harms the mental and physical wellbeing of an individual but also compromises the bonds between individuals, and even the altruistic sentiment of an entire nation,” said Walker. Luckily, we can catch up on sleep. Walker said: “The positive note emerging from all our studies is that once sleep is adequate and sufficient the desire to help others is restored. But it’s important to note that it is not only sleep duration that is relevant to helping. We found that the factor that was most relevant was actually sleep quality, above and beyond sleep quantity.”

YouTuber Trains AI On 4chan’s Most Hateful Board

YouTuber Yannic Kilcher trained an AI language model using three years of content from 4chan’s Politically Incorrect (/pol/) board, a place infamous for its racism and other forms of bigotry. After implementing the model in ten bots, Kilcher set the AI loose on the board — and it unsurprisingly created a wave of hate. In the space of 24 hours, the bots wrote 15,000 posts that frequently included or interacted with racist content. They represented more than 10 percent of posts on /pol/ that day, Kilcher claimed.

Nicknamed GPT-4chan (after OpenAI’s GPT-3), the model learned not only the words used in /pol/ posts, but also an overall tone that Kilcher said blended “offensiveness, nihilism, trolling and deep distrust.” The video creator took care to dodge 4chan’s defenses against proxies and VPNs, and even used a VPN to make it look like the bot posts originated from the Seychelles. The AI made a few mistakes, such as blank posts, but was convincing enough that it took roughly two days for many users to realize something was amiss. Many forum members only noticed one of the bots, according to Kilcher, and the model created enough wariness that people accused each other of being bots days after Kilcher deactivated them.
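As a rough illustration of the mechanism only (the excerpt doesn’t detail Kilcher’s actual pipeline), the sketch below fine-tunes a small causal language model on a plain-text corpus with the Hugging Face libraries. The base checkpoint, file name, and hyperparameters are placeholders.

    # Generic sketch of fine-tuning a causal language model on a scraped text
    # corpus with the Hugging Face libraries. The base model, file path, and
    # hyperparameters are placeholders; "corpus.txt" is a hypothetical file
    # with one post per line.
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    base_model = "gpt2"  # stand-in; any causal LM checkpoint works similarly
    tokenizer = AutoTokenizer.from_pretrained(base_model)
    tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token
    model = AutoModelForCausalLM.from_pretrained(base_model)

    # One training example per line of raw board text.
    dataset = load_dataset("text", data_files={"train": "corpus.txt"})["train"]

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=512)

    tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="finetuned-model",
                               per_device_train_batch_size=4,
                               num_train_epochs=1),
        train_dataset=tokenized,
        # mlm=False -> standard next-token (causal) language-modeling objective
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()
    trainer.save_model("finetuned-model")

Generating posts is then a matter of sampling from the fine-tuned model and submitting the output through ordinary web requests, which is where the proxy and VPN evasion described above comes in.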

“It’s a reminder that trained AI is only as good as its source material,” concludes the report.

Spyware Scandals Are Ripping Through Europe

The ripple effects of Greece’s spyware scandal, in which the phone of financial journalist Thanasis Koukakis was targeted, are reaching the heart of the European Union. Over the past 13 months, it has been revealed that spyware had targeted opposition leaders, journalists, lawyers and activists in France, Spain, Hungary, Poland and even staff within the European Commission, the EU’s cabinet-style government, between 2019 and 2021. The bloc has already set up an inquiry into its own use of spyware, but even as the 38-person committee works toward producing a report for early 2023, the number of new scandals is quickly mounting up. What sets the scandal in Greece apart is the company behind the spyware that was used. Until now, the surveillance software in every EU scandal could be traced back to one company, the notorious NSO Group. Yet the spyware stalking Koukakis’ phone was made by Cytrox, a company founded in the small European nation of North Macedonia and acquired in 2017 by Tal Dilian — an entrepreneur who achieved notoriety for driving a high-tech surveillance van around the island of Cyprus and showing a Forbes journalist how it could hack into passing people’s phones.

In that interview, Dilian said he had acquired Cytrox and absorbed the company into his intelligence company Intellexa, which is now thought to be based in Greece. The arrival of Cytrox into Europe’s ongoing scandal shows the problem is bigger than just the NSO Group. The bloc has a thriving spyware industry of its own. As the NSO Group struggles with intense scrutiny and being blacklisted by the US, its less well-known European rivals are jostling to take its clients, researchers say. Over the past two months, Cytrox has not been the only local company to generate headlines for hacking devices within the bloc. In June, Google discovered the Italian spyware vendor RCS Lab was targeting smartphones in Italy and Kazakhstan. Alberto Nobili, RCS’ managing director, told WIRED that the company condemns the misuse of its products but declined to comment on whether the cases cited by Google were examples of misuse. “RCS personnel are not exposed, nor participate in any activities conducted by the relevant customers,” he says. More recently, in July, spyware made by Austria’s DSIRF was detected by Microsoft hacking into law firms, banks, and consultancies in Austria, the UK, and Panama.
