Archives 2023

Robot Crushes Man To Death After Misidentifying Him As a Box

A robot in a South Korean distribution center crushed a man to death after the machine apparently failed to differentiate him from the boxes of produce it was handling. The Guardian reports:
The man, a robotics company worker in his 40s, was inspecting the robot’s sensor operations at a distribution centre for agricultural produce in South Gyeongsang province. The industrial robot, which was lifting boxes filled with bell peppers and placing them on a pallet, appears to have malfunctioned and identified the man as a box, Yonhap reported, citing the police. The robotic arm pushed the man’s upper body down against the conveyor belt, crushing his face and chest, according to Yonhap. He was transferred to the hospital but died later, the report said.
The BBC notes that the man was “checking the robot’s sensor operations ahead of its test run […] scheduled for November 8.” It was originally planned for November 6th, “but was pushed back by two days due to problems with the robot’s sensor,” the report adds.

Court Rules Automakers Can Record and Intercept Owner Text Messages

A federal judge on Tuesday declined to revive a class action lawsuit alleging four auto manufacturers had violated Washington state’s privacy laws by using vehicles’ on-board infotainment systems to record and intercept customers’ private text messages and mobile phone call logs. The Seattle-based appellate judge ruled that the practice does not meet the threshold for an illegal privacy violation under state law, handing a big win to automakers Honda, Toyota, Volkswagen and General Motors, which are defendants in five related class action suits focused on the issue. One of those cases, against Ford, had been dismissed on appeal previously.

The plaintiffs in the four live cases had appealed a prior judge’s dismissal. But the appellate judge ruled Tuesday that the interception and recording of mobile phone activity did not meet the Washington Privacy Act’s standard that a plaintiff must prove that “his or her business, his or her person, or his or her reputation” has been threatened. In an example of the issues at stake, plaintiffs in one of the five cases filed suit against Honda in 2021, arguing that beginning in at least 2014 infotainment systems in the company’s vehicles began downloading and storing a copy of all text messages on smartphones when they were connected to the system. An Annapolis, Maryland-based company, Berla Corporation, provides the technology to some car manufacturers but does not offer it to the general public, the lawsuit said. Once messages are downloaded, Berla’s software makes it impossible for vehicle owners to access their communications and call logs but does provide law enforcement with access, the lawsuit said.

Generative AI Already Taking White Collar Jobs and Wages in Online Freelancing World

In an ingenious study published this summer, US researchers showed that within a few months of the launch of ChatGPT, copywriters and graphic designers on major online freelancing platforms saw a significant drop in the number of jobs they got, and even steeper declines in earnings. This suggested not only that generative AI was taking their work, but also that it devalued the work they still carried out.

Most strikingly, the study found that freelancers who previously had the highest earnings and completed the most jobs were no less likely to see their employment and earnings decline than other workers. If anything, they had worse outcomes. In other words, being more skilled was no shield against loss of work or earnings. But the online freelancing market covers a very particular form of white-collar work and of labour market. What about looking higher up the ranks of the knowledge worker class? For that, we can turn to a recent, fascinating Harvard Business School study, which monitored the impact of giving GPT-4, OpenAI’s latest and most advanced offering, to employees at Boston Consulting Group.

Is India Setting a ‘Global Standard’ for Online Censorship of Social Media?

With 1.4 billion people, India is the second most populous country in the world.

But a new article in the Washington Post alleges that India has “set a global standard for online censorship.” For years, a committee of executives from U.S. technology companies and Indian officials convened every two weeks in a government office to negotiate what could — and could not — be said on Twitter, Facebook and YouTube. At the “69A meetings,” as the secretive gatherings were informally called, officials from India’s information, technology, security and intelligence agencies presented social media posts they wanted removed, citing threats to India’s sovereignty and national security, executives and officials who were present recalled. The tech representatives sometimes pushed back in the name of free speech…

But two years ago, these interactions took a fateful turn. Where officials had once asked for a handful of tweets to be removed at each meeting, they now insisted that entire accounts be taken down, with numbers running into the hundreds. Executives who refused the government’s demands could now be jailed, their companies expelled from the Indian market. New regulations had been adopted that year to hold tech employees in India criminally liable for failing to comply with takedown requests, a rule that executives referred to as a “hostage provision.” After authorities dispatched anti-terrorism police to Twitter’s New Delhi office, Twitter whisked its top India executive out of the country, fearing his arrest, former company employees recounted.

Indian officials say they have accomplished something long overdue: strengthening national laws to bring disobedient foreign companies to heel… Digital and human rights advocates warn that India has perfected the use of regulations to stifle online dissent and already inspired governments in countries as varied as Nigeria and Myanmar to craft similar legal frameworks, at times with near-identical language. India’s success in taming internet companies has set off “regulatory contagion” across the world, according to Prateek Waghre, a policy director at India’s Internet Freedom Foundation…

Despite the huge size of China’s market, companies like Twitter and Facebook were forced to steer clear of the country because Beijing’s rules would have required them to spy on users. That left India as the largest potential growth market. Silicon Valley companies were already committed to doing business in India before the government began to tighten its regulations, and today say they have little choice but to obey if they want to remain there.
The Post spoke to Rajeev Chandrasekhar, the deputy technology minister in the BJP government who oversees many of the new regulations, who argued “The shift was really simple: We’ve defined the laws, defined the rules, and we have said there is zero tolerance to any noncompliance with the Indian law…

“You don’t like the law? Don’t operate in India,” Chandrasekhar added. “There is very little wiggle room.”

Will ‘News Influencers’ Replace Traditional Media?

The Washington Post looks at the “millions of independent creators reshaping how people get their news, especially the youngest viewers.” News consumption hit a tipping point around the globe during the early days of the coronavirus pandemic, with more people turning to social media platforms such as TikTok, YouTube and Instagram than to websites maintained by traditional news outlets, according to the latest Digital News Report by the Reuters Institute for the Study of Journalism. One in 5 adults under 24 uses TikTok as a source for news, the report said, up five percentage points from last year. According to Britain’s Office of Communications, young adults in the United Kingdom now spend more time watching TikTok than broadcast television. This shift has been driven in part by a desire for “more accessible, informal, and entertaining news formats, often delivered by influencers rather than journalists,” the Reuters Institute report says, adding that consumers are looking for news that “feels more relevant….”

While a few national publications such as the New York Times and The Washington Post have seen their digital audiences grow, allowing them to reach hundreds of thousands more readers than they did a decade ago, the economics of journalism have shifted. Well-known news outlets have seen a decline in the amount of traffic flowing to them from social media sites, and some of the money that advertisers previously might have spent with them is now flowing to creators. Even some outlets that began life on the internet have struggled, with BuzzFeed News shuttering in April, Vice entering into bankruptcy and Gawker shutting down for a second time in February. The trend is likely to continue. “There are no reasonable grounds for expecting that those born in the 2000s will suddenly come to prefer old-fashioned websites, let alone broadcast and print, simply because they grow older,” Reuters Institute Director Rasmus Kleis Nielsen said in the report, which is based on an online survey of roughly 94,000 adults in 46 national markets, including the United States…

While many online news creators are, like Al-Khatahtbeh, trained journalists collecting new information, others are aggregators and partisan commentators sometimes masquerading as journalists. The transformation has made the public sphere much more “chaotic and contradictory,” said Jay Rosen, an associate professor of journalism at New York University and author of the PressThink blog, adding that it has never been easier to be both informed and misinformed about world events. “The internet makes possible much more content, and reaching all kinds of people,” Rosen said. “But it also makes disinformation spread.”
The article notes that “some content creators don’t follow the same ethical guidelines that are guideposts in more traditional newsrooms, especially creators who seek to build audiences based on outrage.”

The article also points out that “The ramifications for society are still coming into focus.”

US Schools Are Normalizing Intrusive Surveillance

As the authors detail, among the technologies are surveillance cameras. These are often linked to software for facial recognition, access control, behavior analysis, and weapon detection. That is, cameras scan student faces and then algorithms identify them, allow or deny them entry based on that ID, decide if their activities are threatening, and determine if objects they carry may be dangerous or forbidden.

“False hits, such as mistaking a broomstick, three-ring binder, or a Google Chromebook laptop for a gun or other type of weapon, could result in an armed police response to a school,” cautions the report.

That’s not a random assortment of harmless-until-misidentified items; a footnoted 2022 Charlotte Observer piece points out such objects were tagged as weapons by scanners in the Charlotte-Mecklenburg Schools. “A how-to video posted earlier this year by administrators at Butler High School instructs students to remove certain belongings from their backpacks — and walk through the scanner holding their laptops above their heads — to avoid setting off a false alarm,” it adds.

Huh. What happens if behavior analysis algorithms decide that brandished laptops are threatening?

Also called out is software that monitors social media, students’ communications, and web-surfing habits. Audio monitors that are supposed to detect gunshots—but can be triggered by slammed doors (as at Greenwood High School in Arkansas earlier this year)—also feature in many schools.

Of students aged 14–18 surveyed by the ACLU, 62 percent saw video cameras in their schools (the U.S. Department of Education says cameras are used by 91 percent of public schools), and 49 percent reported monitoring software. Understandably, this affects their behavior. Thirty-two percent say, “I always feel like I’m being watched,” and 26 percent fret over what their “school and the companies they contract with do with the data.”

“Research demonstrates the damaging effect of surveillance on children’s ability to develop in healthy ways,” Fedders added. “Pervasive surveillance can create a climate in which adults are seen as overestimating and overreacting to risk. Children, in turn, cannot develop the ability to evaluate and manage risk themselves in order to function effectively.”

Notably, school surveillance normalizes the idea that constant monitoring is good and necessary for preserving safety.

New York’s Airbnb Ban Is Bolstering a Rental Black Market

As few as 2 percent of New York City’s previous 22,000 short-term rentals on Airbnb have been registered with the city since a new law banning most listings came into effect in early September. But many illegal short-term rental listings are now being advertised on social media and lesser known platforms, with some still seemingly being listed on Airbnb itself. The number of short-term listings on Airbnb has fallen by more than 80 percent, from 22,434 in August to just 3,227 by October 1, according to Inside Airbnb, a watchdog group that tracks the booking platform. But just 417 properties have been registered with the city, suggesting that very few of the city’s short-term rentals have been able to get permission to continue operating.
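The percentages above follow directly from Inside Airbnb’s listing counts. A quick sanity check, using only the figures quoted in the piece:

```python
# Figures quoted above, from Inside Airbnb's tracking data.
listings_aug = 22_434   # short-term Airbnb listings in August
listings_oct = 3_227    # listings remaining by October 1
registered = 417        # properties registered with the city

drop = (listings_aug - listings_oct) / listings_aug
registered_share = registered / listings_aug

print(f"Listings fell by {drop:.1%}")               # ~85.6%, i.e. "more than 80 percent"
print(f"Registered share: {registered_share:.1%}")  # ~1.9%, i.e. "as few as 2 percent"
```

Both of the article’s rounded claims check out against the raw counts.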

The crackdown in New York has created a “black market” for short-term rentals in the city, claims Lisa Grossman, a spokesperson for Restore Homeowner Autonomy and Rights (RHOAR), a local group that opposed the law. Grossman says she’s seen the short-term rental market pick up steam on places like Facebook since the ban. “People are going underground,” she says. New York’s crackdown on short-term rentals has dramatically reshaped the vacation rental market in the city. People are using sites like Craigslist, Facebook, Houfy, and others, where they can search for guests or places to book without the checks and balances of booking platforms like Airbnb. Hotel prices are expected to rise with more demand.
After the rule change, Airbnb CEO Brian Chesky said the company would be shifting attention away from New York, which was once its biggest market.

YouTube Passes Netflix As Top Video Source For Teens

Teens polled by the bank said they spent 29.1% of their daily video consumption time on Google-owned YouTube, beating out Netflix for the first time at 28.7%. Time on YouTube rose since the spring, adding nearly a percentage point, while Netflix fell more than two percentage points. The data point shows that the streaming business is getting more competitive, and highlights YouTube’s strong position as a free provider of online video, especially among young people.

Half a Billion Cheap Electrical Items Go To UK Landfills in a Year, Research Finds

The not-for-profit organisation Material Focus, which conducted the research, said the scale of the issue was huge and they wanted to encourage more recycling. More than half a billion cheaply priced electronic goods were bought in the UK in the past year alone — 16 per second. Material Focus findings showed that of these items, 471m were thrown away. This included 260m disposable vapes, 26m cables, 29m LED, solar and decorative lights, 9.8m USB sticks, and 4.8m miniature fans.

Scott Butler, executive director at Material Focus, described it as “fast tech.” He said: “People should think carefully about buying some of the more frivolous … items in the first place.” He said the items people bought were often “cheap and small,” and that consumers may not realise they contain valuable materials that could be salvaged if recycled. Small electricals can contain precious materials including copper, lithium and stainless steel. These components can be recycled and used in wind turbines, medical devices and electric vehicles. Material Focus said that while people were used to the idea of recycling larger electrical items such as fridges, lots of smaller devices were left unused in houses.

People Send 20 Billion Pounds of “Invisible” E-Waste To Landfills Each Year

One e-toy for every person on Earth — that’s the staggering number of electric trains, drones, talking dolls, R/C cars, and other children’s gadgets tossed into landfills every year. Some of what consumers consider to be e-waste — electronics such as computers, smartphones, TVs, and speaker systems — are the usual suspects. Others, like power tools, vapes, LED accessories, USB cables, and countless other “nontraditional” items built around rechargeable lithium batteries, are less obviously in need of special disposal. In all, people across the world throw out roughly 9 billion kilograms (19.8 billion pounds) of e-waste commonly not recognized as such by consumers.
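The kilograms-to-pounds figure above is a straight unit conversion; a one-line check (the conversion factor is the standard one, not taken from the article):

```python
KG_TO_LB = 2.20462            # standard kilograms-to-pounds conversion factor
invisible_ewaste_kg = 9e9     # UNITAR estimate quoted above

invisible_ewaste_lb = invisible_ewaste_kg * KG_TO_LB
print(f"{invisible_ewaste_lb / 1e9:.1f} billion pounds")  # 19.8
```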

This “invisible e-waste” is the focal point of the sixth annual International E-Waste Day on October 14, organized by Waste Electrical and Electronic Equipment (WEEE) Forum. In anticipation of the event, the organization recently commissioned the United Nations Institute for Training and Research (UNITAR) to delve into just how much unconventional e-waste is discarded every year — and global population numbers are just some of the ways to visualize the issue.

According to UNITAR’s findings, for example, the total weight of all e-cig vapes thrown away every year roughly equals 6 Eiffel Towers. Meanwhile, the total weight of all invisible e-waste tallies up to “almost half a million 40 [metric ton] trucks,” enough to create a bumper-to-bumper traffic jam stretching approximately 3,504 miles — the distance between Rome and Nairobi. From a purely economic standpoint, nearly $10 billion in essential raw materials is literally thrown into the garbage every year.

UK Opposition Leader Targeted By AI-Generated Fake Audio Smear

An audio clip posted to social media on Sunday, purporting to show Britain’s opposition leader Keir Starmer verbally abusing his staff, has been debunked as AI-generated by private-sector and British government analysis. The audio was posted on X (formerly Twitter) by a pseudonymous account on Sunday morning, the opening day of the Labour Party conference in Liverpool. The account asserted that the clip, which has now been viewed more than 1.4 million times, was genuine, and that its authenticity had been corroborated by a sound engineer.

Ben Colman, the co-founder and CEO of Reality Defender — a deepfake detection business — disputed this assessment when contacted by Recorded Future News: “We found the audio to be 75% likely manipulated based on a copy of a copy that’s been going around (a transcoding). As we don’t have the ground truth, we give a probability score (in this case 75%) and never a definitive score (‘this is fake’ or ‘this is real’), leaning much more towards ‘this is likely manipulated’ than not,” said Colman. “It is also our opinion that the creator of this file added background noise to attempt evasion of detection, but our system accounts for this as well,” he said.

Why I’m a Luddite – and Why You Should Be One Too

Los Angeles Times technology columnist Brian Merchant has written a book about the 1811 Luddite rebellion against industrial technology, a revolt against the “entrepreneurs and industrialists pushing for new, dubiously legal, highly automated and labor-saving modes of production.”

In a new piece he applauds the spirit of the Luddites. “The kind of visionaries we need now are those who see precisely how certain technologies are causing harm and who resist them when necessary.”
The parallels to the modern day are everywhere. In the 1800s, entrepreneurs used technology to justify imposing a new mode of work: the factory system. In the 2000s, CEOs used technology to justify imposing a new mode of work: algorithmically organized gig labor, in which pay is lower and protections scarce. In the 1800s, hosiers and factory owners used automation less to overtly replace workers than to deskill them and drive down their wages. Digital media bosses, call center operators and studio executives are using AI in much the same way. Then, as now, the titans used technology both as a new mode of production and as an idea that allowed them to ignore long-standing laws and regulations. In the 1800s, this might have been a factory boss arguing that his mill exempted him from a statute governing apprentice labor. Today, it’s a ride-hailing app that claims to be a software company so it doesn’t have to play by the rules of a cab firm.

Then, as now, leaders dazzled by unregulated technologies ignored their potential downsides. Then, it might have been state-of-the-art water frames that could produce an incredible volume of yarn — but needed hundreds of vulnerable child laborers to operate. Today, it’s a cellphone or a same-day delivery, made possible by thousands of human laborers toiling in often punishing conditions.

Then, as now, workers and critics sounded the alarm…

Resistance is gathering again, too. Amazon workers are joining union drives despite intense opposition. Actors and screenwriters are striking and artists and illustrators have called for a ban of generative AI in editorial outlets. Organizing, illegal in the Luddites’ time, has historically proved the best bulwark against automation. But governments must also step up. They must offer robust protections and social services for those in precarious positions. They must enforce antitrust laws. Crucially, they must develop regulations to rein in the antidemocratic model of technological development wherein a handful of billionaires and venture capital firms determine the shape of the future — and who wins and loses in it.

The clothworkers of the 1800s had the right idea: They believed everyone should share in the bounty of the amazing technologies their work makes possible.

That’s why I’m a Luddite — and why you should be one, too.

So whatever happened to the Luddites? The article reminds readers that the factory system “took root,” and “brought prosperity for some, but it created an immiserated working class.

“The 200 years since have seen breathtaking technological innovation — but much less social innovation in how the benefits are shared.”

School surveillance tech does more harm than good, ACLU report finds

An ACLU report has found that, despite claims from companies, surveillance technology in US schools does not improve student safety, and that constant surveillance can in fact harm students in a number of ways, including making them less likely to report dangerous behavior.

Schools typically use technologies such as cameras, facial recognition software and communication monitoring and filtering technology, which have been marketed by education technology surveillance companies as intervention tools against school shootings, suicides and bullying. In 2021, US schools and colleges spent $3.1bn on these products and this number is expected to grow by 8% every year, according to the report.
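The report’s growth figure implies simple compound growth from the 2021 baseline. A minimal sketch, assuming the quoted 8% annual rate holds constant (the projection year is illustrative, not from the report):

```python
spend_2021 = 3.1e9  # US school/college spend on surveillance products in 2021
growth = 0.08       # projected annual growth rate from the report


def projected_spend(years_after_2021: int) -> float:
    """Project spend n years past 2021 under constant 8% compound growth."""
    return spend_2021 * (1 + growth) ** years_after_2021


print(f"2025 projection: ${projected_spend(4) / 1e9:.1f}bn")  # $4.2bn
```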

But the ACLU’s report concludes that there is little to no independent research or evidence showing that this technology works.

Elon Musk’s Neuralink Torturing Animals

The tan macaque with the hairless pink face could do little more than sit and shiver as her brain began to swell. The California National Primate Center staff observing her via livestream knew the signs. Whatever had been done had left her with a “severe neurological defect,” and it was time to put the monkey to sleep. But the client protested; the Neuralink scientist whose experiment left the 7-year-old monkey’s brain mutilated wanted to wait another day. And so they did.

As the attending staff sat back and observed, the monkey seized and vomited. Her pupils reacted less and less to the light. Her right leg went limp, and she could no longer support the weight of her 15-pound body without gripping the bars of her cage. One attendant moved a heat lamp beside her to try to stop her shaking. Sometimes she would wake and scratch at her throat, retching and gasping for air, before collapsing again, exhausted.

An autopsy would later reveal that the mounting pressure inside her skull had deformed and ruptured her brain. A toxic adhesive around the Neuralink implant bolted to her skull had leaked internally. The resulting inflammation had caused painful pressure on a part of the brain producing cerebrospinal fluid, the slick, translucent substance in which the brain sits normally buoyant. The hind quarter of her brain visibly poked out of the base of her skull.

On September 13, 2018, she was euthanized, records obtained by WIRED show. This episode, regulators later acknowledged, was a violation of the US Animal Welfare Act, a federal law meant to set minimally acceptable standards for the handling, housing, and feeding of research animals. There would be no consequences, however. Between 2016 and 2021, the United States Department of Agriculture (USDA) enforced the humane treatment of animals through what it called “teachable moments.” Because the center—home to a colony of nearly 5,000 primates run by the University of California–Davis—had proactively reported the violation, it could not be legally cited.

And neither could Neuralink. “If you want to split hairs,” a former employee tells WIRED, “the implant itself did not cause death. We sacrificed her to end her suffering.” The employee, who signed a confidentiality agreement, asked not to be identified.

Microsoft Needs So Much Power to Train AI That It’s Considering Small Nuclear Reactors

Training large language models is an incredibly power-intensive process with an immense carbon footprint. Keeping data centers running requires a ludicrous amount of electricity that could generate substantial greenhouse emissions — depending, of course, on the energy’s source. Now, the Verge reports, Microsoft is betting so big on AI that it’s pushing forward with a plan to power its data centers using nuclear reactors. Yes, you read that right: a recent job listing suggests the company is planning to grow its energy infrastructure with the use of small modular reactors (SMRs)…

But before Microsoft can start relying on nuclear power to train its AIs, it’ll have plenty of other hurdles to overcome. For one, it’ll have to source a working SMR design. Then, it’ll have to figure out how to get its hands on a highly enriched uranium fuel that these small reactors typically require, as The Verge points out. Finally, it’ll need to figure out a way to store all of that nuclear waste long term…

Beyond nuclear fission, Microsoft is also investing in nuclear fusion, a far more ambitious endeavor given the many decades of research that have yet to yield a practical power system. Nevertheless, the company signed a power purchase agreement earlier this year with Helion, a fusion startup backed by OpenAI CEO Sam Altman, in the hopes of buying electricity from it as soon as 2028.

Signal President Says AI is Fundamentally ‘a Surveillance Technology’

Why is it that so many companies that rely on monetizing the data of their users seem to be extremely hot on AI? If you ask Signal president Meredith Whittaker (and I did), she’ll tell you it’s simply because “AI is a surveillance technology.” Onstage at TechCrunch Disrupt 2023, Whittaker explained her perspective that AI is largely inseparable from the big data and targeting industry perpetuated by the likes of Google and Meta, as well as less consumer-focused but equally prominent enterprise and defense companies. “It requires the surveillance business model; it’s an exacerbation of what we’ve seen since the late ’90s and the development of surveillance advertising. AI is a way, I think, to entrench and expand the surveillance business model,” she said.

“The Venn diagram is a circle.” “And the use of AI is also surveillant, right?” she continued. “You know, you walk past a facial recognition camera that’s instrumented with pseudo-scientific emotion recognition, and it produces data about you, right or wrong, that says ‘you are happy, you are sad, you have a bad character, you’re a liar, whatever.’ These are ultimately surveillance systems that are being marketed to those who have power over us generally: our employers, governments, border control, etc., to make determinations and predictions that will shape our access to resources and opportunities.”

‘Uber Was Supposed to Help Traffic. It Didn’t. Robotaxis Will Be Even Worse.’

Saturday the San Francisco Chronicle published a joint opinion piece from MIT professor Carlo Ratti (who directs an MIT digital lab exploring the collection of digital data about urban life) and John Rossant (founder of the collaborative data-sharing platform CoMotion).

Together they penned a warning about a future filled with robotaxis. “Their convenience could seduce us into vastly overusing our cars. The result? An artificial-intelligence-powered nightmare of traffic, technically perfect but awful for our cities.”
Why do we believe this? Because it has already come to pass with ride-sharing. In the 2010s, the Senseable City Lab at the Massachusetts Institute of Technology, where one of us serves as the director, was at the forefront of using Big Data to study how ride-hailing and ride-sharing could make our streets cleaner and more efficient. The findings appeared to be astonishing: With minimal delays to passengers, we could match riders and reduce the size of New York City taxi fleets by 40%. More people could get around in fewer cars for less money. We could reduce car ownership, and free up curbs and parking lots for new uses. This utopian vision was not only compelling but within reach.

After publishing our results, we started the first collaboration between MIT and Uber to research a then-new product: Uber Pool (now rebranded UberX Share), a service that allows riders to share cars when heading to similar destinations for a lower cost. Alas, there is no such thing as a free lunch. Our research was technically right, but we had not taken into account changes in human behavior. Cars are more convenient and comfortable than walking, buses and subways — and that is why they are so popular. Make them even cheaper through ride-sharing and people are coaxed away from those other forms of transit. This dynamic became clear in the data a few years later: On average, ride-hailing trips generated far more traffic and 69% more carbon dioxide than the trips they displaced. We were proud of our contribution to ride-sharing but dismayed to see the results of a 2018 study that found that Uber Pool was so cheap it increased overall city travel: For every mile of personal driving it removed, it added 2.6 miles of people who otherwise would have taken another mode of transportation.
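The 2018 study’s figure implies a straightforward net effect per displaced mile of personal driving, using only the numbers quoted above:

```python
miles_removed = 1.0  # personal/transit driving displaced per the 2018 study's ratio
miles_added = 2.6    # ride-hailing miles generated per displaced mile

# Each displaced mile yields a net increase in vehicle miles traveled.
net_new_miles = miles_added - miles_removed
print(f"Net new vehicle miles per displaced mile: +{net_new_miles:.1f}")  # +1.6
```

In other words, under the study’s ratio, substituting ride-hailing for other modes adds 1.6 net vehicle miles for every mile it removes.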

As robotaxis are on the cusp of proliferating across the world, we are about to repeat the same mistake, but at a far greater scale… [W]e cannot let a shiny new piece of technology drive us into an epic traffic jam of our own making. The best way to make urban mobility accessible, efficient and green is not about new technologies — neither self-driving cars nor electric ones — but old ones. Buses, subways, bikes and our own two feet are cleaner, cheaper and more efficient than anything Silicon Valley has dreamt up… Autonomous technology could, for example, allow cities to offer more buses, shuttles and other forms of public transit around the clock. That’s because the availability of on-demand AVs could assure “last-mile” connections between homes and transit stops. It could also be a godsend for older people and those with disabilities. However, any scale-up of AVs should be counterbalanced with investments in mass transit and improvements in walkability.

Above all, we must put in place smart regulatory and tax regimes that allow all sustainable mobility modes — including autonomous services — to scale safely and intelligently. They should include, for example, congestion fees to discourage overuse of individual vehicles.

Almost everyone in Europe is breathing toxic air

Analysis of data gathered using cutting-edge methodology — including detailed satellite images and measurements from more than 1,400 ground monitoring stations — reveals a dire picture of dirty air, with 98% of people living in areas where highly damaging fine particulate pollution exceeds World Health Organization guidelines. Almost two-thirds live in areas where air quality is more than double the WHO’s guidelines.

The worst-hit country in Europe is North Macedonia. Almost two-thirds of people across the country live in areas with more than four times the WHO guidelines for PM2.5, while four areas were found to have air pollution almost six times the figure, including its capital, Skopje. Eastern Europe is significantly worse than western Europe, apart from Italy, where more than a third of those living in the Po valley and surrounding areas in the north of the country breathe air that is four times the WHO figure for the most dangerous airborne particulates.

Secrecy undermines trust in Google antitrust trial

Before a single witness could utter a word of testimony in the Google antitrust case on Tuesday, the public and the press were temporarily barred from the courtroom. It’s just another step in a long list of anti-transparency measures stymying access to the case: documents and testimony have been repeatedly sealed; exhibits used in open court have been removed from the internet; and only those who can actually make it to the courtroom are permitted to listen to the testimony (when they’re allowed in at all, that is).

Despite these restrictions, reporters and courtwatchers have been doing their best to inform their audiences about the trial. But if the federal judge presiding over the case, Amit Mehta, doesn’t act soon to stop this tsunami of secrecy, people may be left mostly in the dark about the biggest antitrust lawsuit of the 21st century.

Behind this anti-transparency push are Google and other big tech companies arguing that letting people observe the case fully could reveal trade secrets or otherwise embarrass them by generating “clickbait.” There is some precedent for closing parts of trials or redacting court documents to avoid disclosing trade secrets. But not to save corporations from embarrassment.

Internet-Connected Cars Fail Privacy and Security Tests

Mozilla found brands including BMW, Ford, Toyota, Tesla, and Subaru collect data about drivers including race, facial expressions, weight, health information, and where you drive. Some of the cars tested collected data you wouldn’t expect your car to know about, including details about sexual activity, race, and immigration status, according to Mozilla. […] The worst offender was Nissan, Mozilla said. The carmaker’s privacy policy suggests the manufacturer collects information including sexual activity, health diagnosis data, and genetic data, though there are no details about exactly how that data is gathered. Nissan reserves the right to share and sell “preferences, characteristics, psychological trends, predispositions, behavior, attitudes, intelligence, abilities, and aptitudes” to data brokers, law enforcement, and other third parties.

Other brands didn’t fare much better. Volkswagen, for example, collects your driving behaviors such as your seatbelt and braking habits and pairs that with details such as age and gender for targeted advertising. Kia’s privacy policy reserves the right to monitor your “sex life,” and Mercedes-Benz ships cars with TikTok pre-installed on the infotainment system, an app that has its own thicket of privacy problems. The privacy and security problems extend beyond the nature of the data car companies siphon off about you. Mozilla said it was unable to determine whether the brands encrypt any of the data they collect, and only Mercedes-Benz responded to the organization’s questions.

Mozilla also found that many car brands engage in “privacy washing,” or presenting consumers with information that suggests they don’t have to worry about privacy issues when the exact opposite is true. Many leading manufacturers are signatories to the Alliance for Automotive Innovation’s “Consumer Privacy Protection Principles (PDF).” According to Mozilla, these are a non-binding set of vague promises organized by the car manufacturers themselves. Questions around consent are essentially a joke as well. Subaru, for example, says that by being a passenger in the car, you are considered a “user” who has given the company consent to harvest information about you. Mozilla said a number of car brands say it’s the driver’s responsibility to let passengers know about their car’s privacy policies — as if the privacy policies are comprehensible to drivers in the first place. Toyota, for example, has a constellation of 12 different privacy policies for your reading pleasure.