Archives 29 September 2023

Signal President Says AI is Fundamentally ‘a Surveillance Technology’

Why is it that so many companies that rely on monetizing the data of their users seem to be extremely hot on AI? If you ask Signal president Meredith Whittaker (and I did), she’ll tell you it’s simply because “AI is a surveillance technology.” Onstage at TechCrunch Disrupt 2023, Whittaker explained her perspective that AI is largely inseparable from the big data and targeting industry perpetuated by the likes of Google and Meta, as well as less consumer-focused but equally prominent enterprise and defense companies. “It requires the surveillance business model; it’s an exacerbation of what we’ve seen since the late ’90s and the development of surveillance advertising. AI is a way, I think, to entrench and expand the surveillance business model,” she said.

“The Venn diagram is a circle.” “And the use of AI is also surveillant, right?” she continued. “You know, you walk past a facial recognition camera that’s instrumented with pseudo-scientific emotion recognition, and it produces data about you, right or wrong, that says ‘you are happy, you are sad, you have a bad character, you’re a liar, whatever.’ These are ultimately surveillance systems that are being marketed to those who have power over us generally: our employers, governments, border control, etc., to make determinations and predictions that will shape our access to resources and opportunities.”

‘Uber Was Supposed to Help Traffic. It Didn’t. Robotaxis Will Be Even Worse.’

On Saturday the San Francisco Chronicle published a joint opinion piece by MIT professor Carlo Ratti (who directs an MIT lab exploring the use of digital data about urban life) and John Rossant (founder of the collaborative data-sharing platform CoMotion).

Together they penned a warning about a future filled with robotaxis. “Their convenience could seduce us into vastly overusing our cars. The result? An artificial-intelligence-powered nightmare of traffic, technically perfect but awful for our cities.”

Why do we believe this? Because it has already come to pass with ride-sharing. In the 2010s, the Senseable City Lab at the Massachusetts Institute of Technology, where one of us serves as the director, was at the forefront of using Big Data to study how ride-hailing and ride-sharing could make our streets cleaner and more efficient. The findings appeared to be astonishing: With minimal delays to passengers, we could match riders and reduce the size of New York City taxi fleets by 40%. More people could get around in fewer cars for less money. We could reduce car ownership, and free up curbs and parking lots for new uses. This utopian vision was not only compelling but within reach.

After publishing our results, we started the first collaboration between MIT and Uber to research a then-new product: Uber Pool (now rebranded UberX Share), a service that allows riders to share cars when heading to similar destinations for a lower cost. Alas, there is no such thing as a free lunch. Our research was technically right, but we had not taken into account changes in human behavior. Cars are more convenient and comfortable than walking, buses and subways — and that is why they are so popular. Make them even cheaper through ride-sharing and people are coaxed away from those other forms of transit. This dynamic became clear in the data a few years later: On average, ride-hailing trips generated far more traffic and 69% more carbon dioxide than the trips they displaced. We were proud of our contribution to ride-sharing but dismayed to see the results of a 2018 study that found that Uber Pool was so cheap it increased overall city travel: For every mile of personal driving it removed, it added 2.6 miles of driving by people who otherwise would have taken another mode of transportation.

As robotaxis are on the cusp of proliferating across the world, we are about to repeat the same mistake, but at a far greater scale… [W]e cannot let a shiny new piece of technology drive us into an epic traffic jam of our own making. The best way to make urban mobility accessible, efficient and green is not about new technologies — neither self-driving cars nor electric ones — but old ones. Buses, subways, bikes and our own two feet are cleaner, cheaper and more efficient than anything Silicon Valley has dreamt up… Autonomous technology could, for example, allow cities to offer more buses, shuttles and other forms of public transit around the clock. That’s because the availability of on-demand AVs could assure “last-mile” connections between homes and transit stops. It could also be a godsend for older people and those with disabilities. However, any scale-up of AVs should be counterbalanced with investments in mass transit and improvements in walkability.

Above all, we must put in place smart regulatory and tax regimes that allow all sustainable mobility modes — including autonomous services — to scale safely and intelligently. They should include, for example, congestion fees to discourage overuse of individual vehicles.
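
As a rough illustration of the induced-travel arithmetic quoted above — the 2018 finding that every mile of personal driving removed was replaced by 2.6 miles of new travel — here is a minimal Python sketch. The mileage fed into it is invented purely for illustration and does not come from the studies the authors cite.

```python
# Back-of-the-envelope sketch of the induced-travel arithmetic described above.
# The 2.6:1 ratio is the figure quoted from the 2018 study; the mileage input
# below is a made-up number used only to show how the ratio plays out.

def net_vehicle_miles(personal_miles_removed: float, added_per_removed: float = 2.6) -> float:
    """Return the net change in vehicle miles traveled (positive = more traffic)."""
    miles_added = personal_miles_removed * added_per_removed
    return miles_added - personal_miles_removed

if __name__ == "__main__":
    removed = 1_000  # hypothetical: 1,000 miles of personal driving replaced by pooled rides
    print(f"Net change in vehicle miles: {net_vehicle_miles(removed):+.0f}")
    # -> +1600, i.e. each displaced mile of driving yields 1.6 extra miles of traffic
```

In other words, under the quoted ratio the service does not merely shift existing driving into shared cars; it adds more vehicle miles than it removes, which is the dynamic the authors warn robotaxis could repeat at larger scale.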

Almost everyone in Europe is breathing toxic air

Analysis of data gathered using cutting-edge methodology — including detailed satellite images and measurements from more than 1,400 ground monitoring stations — reveals a dire picture of dirty air, with 98% of people living in areas with highly damaging fine particulate pollution that exceeds World Health Organization guidelines. Almost two-thirds live in areas where air quality is more than double the WHO’s guidelines.

The worst-hit country in Europe is North Macedonia. Almost two-thirds of people across the country live in areas with more than four times the WHO guidelines for PM2.5, while four areas were found to have air pollution almost six times the figure, including its capital, Skopje. Eastern Europe fares significantly worse than western Europe, apart from Italy, where more than a third of those living in the Po valley and surrounding areas in the north of the country breathe air that is four times the WHO figure for the most dangerous airborne particulates.
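
To see how multiples like “double” or “four times” the guideline translate into concentrations, here is a minimal Python sketch. It assumes the WHO 2021 annual-mean PM2.5 guideline of 5 µg/m³; the sample readings are hypothetical and are not the article’s underlying data.

```python
# Minimal sketch of the thresholding behind figures like "double" or "four times"
# the WHO guideline. Assumes the WHO 2021 annual-mean PM2.5 guideline of 5 µg/m³;
# the readings below are invented for illustration, not the analysis's real data.

WHO_PM25_ANNUAL = 5.0  # µg/m³, WHO 2021 air quality guideline (annual mean)

def guideline_multiple(pm25_annual_mean: float) -> float:
    """Return how many times the WHO annual PM2.5 guideline a reading represents."""
    return pm25_annual_mean / WHO_PM25_ANNUAL

# Hypothetical annual-mean readings (µg/m³) for a few illustrative areas
readings = {"Area A": 9.8, "Area B": 21.5, "Area C": 31.0}
for area, value in readings.items():
    print(f"{area}: {guideline_multiple(value):.1f}x the WHO guideline")
```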

Secrecy undermines trust in Google antitrust trial

Before a single witness could utter a word of testimony in the Google antitrust case on Tuesday, the public and the press were temporarily barred from the courtroom. It’s just another entry in a long list of anti-transparency measures stymying access to the case: documents and testimony have been repeatedly sealed; exhibits used in open court have been removed from the internet; and only those who can actually make it to the courtroom are permitted to listen to the testimony (when they’re allowed in at all, that is).

Despite these restrictions, reporters and courtwatchers have been doing their best to inform their audiences about the trial. But if the federal judge presiding over the case, Amit Mehta, doesn’t act soon to stop this tsunami of secrecy, people may be left mostly in the dark about the biggest antitrust lawsuit of the 21st century.

Behind this anti-transparency push are Google and other big tech companies arguing that letting people observe the case fully could reveal trade secrets or otherwise embarrass them by generating “clickbait.” There is some precedent for closing parts of trials or redacting court documents to avoid disclosing trade secrets. But not to save corporations from embarrassment.