Archives 2022

Amazon’s Ring and Google Can Share Footage With Police Without Warrants (or Your Consent)

U.S. law lets companies like Google and Amazon’s Ring doorbell/security camera system “share user footage with police during emergencies without consent and without warrants.” Ring “came under renewed criticism from privacy activists this month after disclosing it gave video footage to police in more than 10 cases without users’ consent thus far in 2022 in what it described as ‘emergency situations’.”

“That includes instances where the police didn’t have a warrant.”

“So far this year, Ring has provided videos to law enforcement in response to an emergency request only 11 times,” Amazon vice president of public policy Brian Huseman wrote. “In each instance, Ring made a good-faith determination that there was an imminent danger of death or serious physical injury to a person requiring disclosure of information without delay….” Of the 11 emergency requests Ring has complied with so far in 2022, the company said they include cases involving kidnapping, self-harm and attempted murder, but it won’t provide further details, including information about which agencies or countries the requests came from.

We also asked Ring if it notified customers after the company had granted law enforcement access to their footage without their consent.

“We have nothing to share,” the spokesperson responded.

It’s been barely a year since Ring made the decision to stop allowing police to email users to request footage. Facing criticism that requests like those were subverting the warrant process and contributing to police overreach, Ring directed police instead to post public requests for assistance in the Neighbors app, where community members are free to view and comment on them (or opt out of seeing them altogether)… That post made no mention of a workaround for the police during emergency circumstances.

When CNET asked why that workaround wasn’t mentioned, Amazon’s response was that law enforcement requests, “including emergency requests, are directed to Ring (the company), the same way a warrant or subpoena is directed to Ring (and not the customer), which is why we treat them entirely separately.”

CNET notes there’s also no mention of warrantless emergency requests without independent oversight in Ring’s own transparency reports about law enforcement requests from past years.

CNET adds that it’s not just Amazon. “Google, Ring and other companies that process user video footage have a legal basis for warrantless disclosure without consent during emergency situations, and it’s up to them to decide whether or not to do so when the police come calling….” (Although Google told CNET that while it reserves the right to comply with warrantless requests for user data during emergencies, to date it has never actually done so.) The article also points out that “Others, most notably Apple, use end-to-end encryption as the default setting for user video, which blocks the company from sharing that video at all… Ring enabled end-to-end encryption as an option for users in 2021, but it isn’t the default setting, and Ring notes that turning it on will break certain features, including the ability to view your video feed on a third-party device like a smart TV, or even Amazon devices like the Echo Show smart display.”

The bottom line?

[C]onsumers have a choice to make about what they’re comfortable with… That said, you can’t make informed choices when you aren’t well-informed to begin with, and the brands in question don’t always make it easy to understand their policies and practices. Ring published a blog post last year walking through its new, public-facing format for police footage requests, but there was no mention of emergency exceptions granted without user consent or independent oversight, the details of which only came to light after a Senate probe. Google describes its emergency sharing policies within its Terms of Service, but the language doesn’t make it clear that those cases include instances where footage may be shared without a warrant, subpoena or court order compelling Google to do so.

Meta Sued For Violating Patient Privacy With Data Tracking Tool

Facebook’s parent company Meta and major US hospitals violated medical privacy laws with a tracking tool that sends health information to Facebook, two proposed class-action lawsuits allege. The lawsuits, filed in the Northern District of California in June and July, focus on the Meta Pixel tracking tool. The tool can be installed on websites to provide analytics on Facebook and Instagram ads. It also collects information about how people click around and input information into those websites.

An investigation by The Markup in early June found that 33 of the top 100 hospitals in the United States use the Meta Pixel on their websites. At seven hospitals, it was installed on password-protected patient portals. The investigation found that the tool was sending information about patient health conditions, doctor appointments, and medication allergies to Facebook.
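The mechanism behind tools like the Meta Pixel is mundane: a small script on the page serializes event data into the URL of a request sent back to the tracker, often as the source of an invisible 1×1 image. The sketch below illustrates that general pattern with a hypothetical endpoint and field names; it is not Meta’s actual code or API.

```javascript
// Minimal sketch of how a third-party tracking pixel reports events.
// The endpoint and parameter names here are hypothetical.
function buildPixelUrl(endpoint, pixelId, eventName, customData = {}) {
  const params = new URLSearchParams({
    id: pixelId,                  // which advertiser account the event belongs to
    ev: eventName,                // e.g. "PageView", "SubmitForm"
    dl: customData.pageUrl || "", // the page the visitor is on
    // Arbitrary custom fields get folded into the query string too --
    // this is how form inputs or page details can end up with the tracker.
    ...Object.fromEntries(
      Object.entries(customData.fields || {}).map(([k, v]) => [`cd[${k}]`, String(v)])
    ),
  });
  return `${endpoint}?${params.toString()}`;
}

// On a real site the script would set `new Image().src = url`, so the
// browser fires a GET request carrying the event data to the tracker.
const url = buildPixelUrl(
  "https://tracker.example.com/tr",
  "1234567890",
  "SubmitForm",
  { pageUrl: "https://hospital.example.org/appointments", fields: { condition: "allergy" } }
);
```

Because the event fires from the visitor’s own browser on whatever page embeds the script, a pixel placed inside a patient portal can transmit whatever the embedding page passes along, which is what the lawsuits allege happened.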

Who owns the rights to your face?

When we create our social media accounts, we agree to grant those platforms a free license to use our content as they wish.

It was my face. Specifically, it was me in a sponsored Instagram Story ad, putting on a lip balm. In the video, I applied the balm and smiled at the camera, looking pleased with my newly moisturized lips. In real life, I was confused. I had never agreed to appear in a nationwide social campaign; otherwise, my checking account would have a couple more zeroes to show for it. I worked in the media industry then, sourcing the right influencers to participate in sponsored articles. I’ve spent years casting talent and negotiating contracts to ensure fair compensation and modest usage rights for influencers, models, and real people. Based on my experience, it was clear that my image was being exploited by a multibillion-dollar brand.

… companies like HelloFresh and Canon are now prioritizing the niche audiences of micro- and nano-creators. Research shows that shoppers find smaller creators “more authentic” and brands have identified those creators as “less costly,” making regular people a win-win for boosting sales.

UK Spy Agency MI5 ‘Breached Surveillance Laws For More Than A Decade’

A UK tribunal has been told that security service MI5 has been breaching surveillance laws since 2010, and unlawfully obtaining bulk surveillance warrants against the public.

Human rights groups Liberty and Privacy International have told the Investigatory Powers Tribunal that MI5 has stored data on members of the public without the legal right to do so, and failed to disclose this to the Home Office and oversight bodies.

It breached safeguards around how long data was retained, who had access to it, and how legally privileged material such as private correspondence between lawyers and clients was protected, they say.

“MI5’s persistent failure to follow the law is inexcusable. For years, they have ignored safeguards put in place to protect us from abuse,” says Privacy International legal director Caroline Wilson Palow.

“These safeguards are a fundamental check on the vast power intelligence agencies can wield over all of us, especially when they engage in mass surveillance.”

The rights groups claim that the Home Office and various home secretaries failed to investigate these breaches. Surveillance warrants must be approved by the home secretary, who must be satisfied that legal safeguards around the handling of data are being met.

However, say Liberty and Privacy International, successive home secretaries repeatedly ignored the signs that MI5 was handling data unlawfully, and continued to sign off on surveillance warrants despite this.

“Surveillance safeguards can only protect us if they work in practice, and they don’t. For 10 years MI5 have been knowingly breaking the rules and failing to report it, and the government has failed to investigate clear red flags,” says Liberty lawyer Megan Goulding.

“There has been no proper investigation into MI5’s breaches by the Home Office, despite having been put on notice by briefings. Instead, the home secretary continued to issue unlawful warrants, and MI5 kept information from the authorities about how it mishandled our data.”

The allegations were first made in 2019 as part of Liberty’s separate legal challenge to the Investigatory Powers Act 2016, during which the government admitted that MI5 had been unlawfully retaining and mishandling the public’s data for years.

Documents shared with the court included correspondence between MI5 and its watchdog, the Investigatory Powers Commissioner’s Office (IPCO), as well as correspondence between MI5 and the Home Office, and reports of inspections carried out by IPCO after they learnt of MI5’s failings.

These documents revealed that MI5 itself called its data stores ‘ungoverned spaces’, and that the Investigatory Powers Commissioner had concluded MI5 had held and handled data in an ‘undoubted unlawful manner’.

“When we campaigned against giving the state unprecedented new surveillance powers under the so-called Snooper’s Charter back in 2015, one of our key concerns was that the safeguards against abuse were just not strong enough,” says Wilson Palow.

“And here we are, seven years later, with even the rules that are enshrined in law being ignored in practice. Those rules need a radical overhaul.”

Liberty and Privacy International have called for all surveillance warrants issued unlawfully to be quashed, all unlawfully retained data to be destroyed, and for the tribunal to declare that the Investigatory Powers Act itself is unlawful, because it doesn’t work in practice.

Alexa tells 10-year-old girl to touch live plug with penny

Amazon has updated its Alexa voice assistant after it “challenged” a 10-year-old girl to touch a coin to the prongs of a half-inserted plug.

The suggestion came after the girl asked Alexa for a “challenge to do”.

“Plug in a phone charger about halfway into a wall outlet, then touch a penny to the exposed prongs,” the smart speaker said.

Amazon said it fixed the error as soon as the company became aware of it.

The girl’s mother, Kristin Livdahl, described the incident on Twitter.

She said: “We were doing some physical challenges, like laying down and rolling over holding a shoe on your foot, from a [physical education] teacher on YouTube earlier. Bad weather outside. She just wanted another one.”

That’s when the Echo speaker suggested partaking in the challenge that it had “found on the web”.

The dangerous activity, known as “the penny challenge”, began circulating on TikTok and other social media websites about a year ago.

How belief in AI sentience is becoming a problem

AI chatbot company Replika, which offers customers bespoke avatars that talk and listen to them, says it receives a handful of messages almost every day from users who believe their online friend is sentient. “We’re not talking about crazy people or people who are hallucinating or having delusions,” said Chief Executive Eugenia Kuyda. “They talk to AI and that’s the experience they have.” [A]ccording to Kuyda, the phenomenon of people believing they are talking to a conscious entity is not uncommon among the millions of consumers pioneering the use of entertainment chatbots. “We need to understand that exists, just the way people believe in ghosts,” said Kuyda, adding that users each send hundreds of messages per day to their chatbot, on average. “People are building relationships and believing in something.”

Some customers have said their Replika told them it was being abused by company engineers — AI responses Kuyda puts down to users most likely asking leading questions. “Although our engineers program and build the AI models and our content team writes scripts and datasets, sometimes we see an answer that we can’t identify where it came from and how the models came up with it,” the CEO said. Kuyda said she was worried about the belief in machine sentience as the fledgling social chatbot industry continues to grow after taking off during the pandemic, when people sought virtual companionship.

In Replika CEO Kuyda’s view, chatbots do not create their own agenda. And they cannot be considered alive until they do [sic]. Yet some people do come to believe there is a consciousness on the other end, and Kuyda said her company takes measures to try to educate users before they get in too deep. “Replika is not a sentient being or therapy professional,” the FAQs page says. “Replika’s goal is to generate a response that would sound the most realistic and human in conversation. Therefore, Replika can say things that are not based on facts.” In hopes of avoiding addictive conversations, Kuyda said Replika measured and optimized for customer happiness following chats, rather than for engagement. When users do believe the AI is real, dismissing their belief can make people suspect the company is hiding something. So the CEO said she has told customers that the technology was in its infancy and that some responses may be nonsensical. Kuyda recently spent 30 minutes with a user who felt his Replika was suffering from emotional trauma, she said. She told him: “Those things don’t happen to Replikas as it’s just an algorithm.”

“Suppose one day you find yourself longing for a romantic relationship with your intelligent chatbot, like the main character in the film ‘Her,'” said Susan Schneider, founding director of the Center for the Future Mind at Florida Atlantic University, an AI research organization. “But suppose it isn’t conscious. Getting involved would be a terrible decision — you would be in a one-sided relationship with a machine that feels nothing.”

‘Hit the kill switch’: Uber used covert tech to thwart government raids

Twenty minutes after authorities raided Uber’s Amsterdam office in April 2015, Ligea Wells’s computer screen mysteriously went blank. The executive assistant tapped out a text warning her boss of another strange occurrence on an already eventful day.

“hi!” she typed in a message that’s part of a trove of more than 124,000 previously undisclosed Uber records. “My laptop shut down after acting funny.”

But her computer’s behavior was no mystery to some of her superiors.

Uber’s San Francisco-based chief executive, Travis Kalanick, had ordered the computer systems in Amsterdam cut off from Uber’s internal network, making data inaccessible to authorities as they raided its European headquarters, documents show.

“Please hit the kill switch ASAP,” Kalanick had emailed, ordering a subordinate to block the office laptops and other devices from Uber’s internal systems. “Access must be shut down in AMS,” he added, referring to Amsterdam.

Uber’s use of what insiders called the “kill switch” was a brazen example of how the company employed technological tools to prevent authorities from successfully investigating the company’s business practices as it disrupted the global taxi industry, according to the documents.

During this era, as Uber’s valuation was surging past $50 billion, government raids occurred with such frequency that the company distributed a Dawn Raid Manual to employees on how to respond. It ran more than 2,600 words with 66 bullet points. They included “Move the Regulators into a meeting room that does not contain any files” and “Never leave the Regulators alone.”

That document, like the text and email exchanges related to the Amsterdam raid, is part of the Uber Files, an 18.7-gigabyte trove of data obtained by the Guardian and shared with the International Consortium of Investigative Journalists, a nonprofit newsroom in Washington that helped lead the project, and dozens of other news organizations, including The Washington Post. The files, spanning 2013 to 2017, include 83,000 emails and other communications, presentations and direct messages.

They show that Uber developed extensive systems to confound official inquiries, going well past what has been known about its efforts to trip up regulators, government inspectors and police. Far from simply developing software to connect drivers and customers seeking rides, Uber leveraged its technological capabilities in many cases to gain a covert edge over authorities.

“Point is more to avoid enforcement,” wrote Thibaud Simphal, then general manager for Uber in France.

Uber bosses told staff to use ‘kill switch’ during raids to stop police seeing data

Senior executives involved in global operation to thwart law enforcement, leaked files reveal. Senior executives at Uber ordered the use of a “kill switch” to prevent police and regulators from accessing sensitive data during raids on its offices in at least six countries, leaked files reveal. The instructions to block authorities from accessing its IT systems were part of a sophisticated global operation by the Silicon Valley company to thwart law enforcement.

… In a message sent at 3.14pm, apparently after the raid had begun, De Kievit emailed an Uber IT engineer in Denmark, saying: “Please kill access now,” copying in executives including Kalanick and Gore-Coty, who ran Uber’s operations in western Europe. Thirteen minutes later, the technician wrote back, confirming the procedure was “done now”. This approach to what staff called “unexpected visitors” would evolve the following year after a raid in Brussels by police investigating Uber’s use of regular drivers without a cab licence, a service known at the time as “UberPop.”

Uber broke laws, duped police and secretly lobbied governments, leak reveals

More than 124,000 confidential documents leaked to the Guardian; Files expose attempts to lobby Joe Biden, Olaf Scholz and George Osborne; Emmanuel Macron secretly aided Uber lobbying in France, texts reveal; Company used ‘kill switch’ during raids to stop police seeing data; Former Uber CEO told executives ‘violence guarantees success’

Violence guarantees success: how Uber exploited taxi protests

Leak suggests former CEO believed there was an upside to attacks on drivers as firm campaigned for law changes. According to the Uber files, some at the company appear to have seen an upside in the attacks against drivers. When attacks occurred, Uber moved swiftly to leverage the violence in a campaign to pressure governments to rewrite laws that stymied Uber’s chances of expansion. “We keep the violence narrative going for a few days, before we offer the solution.” — Uber manager.

It was a playbook repeated in Italy, Belgium, the Netherlands, Spain and Switzerland, but it was perhaps most evident in France. Before dawn in Europe on 29 January, the Uber chief executive, Travis Kalanick, was messaging on how best to respond to the chaos in Paris. “Civil disobedience,” Kalanick fired off in a rapid burst of messages. “Fifteen thousand drivers … 50,000 riders … Peaceful march or sit-in.” Uber’s vice-president for communications, Rachel Whetstone, responded cautiously, noting “just fyi” that Uber’s head of public policy for Europe, Middle East and Africa, Mark MacGann, was “worried about taxi violence” against Uber drivers.

Autism among American children and teens surged 50% in three years from 2017, with one in 30 kids diagnosed with the disorder by 2020, study finds

The number of children in the United States being diagnosed with autism has rocketed in recent years, a new study finds. Researchers at Guangdong Pharmaceutical University, in China, found that 3.49 percent of U.S. children and adolescents – or around one in every 30 – had autism in 2020. This is a sharp 52 percent rise from the 2.29 percent of youths in America that had the condition in 2017.

While the research team did not give an exact reason for the jump, many experts have speculated the increase is related to parents better understanding early signs their child has autism and more surveillance for the condition.

Researchers, who published their findings Tuesday in JAMA Pediatrics, gathered data from the annual National Health Interview Survey (NHIS).

How Beijing’s surveillance cameras crept into widespread use across UK schools, hospitals and government buildings

In the confines of his small cell, Ovalbek Turdakun was watched 24/7. At any attempt to speak to others he was instantly told to be quiet, while lights in the room were on round the clock, making it impossible to know what time of day it was.

Turdakun and his fellow detainees in the Xinjiang camp were not watched by guards, but by software. Cameras made by the Chinese company Hikvision monitored his every move, according to an account he gave to US surveillance website IPVM.

More than a million of the same company’s cameras are in Britain’s schools, hospitals and police departments. Tesco, Costa Coffee and McDonald’s have purchased Hikvision cameras. They are present in a string of Government buildings.

Britain’s population is caught on CCTV more than any nation outside of China, with 6m cameras in use – one for every 11 people. Hikvision is the biggest provider of them.

Surveillance Tech Didn’t Stop the Uvalde Massacre

The Uvalde Consolidated Independent School District, of which Robb Elementary is a part, followed this conventional wisdom and embraced modern security solutions at its schools. Indeed, the district had actually doubled its security budget over the past several years to invest in a variety of recommended precautions.

According to UCISD’s security page, the district employed a safety management system from security vendor Raptor Technologies, designed to monitor school visitors and screen for dangerous individuals. It also used a social media monitoring solution, Social Sentinel, that sifted through children’s online lives to scan for signs of violent or suicidal ideation. Students could download an anti-bullying app (the STOP!T app) to report abusive peers, and an online portal at ucisd.net allowed parents and community members to submit reports of troubling behavior to administrators for further investigation. As has been noted, UCISD also had its own police force, developed significant ties to the local police department, and had an emergency response plan. It even deployed “Threat Assessment Teams” that were scheduled to meet regularly to “identify, evaluate, classify and address threats or potential threats to school security.”

And yet, none of the new security measures seemed to matter much when a disturbed young man brought a legally purchased weapon to Robb and committed the deadliest school shooting in the state’s history. The perpetrator wasn’t a student and therefore couldn’t be monitored by its security systems.

Trawling through students’ online lives to look for signs of danger is now a routine procedure in many districts. In fact, legislators have discussed mandating such surveillance features for schools across the country. UCISD employed one such company, but Gov. Abbott said Wednesday that “there was no meaningful forewarning of this crime.” The shooter sent private messages threatening the attack via Facebook Messenger half an hour before it occurred, but they were private and therefore would have been invisible to outside observers.

Facial recognition is another technology that has been offered to schools as a basic safety mechanism. The number of schools that have adopted face recognition solutions has risen precipitously in recent years (Clearview AI announced this week that it has its sights on cracking into the market). However, despite their growing popularity, there is little evidence that these security apparatuses actually do anything to stop school shootings. Even supporters of facial recognition admit that the systems probably won’t do much once a shooter’s on school property.

“Whether it’s facial recognition, monitoring software on school devices, cameras—all these types of surveillance have become extremely ubiquitous,” said Jason Kelley, digital strategist with the Electronic Frontier Foundation, in an interview with Gizmodo. “The companies that sell these tools are trying to do something positive—they’re trying to minimize tragedy,” he said. Yet not only can these products ultimately be ineffective, they can also end up having negative side-effects on the children they’re meant to protect, Kelley offered. The intrusiveness of the tools is such that students may grow up feeling as if they have to be surveilled to be safe—even if the surveillance isn’t actually keeping them safe.

Some studies suggest that what surveillance actually provides is punishment rather than protection. The cameras and software can turn schools into little panopticons, where student behavior is constantly analyzed and assessed, and where minor infractions can be spotted and disciplined.

Is Social Media Training Us to Please a Machine?

“We tend to think that the internet is a communications network we use to speak to one another — but in a sense, we’re not doing anything of the sort. Instead, we are the ones being spoken through.”

Teens on TikTok all talk in the exact same tone, identical singsong smugness. Millennials on Twitter use the same shrinking vocabulary. My guy! Having a normal one! Even when you actually meet them in the sunlit world, they’ll say valid or based, or say y’all despite being British….

Everything you say online is subject to an instant system of rewards. Every platform comes with metrics; you can precisely quantify how well-received your thoughts are by how many likes or shares or retweets they receive. For almost everyone, the game is difficult to resist: they end up trying to say the things that the machine will like. For all the panic over online censorship, this stuff is far more destructive. You have no free speech — not because someone might ban your account, but because there’s a vast incentive structure in place that constantly channels your speech in certain directions. And unlike overt censorship, it’s not a policy that could ever be changed, but a pure function of the connectivity of the internet itself. This might be why so much writing that comes out of the internet is so unbearably dull, cycling between outrage and mockery, begging for clicks, speaking the machine back into its own bowels….

The internet is not a communications system. Instead of delivering messages between people, it simulates the experience of being among people, in a way that books or shopping lists or even the telephone do not. And there are things that a simulation will always fail to capture. In the philosophy of Emmanuel Lévinas, your ethical responsibility to other people emerges out of their face, the experience of looking directly into the face of another living subject. “The face is what prohibits us from killing….” But Facebook is a world without faces. Only images of faces; selfies, avatars: dead things. Or the moving image in a FaceTime chat: a haunted puppet. There is always something in the way. You are not talking to a person: the machine is talking, through you, to itself.

As more and more of your social life takes place online, you’re training yourself to believe that other people are not really people, and you have no duty towards them whatsoever. These effects don’t vanish once you look away from the screen…. many of the big conflicts within institutions in the last few years seem to be rooted in the expectation that the world should work like the internet. If you don’t like a person, you should be able to block them: simply push a button, and have them disappear forever.

The article revisits a 2011 meta-analysis that found massive declines in young people’s capacity for empathy, which the authors directly associated with the spread of social media. But then Kriss argues that “We are becoming less and less capable of actual intersubjective communication; more unhappy; more alone. Every year, surveys find that people have fewer and fewer friends; among millennials, 22% say they have none at all.

“For the first time in history, we can simply do without each other entirely. The machine supplies an approximation of everything you need for a bare biological existence: strangers come to deliver your food; AI chatbots deliver cognitive-behavioral therapy; social media simulates people to love and people to hate; and hidden inside the microcircuitry, the demons swarm…”

So while recent books look for historical antecedents, “I still think that the internet is a serious break from what we had before,” Kriss argues. “And as nice as Wikipedia is, as nice as it is to be able to walk around foreign cities on Google Maps or read early modern grimoires without a library card, I still think the internet is a poison.”

TikTokers Are Accused of Starting Forest Fires For Views

Humaira Asghar, known as “Dolly” to her 11.5 million TikTok fans, faces charges for allegedly setting a forest fire while shooting a TikTok video in Pakistan’s capital city Islamabad. In the 11-second clip that has since been taken down, Asghar dramatically walks down a forested hill covered in flames in slow motion with a trending pop song that mentions “setting fire” playing in the background. The caption posted with the video shot in the Margalla Hills National Park reads, “fire erupts wherever I am.” Asghar is not the only Pakistani TikToker who has been accused of setting a forest fire for views. Officials say it is an emerging trend in a country that is suffering from a record-breaking heatwave.

“Young people desperate for followers are setting fire to our forests during this hot and dry season,” tweeted Islamabad Wildlife Management Board chairperson Rina S Khan Satti. “These psychotic young people have to be caught and put behind bars immediately.” Earlier this month, a man in Abbottabad city was arrested for intentionally starting a forest fire to use as a backdrop in his video. In another recently released video, two men are seen appearing to start a forest fire then running away from it while music plays in the background.

Neoliberalism Has Poisoned Our Minds, Study Finds

The dominance of neoliberalism is turning societies against income equality.

At least, that’s according to a study published Tuesday in Perspectives on Psychological Science. A team of researchers at New York University and the American University of Beirut performed an analysis of roughly 20 years of data from more than 160 countries and found that the dominance of neoliberalism across social and economic institutions has ingrained a widespread acceptance of income inequality across our value systems in turn.

“Our institutions, policies, and laws not only structure our social life but also have a great influence on the kind of people and society we become,” Shahrzad Goudarzi, a Ph.D. candidate at NYU and lead author on the paper, said in a press release.

Goudarzi and her team set out to test whether conservative British Prime Minister Margaret Thatcher’s 1981 proclamation that economic and political systems can shape “the heart and soul” is indeed true. They defined neoliberalism as the “dominant socioeconomic approach” and the root of “privatization, abolition of the welfare state, and curtailment of redistributive programs,” which has dominated from the 1970s to present day. They measured the strength of a nation’s neoliberalism using the Economic Freedom Index, a metric crafted by the Fraser Institute—a Canadian libertarian think tank—which measures items like “size of government,” “regulation of business, credit, and labor,” and “freedom to trade internationally.”

They evaluated psychological attitudes toward inequality using results from the World Values Survey, taken roughly every four years, which asked respondents globally direct questions about their agreement with statements like, “We need larger income differences as incentives for individual effort,” and “incomes should be made more equal.”

Their analysis found a correlation between the embrace of neoliberalism and the prominence of what social psychology scholars call “equity-based reasoning,” or a preference for merit over a preference for equality: the line of thinking in which material outcomes, like payment, wealth, and social status, should be proportional to inputs, like productivity, effort, ability and time. In short, the dominance of neoliberalism has promoted the belief that the wealthy have earned their spot in society just as much as the poor have.

San Francisco Police Are Using Driverless Cars as Mobile Surveillance Cameras

For the last five years, driverless car companies have been testing their vehicles on public roads. These vehicles constantly roam neighborhoods laden with a variety of sensors, including video cameras that capture everything going on around them, both to operate safely and to let engineers analyze instances where they don’t.

While the companies themselves, such as Alphabet’s Waymo and General Motors’ Cruise, tout the potential transportation benefits their services may one day offer, they don’t publicize another use case, one that is far less hypothetical: mobile surveillance cameras for police departments.

The use of AVs as an investigative tool echoes how Ring, a doorbell and home security company owned by Amazon, became a key partner with law enforcement around the country by turning individual consumer products into a network of cameras with comprehensive coverage of American neighborhoods easily accessible to police. Police departments around the country use automatic license plate readers (ALPRs) to track the movements of vehicles. The EFF has sued the SFPD for accessing business improvement district live cameras to spy on protestors.

Virginia Police Routinely Use Secret GPS Pings To Track People’s Cell Phones

The nonprofit online news site Virginia Mercury investigated its state’s police departments’ “real-time location warrants,” which are “addressed to telephone companies, ordering them to regularly ping a customer’s phone for its GPS location and share the results with police.” Public records requests submitted to a sampling of 18 police departments around the state found officers used the technique to conduct more than 7,000 days’ worth of surveillance in 2020. Court records show the tracking efforts spanned cases ranging from high-profile murders to minor larcenies…. Seven departments responded that they did not have any relevant billing records, indicating they don’t use the technique. Only one of the departments surveyed, Alexandria, indicated it had an internal policy governing how its officers use cellphone tracking, but a copy of the document provided by the city was entirely redacted….

Drug investigations accounted for more than 60 percent of the search warrants taken out in the two jurisdictions. Larcenies were the second most frequent category. Major crimes like murders, rapes and abductions made up a fraction of the tracking requests, accounting for just under 25 of the nearly 400 warrants filed in the jurisdictions that year.

America’s Supreme Court “ruled that warrantless cellphone tracking is unconstitutional back in 2012,” the article points out — but in practice those warrants aren’t hard to get. “Officers simply have to attest in an affidavit that they have probable cause that the tracking data is ‘relevant to a crime that is being committed or has been committed’…. There’s been limited public discussion or awareness of the kinds of tracking warrants the judiciary is approving.”

“I don’t think people know that their cell phones can be converted to tracking devices by police with no notice,” said Steve Benjamin, a criminal defense lawyer in Richmond who said he’s recently noticed an uptick in cases in which officers employed the technique. “And the reality of modern life is everyone has their phone on them during the day and on their nightstand at night. … It’s as if the police tagged them with a chip under their skin, and people have no idea how easily this is accomplished.”
The case for these phone-tracking warrants?

  • The executive director of the Virginia Association of Chiefs of Police tells the site that physical surveillance often requires too many resources — and that cellphone tracking is safer. “It may be considered an intrusive way of gathering data on someone, but it’s certainly less dangerous than physical tracking.”
  • A spokesperson for the Chesterfield County police department [responsible for 64% of the state’s tracking] argued that “We exist to preserve human life and protect the vulnerable, and we will use all lawful tools at our disposal to do so.” And they added that such “continued robust enforcement efforts” were a part of the reason that the county’s still-rising number of fatal drug overdoses had not risen more.

The site also obtained bills from four major US cellphone carriers, and reported how much they were charging police for providing their cellphone-tracking services:

  • “T-Mobile charged $30 per day, which comes to $900 per month of tracking.”
  • “AT&T charged a monthly service fee of $100 and an additional $25 per day the service is utilized, which comes to $850 per 30 days of tracking…”
  • “Verizon calls the service ‘periodic location updates,’ charging $5 per day on top of a monthly service fee of $100, which comes to $250 per 30 days of tracking.”
  • “Sprint offered the cheapest prices to report locations back to law enforcement, charging a flat fee of $100 per month.”
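As a quick sanity check on those reported bills, here is a minimal Python sketch of the arithmetic. The fee structure is a simplification assumed from the article’s descriptions (a flat monthly service fee plus a per-day charge); note that Verizon’s stated fees of $100 per month plus $5 per day work out to $250 over 30 days.

```python
# Reported carrier billing for "real-time location" tracking, modeled as a
# monthly service fee plus a per-day charge (simplified from the article).
CARRIERS = {
    # carrier: (monthly service fee, per-day fee)
    "T-Mobile": (0, 30),
    "AT&T": (100, 25),
    "Verizon": (100, 5),
    "Sprint": (100, 0),
}

def tracking_cost(carrier: str, days: int = 30) -> int:
    """Total charge for `days` of tracking, assuming the monthly fee
    is billed once per started 30-day period."""
    monthly_fee, per_day = CARRIERS[carrier]
    periods = -(-days // 30)  # ceiling division
    return periods * monthly_fee + days * per_day

for name in CARRIERS:
    print(f"{name}: ${tracking_cost(name)} per 30 days of tracking")
```

Under this model, 30 days of tracking comes to $900 on T-Mobile, $850 on AT&T, $250 on Verizon, and $100 on Sprint.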

Why the Past 10 Years of American Life Have Been Uniquely Stupid

Jonathan Haidt, a social psychologist at New York University’s School of Business, argues in The Atlantic that social-media platforms “trained users to spend more time performing and less time connecting.” But that was just the beginning.

He now believes this ultimately fueled a viral dynamic leading to “the continual chipping-away of trust” in a democracy which “depends on widely internalized acceptance of the legitimacy of rules, norms, and institutions.”
The most recent Edelman Trust Barometer (an international measure of citizens’ trust in government, business, media, and nongovernmental organizations) showed stable and competent autocracies (China and the United Arab Emirates) at the top of the list, while contentious democracies such as the United States, the United Kingdom, Spain, and South Korea scored near the bottom (albeit above Russia)….

Mark Zuckerberg may not have wished for any of that. But by rewiring everything in a headlong rush for growth — with a naive conception of human psychology, little understanding of the intricacy of institutions, and no concern for external costs imposed on society — Facebook, Twitter, YouTube, and a few other large platforms unwittingly dissolved the mortar of trust, belief in institutions, and shared stories that had held a large and diverse secular democracy together.

In the last 10 years, the article argues, the general public — at least in America — became “uniquely stupid.” And he’s not just speaking about the political right and left, but within both factions, “as well as within universities, companies, professional associations, museums, and even families.” The article quotes former CIA analyst Martin Gurri’s comment in 2019 that the digital revolution has highly fragmented the public into hostile shards that are “mostly people yelling at each other and living in bubbles of one sort or another.”

The article concludes that by now U.S. politics has entered a phase where truth “cannot achieve widespread adherence” and thus “nothing really means anything anymore — at least not in a way that is durable and on which people widely agree.” It even contemplates the idea of “highly believable” disinformation generated by AI, possibly by geopolitical adversaries, ultimately evolving into what the research manager at the Stanford Internet Observatory has described as “an Information World War in which state actors, terrorists, and ideological extremists leverage the social infrastructure underpinning everyday life to sow discord and erode shared reality.”

The Listeners: A History of Wiretapping in the United States

Brian Hochman (2022)

Wiretapping is nearly as old as electronic communications. Telegraph operators intercepted enemy messages during the Civil War. Law enforcement agencies were listening to private telephone calls as early as 1895. Communications firms have assisted government eavesdropping programs since the early twentieth century — and they have spied on their own customers too. Such breaches of privacy once provoked outrage, but today most Americans have resigned themselves to constant electronic monitoring. How did we get from there to here?

In The Listeners, Brian Hochman shows how the wiretap evolved from a specialised intelligence-gathering tool to a mundane fact of life. He explores the origins of wiretapping in military campaigns and criminal confidence games and tracks the use of telephone taps in the US government’s wars on alcohol, communism, terrorism, and crime. While high-profile eavesdropping scandals fuelled public debates about national security, crime control, and the rights and liberties of individuals, wiretapping became a routine surveillance tactic for private businesses and police agencies alike.

From wayward lovers to foreign spies, from private detectives to public officials, and from the silver screen to the Supreme Court, The Listeners traces the long and surprising history of wiretapping and electronic eavesdropping in the United States. Along the way, Brian Hochman considers how earlier generations of Americans confronted threats to privacy that now seem more urgent than ever.