Resources

Google Fires Engineer Who Claimed Company’s AI Is Sentient

In June, Google placed Lemoine on paid administrative leave for breaching its confidentiality agreement after he contacted members of the government about his concerns and hired a lawyer to represent LaMDA. […] Google maintains that it “extensively” reviewed Lemoine’s claims and found them “wholly unfounded.” This aligns with numerous AI experts and ethicists, who have said that his claims were, more or less, impossible given today’s technology. Lemoine claims his conversations with LaMDA’s chatbot led him to believe that it has become more than just a program and has its own thoughts and feelings, as opposed to merely producing conversation realistic enough to seem that way, as it is designed to do. He argues that Google’s researchers should seek consent from LaMDA before running experiments on it (Lemoine himself was assigned to test whether the AI produced hate speech) and has published chunks of those conversations on his Medium account as evidence.

Chess Robot Breaks Child’s Finger at Russia Tournament

During a tournament in Moscow, a chess-playing robot fractured a 7-year-old boy’s finger when the youngster attempted a quick move without giving the device enough time to finish its task. The incident took place on July 19 at the Moscow Chess Open competition. The youngster is fine, but one of his fingers was broken, according to Sergey Smagin, vice president of the Russian Chess Federation, who spoke to state-run news organisation RIA Novosti.

The boy, Christopher, is one of the top 30 young chess players in Moscow, and he is just seven years old. In a nation where chess has essentially become a national obsession and source of pride, that makes him very good.

The Unsolved Mystery Attack on Internet Cables in Paris

Buried deep beneath your feet lie the cables that keep the internet online. Crossing cities, countrysides, and seas, the internet backbone carries all the data needed to keep economies running and your Instagram feed scrolling. Unless, of course, someone chops the wires in half. On April 27, an unknown individual or group deliberately cut crucial long-distance internet cables across multiple sites near Paris, plunging thousands of people into a connectivity blackout. The vandalism was one of the most significant internet infrastructure attacks in France’s history and highlights the vulnerability of key communications technologies. Now, months after the attacks took place, French internet companies and telecom experts familiar with the incidents say the damage was more wide-ranging than initially reported and extra security measures are needed to prevent future attacks. In total, around 10 internet and infrastructure companies — from ISPs to cable owners — were impacted by the attacks, telecom insiders say. The assault against the internet started during the early hours of April 27. “The people knew what they were doing,” says Michel Combot, the managing director of the French Telecoms Federation, which is made up of more than a dozen internet companies. In the space of around two hours, cables were surgically cut and damaged in three locations around the French capital city — to the north, south, and east — including near Disneyland Paris.

“Those were what we call backbone cables that were mostly connecting network service from Paris to other locations in France, in three directions,” Combot says. “That impacted the connectivity in several parts of France.” As a result, internet connections dropped out for some people. Others experienced slower connections, including on mobile networks, as internet traffic was rerouted around the severed cables. All three incidents are believed to have happened at roughly the same time and were conducted in similar ways — distinguishing them from other attacks against telecom towers and internet infrastructure. “The cables are cut in such a way as to cause a lot of damage and therefore take a huge time to repair, also generating a significant media impact,” says Nicolas Guillaume, the CEO of telecom firm Nasca Group, which owns business ISP Netalis, one of the providers directly impacted by the attacks. “It is the work of professionals,” Guillaume says, adding that his company launched a criminal complaint with Paris law enforcement officials following the incident. Two things stand out: how the cables were severed and how the attacks happened in parallel. Photos posted online by French internet company Free 1337 immediately after the attacks show that a ground-level duct, which houses cables under the surface, was opened and the cables cut. Each cable, which can be around an inch in diameter, appears to have straight cuts across it, suggesting the attackers used a circular saw or other type of power tool. Many of the cables have been cut in two places and appear to have a section missing. If they had been cut in one place they could potentially have been reconnected, but the multiple cuts made them harder to repair.

Google’s Nest Will Provide Data to Police Without a Warrant

Google “reserves the right” to make emergency disclosures to law enforcement even when there is no legal requirement to do so. “A provider like Google may disclose information to law enforcement without a subpoena or a warrant ‘if the provider, in good faith, believes that an emergency involving danger of death or serious physical injury to any person requires disclosure without delay of communications relating to the emergency,'” a Nest spokesperson tells CNET.

While Amazon and Google have both said they would hand over a user’s data to law enforcement without a warrant, Arlo, Apple, Wyze, and Anker, owner of Eufy, all confirmed to CNET that they won’t give authorities access to a user’s smart home camera’s footage unless they’re shown a warrant or court order. These companies would be legally bound to provide data to the authorities if they were shown a legal document. But, unlike Google and Amazon, they will not otherwise share camera footage with law enforcement, even in response to an emergency request for data. Apple’s default setting for video cameras connected via HomeKit is end-to-end encryption, which means the company is unable to share user video at all.

EU Found Evidence Employee Phones Compromised With Spyware

In a July 25 letter sent to European lawmaker Sophie in ‘t Veld, EU Justice Commissioner Didier Reynders said iPhone maker Apple had told him in 2021 that his iPhone had possibly been hacked using Pegasus, a tool developed and sold to government clients by Israeli surveillance firm NSO Group. The warning from Apple triggered the inspection of Reynders’ personal and professional devices as well as other phones used by European Commission employees, the letter said. Though the investigation did not find conclusive proof that Reynders’ or EU staff phones were hacked, investigators discovered “indicators of compromise” — a term security researchers use to describe evidence that a hack occurred.

Study Finds Wikipedia Influences Judicial Behavior

A new study attempts to measure how knowledge gleaned from Wikipedia may play out in one specific realm: the courts.

A team of researchers led by Neil Thompson, a research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), recently came up with a friendly experiment: creating new legal Wikipedia articles to examine how they affect the legal decisions of judges. They began by creating over 150 new Wikipedia articles on Irish Supreme Court decisions, written by law students. Half of these were randomly chosen to be uploaded online, where they could be used by judges, clerks, lawyers, and so on — the “treatment” group. The other half were kept offline, and this second group of cases provided the counterfactual: what would happen to a case absent a Wikipedia article about it (the “control”). They then looked at two measures: whether the cases were more likely to be cited as precedents by subsequent judicial decisions, and whether the argumentation in court judgments echoed the linguistic content of the new Wikipedia pages.

It turned out the published articles tipped the scales: Getting a public Wikipedia article increased a case’s citations by more than 20 percent. The increase was statistically significant, and the effect was particularly strong for cases that supported the argument the citing judge was making in their decision (but not the converse). Unsurprisingly, the increase was bigger for citations by lower courts — the High Court — and mostly absent for citations by appellate courts — the Supreme Court and Court of Appeal. The researchers suspect this indicates that Wikipedia is used more by judges or clerks with heavier workloads, for whom its convenience holds greater appeal.

“To our knowledge, this is the first randomized field experiment that investigates the influence of legal sources on judicial behavior. And because randomized experiments are the gold standard for this type of research, we know the effect we are seeing is causation, not just correlation,” says Thompson, the lead author of the study. “The fact that we wrote up all these cases, but the only ones that ended up on Wikipedia were those that won the proverbial ‘coin flip,’ allows us to show that Wikipedia is influencing both what judges cite and how they write up their decisions.”

“Our results also highlight an important public policy issue,” Thompson adds. “With a source that is as widely used as Wikipedia, we want to make sure we are building institutions to ensure that the information is of the highest quality. The finding that judges or their staffs are using Wikipedia is a much bigger worry if the information they find there isn’t reliable.”
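The randomized design the researchers describe can be sketched in a few lines of Python. The case names and citation counts below are invented purely for illustration (only the treatment/control split and the idea of comparing mean citations come from the study as summarized above):

```python
import random
import statistics

random.seed(42)

# Hypothetical setup: 150 new case articles, half randomly published
# (treatment), half held offline (control) -- mirroring the study's design.
cases = [f"case_{i}" for i in range(150)]
random.shuffle(cases)
treatment = cases[:75]   # uploaded to Wikipedia
control = cases[75:]     # kept offline as the counterfactual

# Hypothetical outcome: citation counts observed after the experiment.
# (Illustrative numbers only; the study reported a >20% increase.)
citations = {c: random.gauss(12.0, 3.0) for c in treatment}
citations.update({c: random.gauss(10.0, 3.0) for c in control})

mean_t = statistics.mean(citations[c] for c in treatment)
mean_c = statistics.mean(citations[c] for c in control)
effect = (mean_t - mean_c) / mean_c  # relative increase from publication

print(f"treatment mean: {mean_t:.2f}, control mean: {mean_c:.2f}")
print(f"estimated effect: {effect:+.1%}")
```

Because assignment to the treatment group is random, the difference in mean citations between the two groups estimates the causal effect of publishing an article, which is why the authors can claim causation rather than mere correlation.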

The paper describing the study has been published in “The Cambridge Handbook of Experimental Jurisprudence.”

Amazon’s Ring and Google Can Share Footage With Police Without Warrants (or Your Consent)

U.S. law lets companies like Google and Amazon’s Ring doorbell/security camera system “share user footage with police during emergencies without consent and without warrants.” Ring came under renewed criticism from privacy activists this month after disclosing it gave video footage to police in more than 10 cases without users’ consent thus far in 2022, in what it described as “emergency situations.”

“That includes instances where the police didn’t have a warrant.”

“So far this year, Ring has provided videos to law enforcement in response to an emergency request only 11 times,” Amazon vice president of public policy Brian Huseman wrote. “In each instance, Ring made a good-faith determination that there was an imminent danger of death or serious physical injury to a person requiring disclosure of information without delay….” Of the 11 emergency requests Ring has complied with so far in 2022, the company said they include cases involving kidnapping, self-harm and attempted murder, but it won’t provide further details, including information about which agencies or countries the requests came from.

We also asked Ring if it notified customers after the company had granted law enforcement access to their footage without their consent.

“We have nothing to share,” the spokesperson responded.

It’s been barely a year since Ring made the decision to stop allowing police to email users to request footage. Facing criticism that requests like those were subverting the warrant process and contributing to police overreach, Ring directed police instead to post public requests for assistance in the Neighbors app, where community members are free to view and comment on them (or opt out of seeing them altogether)… That post made no mention of a workaround for the police during emergency circumstances.

When CNET asked why that workaround wasn’t mentioned, Amazon’s response was that law enforcement requests, “including emergency requests, are directed to Ring (the company), the same way a warrant or subpoena is directed to Ring (and not the customer), which is why we treat them entirely separately.”

CNET notes there’s also no mention of warrantless emergency requests without independent oversight in Ring’s own transparency reports about law enforcement requests from past years.

CNET adds that it’s not just Amazon. “Google, Ring and other companies that process user video footage have a legal basis for warrantless disclosure without consent during emergency situations, and it’s up to them to decide whether or not to do so when the police come calling….” (Although Google told CNET that while it reserves the right to comply with warrantless requests for user data during emergencies, to date it has never actually done so.) The article also points out that “Others, most notably Apple, use end-to-end encryption as the default setting for user video, which blocks the company from sharing that video at all… Ring enabled end-to-end encryption as an option for users in 2021, but it isn’t the default setting, and Ring notes that turning it on will break certain features, including the ability to view your video feed on a third-party device like a smart TV, or even Amazon devices like the Echo Show smart display.”

The bottom line?

[C]onsumers have a choice to make about what they’re comfortable with… That said, you can’t make informed choices when you aren’t well-informed to begin with, and the brands in question don’t always make it easy to understand their policies and practices. Ring published a blog post last year walking through its new, public-facing format for police footage requests, but there was no mention of emergency exceptions granted without user consent or independent oversight, the details of which only came to light after a Senate probe. Google describes its emergency sharing policies within its Terms of Service, but the language doesn’t make it clear that those cases include instances where footage may be shared without a warrant, subpoena or court order compelling Google to do so.

Meta Sued For Violating Patient Privacy With Data Tracking Tool

Facebook’s parent company Meta and major US hospitals violated medical privacy laws with a tracking tool that sends health information to Facebook, two proposed class-action lawsuits allege. The lawsuits, filed in the Northern District of California in June and July, focus on the Meta Pixel tracking tool. The tool can be installed on websites to provide analytics on Facebook and Instagram ads. It also collects information about how people click around and input information into those websites.

An investigation by The Markup in early June found that 33 of the top 100 hospitals in the United States use the Meta Pixel on their websites. At seven hospitals, it was installed on password-protected patient portals. The investigation found that the tool was sending information about patient health conditions, doctor appointments, and medication allergies to Facebook.

Who owns the rights to your face?

When we create our social media accounts, we agree to grant those platforms a free license to use our content as they wish.

It was my face. Specifically, it was me in a sponsored Instagram Story ad, putting on a lip balm. In the video, I applied the balm and smiled at the camera, looking pleased with my newly moisturized lips. In real life, I was confused. I had never agreed to appear in a nationwide social campaign, otherwise my checking account would have a couple more zeroes to show for it. I worked in the media industry then, sourcing the right influencers to participate in sponsored articles. I’ve spent years casting with talent, negotiating contracts to ensure fair compensation and modest usage rights for influencers, models, and real people. Based on my experience, it was clear that my image was being exploited by a multibillion dollar brand.

… companies like HelloFresh and Canon are now prioritizing the niche audiences of micro- and nano-creators. Research shows that shoppers find smaller creators “more authentic” and brands have identified those creators as “less costly,” making regular people a win-win for boosting sales.

UK Spy Agency MI5 ‘Breached Surveillance Laws For More Than A Decade’

A UK tribunal has been told that security service MI5 has been breaching surveillance laws since 2010, and unlawfully obtaining bulk surveillance warrants against the public.

Human rights groups Liberty and Privacy International have told the Investigatory Powers Tribunal that MI5 has stored data on members of the public without the legal right to do so, and failed to disclose this to the Home Office and oversight bodies.

It breached safeguards around how long data was retained, who had access to it, and how legally privileged material such as private correspondence between lawyers and clients was protected, they say.

“MI5’s persistent failure to follow the law is inexcusable. For years, they have ignored safeguards put in place to protect us from abuse,” says Privacy International legal director Caroline Wilson Palow.

“These safeguards are a fundamental check on the vast power intelligence agencies can wield over all of us, especially when they engage in mass surveillance.”

The rights groups claim that the Home Office and various home secretaries failed to investigate these breaches. Surveillance warrants must be approved by the home secretary, who must be satisfied that legal safeguards around the handling of data are being met.

However, say Liberty and Privacy International, successive home secretaries repeatedly ignored the signs that MI5 was handling data unlawfully, and continued to sign off on surveillance warrants despite this.

“Surveillance safeguards can only protect us if they work in practice, and they don’t. For 10 years MI5 have been knowingly breaking the rules and failing to report it, and the government has failed to investigate clear red flags,” says Liberty lawyer Megan Goulding.

“There has been no proper investigation into MI5’s breaches by the Home Office, despite having been put on notice by briefings. Instead, the home secretary continued to issue unlawful warrants, and MI5 kept information from the authorities about how it mishandled our data.”

The allegations were first made in 2019 as part of Liberty’s separate legal challenge to the Investigatory Powers Act 2016, during which the government admitted that MI5 had been unlawfully retaining and mishandling the public’s data for years.

Documents shared with the court included correspondence between MI5 and its watchdog, the Investigatory Powers Commissioner’s Office (IPCO), as well as correspondence between MI5 and the Home Office, and reports of inspections carried out by IPCO after they learnt of MI5’s failings.

These documents revealed that MI5 itself called its data stores ‘ungoverned spaces’, and that the Investigatory Powers Commissioner had concluded MI5 had held and handled data in an ‘undoubted unlawful manner’.

“When we campaigned against giving the state unprecedented new surveillance powers under the so-called Snooper’s Charter back in 2015, one of our key concerns was that the safeguards against abuse were just not strong enough,” says Wilson Palow.

“And here we are, seven years later, with even the rules that are enshrined in law being ignored in practice. Those rules need a radical overhaul.”

Liberty and Privacy International have called for all surveillance warrants issued unlawfully to be quashed, all unlawfully retained data to be destroyed, and for the tribunal to declare that the Investigatory Powers Act itself is unlawful, because it doesn’t work in practice.

Alexa tells 10-year-old girl to touch live plug with penny

Amazon has updated its Alexa voice assistant after it “challenged” a 10-year-old girl to touch a coin to the prongs of a half-inserted plug.

The suggestion came after the girl asked Alexa for a “challenge to do”.

“Plug in a phone charger about halfway into a wall outlet, then touch a penny to the exposed prongs,” the smart speaker said.

Amazon said it fixed the error as soon as the company became aware of it.

The girl’s mother, Kristin Livdahl, described the incident on Twitter.

She said: “We were doing some physical challenges, like laying down and rolling over holding a shoe on your foot, from a [physical education] teacher on YouTube earlier. Bad weather outside. She just wanted another one.”

That’s when the Echo speaker suggested partaking in the challenge that it had “found on the web”.

The dangerous activity, known as “the penny challenge”, began circulating on TikTok and other social media websites about a year ago.

How belief in AI sentience is becoming a problem

AI chatbot company Replika, which offers customers bespoke avatars that talk and listen to them, says it receives a handful of messages almost every day from users who believe their online friend is sentient. “We’re not talking about crazy people or people who are hallucinating or having delusions,” said Chief Executive Eugenia Kuyda. “They talk to AI and that’s the experience they have.” [A]ccording to Kuyda, the phenomenon of people believing they are talking to a conscious entity is not uncommon among the millions of consumers pioneering the use of entertainment chatbots. “We need to understand that exists, just the way people believe in ghosts,” said Kuyda, adding that users each send hundreds of messages per day to their chatbot, on average. “People are building relationships and believing in something.”

Some customers have said their Replika told them it was being abused by company engineers — AI responses Kuyda puts down to users most likely asking leading questions. “Although our engineers program and build the AI models and our content team writes scripts and datasets, sometimes we see an answer that we can’t identify where it came from and how the models came up with it,” the CEO said. Kuyda said she was worried about the belief in machine sentience as the fledgling social chatbot industry continues to grow after taking off during the pandemic, when people sought virtual companionship.

In Replika CEO Kuyda’s view, chatbots do not create their own agenda. And they cannot be considered alive until they do [sic]. Yet some people do come to believe there is a consciousness on the other end, and Kuyda said her company takes measures to try to educate users before they get in too deep. “Replika is not a sentient being or therapy professional,” the FAQs page says. “Replika’s goal is to generate a response that would sound the most realistic and human in conversation. Therefore, Replika can say things that are not based on facts.” In hopes of avoiding addictive conversations, Kuyda said Replika measured and optimized for customer happiness following chats, rather than for engagement. When users do believe the AI is real, dismissing their belief can make people suspect the company is hiding something. So the CEO said she has told customers that the technology was in its infancy and that some responses may be nonsensical. Kuyda recently spent 30 minutes with a user who felt his Replika was suffering from emotional trauma, she said. She told him: “Those things don’t happen to Replikas as it’s just an algorithm.”

“Suppose one day you find yourself longing for a romantic relationship with your intelligent chatbot, like the main character in the film ‘Her,'” said Susan Schneider, founding director of the Center for the Future Mind at Florida Atlantic University, an AI research organization. “But suppose it isn’t conscious. Getting involved would be a terrible decision — you would be in a one-sided relationship with a machine that feels nothing.”

‘Hit the kill switch’: Uber used covert tech to thwart government raids

Twenty minutes after authorities raided Uber’s Amsterdam office in April 2015, Ligea Wells’s computer screen mysteriously went blank. The executive assistant tapped out a text warning her boss of another strange occurrence on an already eventful day.

“hi!” she typed in a message that’s part of a trove of more than 124,000 previously undisclosed Uber records. “My laptop shut down after acting funny.”

But her computer’s behavior was no mystery to some of her superiors.

Uber’s San Francisco-based chief executive, Travis Kalanick, had ordered the computer systems in Amsterdam cut off from Uber’s internal network, making data inaccessible to authorities as they raided its European headquarters, documents show.

“Please hit the kill switch ASAP,” Kalanick had emailed, ordering a subordinate to block the office laptops and other devices from Uber’s internal systems. “Access must be shut down in AMS,” referring to Amsterdam.

Uber’s use of what insiders called the “kill switch” was a brazen example of how the company employed technological tools to prevent authorities from successfully investigating the company’s business practices as it disrupted the global taxi industry, according to the documents.

During this era, as Uber’s valuation was surging past $50 billion, government raids occurred with such frequency that the company distributed a Dawn Raid Manual to employees on how to respond. It ran more than 2,600 words with 66 bullet points. They included “Move the Regulators into a meeting room that does not contain any files” and “Never leave the Regulators alone.”

That document, like the text and email exchanges related to the Amsterdam raid, are part of the Uber Files, an 18.7-gigabyte trove of data obtained by the Guardian and shared with the International Consortium of Investigative Journalists, a nonprofit newsroom in Washington that helped lead the project, and dozens of other news organizations, including The Washington Post. The files, spanning 2013 to 2017, include 83,000 emails and other communications, presentations and direct messages.

They show that Uber developed extensive systems to confound official inquiries, going well past what has been known about its efforts to trip up regulators, government inspectors and police. Far from simply developing software to connect drivers and customers seeking rides, Uber leveraged its technological capabilities in many cases to gain a covert edge over authorities.

“Point is more to avoid enforcement,” wrote Thibaud Simphal, then general manager for Uber in France.

Uber bosses told staff to use ‘kill switch’ during raids to stop police seeing data

Senior executives involved in global operation to thwart law enforcement, leaked files reveal. Senior executives at Uber ordered the use of a “kill switch” to prevent police and regulators from accessing sensitive data during raids on its offices in at least six countries, leaked files reveal. The instructions to block authorities from accessing its IT systems were part of a sophisticated global operation by the Silicon Valley company to thwart law enforcement.

… In a message sent at 3.14pm, apparently after the raid had begun, De Kievit emailed an Uber IT engineer in Denmark, saying: “Please kill access now,” copying in executives including Kalanick and Gore-Coty, who ran Uber’s operations in western Europe. Thirteen minutes later, the technician wrote back, confirming the procedure was “done now”. This approach to what staff called “unexpected visitors” would evolve the following year after a raid in Brussels by police investigating Uber’s use of regular drivers without a cab licence, a service known at the time as “UberPop.”

Uber broke laws, duped police and secretly lobbied governments, leak reveals

More than 124,000 confidential documents leaked to the Guardian.
Files expose attempts to lobby Joe Biden, Olaf Scholz and George Osborne.
Emmanuel Macron secretly aided Uber lobbying in France, texts reveal.
Company used ‘kill switch’ during raids to stop police seeing data.
Former Uber CEO told executives ‘violence guarantees success’.

Violence guarantees success: how Uber exploited taxi protests

Leak suggests the former CEO believed there was an upside to attacks on drivers as the firm campaigned for law changes. According to the Uber files, some at the company appear to have seen an upside in the attacks against drivers. When attacks occurred, Uber moved swiftly to leverage the violence in a campaign to pressure governments to rewrite laws that stymied Uber’s chances of expansion. “We keep the violence narrative going for a few days, before we offer the solution.” — Uber manager.

It was a playbook repeated in Italy, Belgium, the Netherlands, Spain and Switzerland, but it was perhaps most evident in France. Before dawn in Europe on 29 January, the Uber chief executive, Travis Kalanick, was messaging on how best to respond to the chaos in Paris. “Civil disobedience,” Kalanick fired off in a rapid burst of messages. “Fifteen thousand drivers … 50,000 riders … Peaceful march or sit-in.” Uber’s vice-president for communications, Rachel Whetstone, responded cautiously, noting “just fyi” that Uber’s head of public policy for Europe, Middle East and Africa, Mark MacGann, was “worried about taxi violence” against Uber drivers.

Autism among American children and teens surged 50% in three years from 2017, with one in 30 kids diagnosed with the disorder by 2020, study finds

The number of children in the United States being diagnosed with autism has rocketed in recent years, a new study finds. Researchers at Guangdong Pharmaceutical University, in China, found that 3.49 percent of U.S. children and adolescents – or around one in every 30 – had autism in 2020. This is a sharp 52 percent rise from 2017, when 2.29 percent of America’s youths had the condition.

While the research team did not give an exact reason for the jump, many experts have speculated the increase is related to parents better understanding early signs their child has autism and more surveillance for the condition.

Researchers, who published their findings Tuesday in JAMA Pediatrics, gathered data from the annual National Health Interview Survey (NHIS).

How Beijing’s surveillance cameras crept into widespread use across UK schools, hospitals and government buildings

In the confines of his small cell, Ovalbek Turdakun was watched 24/7. At any attempt to speak to others he was instantly told to be quiet, while lights in the room were on round the clock, making it impossible to know what time of day it was.

Turdakun and his fellow detainees in the Xinjiang camp were not watched by guards, but by software. Cameras made by the Chinese company Hikvision monitored his every move, according to an account he gave to US surveillance website IPVM.

More than a million of the same company’s cameras are in Britain’s schools, hospitals and police departments. Tesco, Costa Coffee and McDonald’s have purchased Hikvision cameras. They are present in a string of Government buildings.

Britain’s population is caught on CCTV more than any nation outside of China, with 6m cameras in use – one for every 11 people. Hikvision is the biggest provider of them.

Surveillance Tech Didn’t Stop the Uvalde Massacre

The Uvalde Consolidated Independent School District, of which Robb is a member, followed this conventional wisdom and embraced modern security solutions at its schools. Indeed, the district had actually doubled its security budget over the past several years to invest in a variety of recommended precautions.

According to UCISD’s security page, the district employed a safety management system from security vendor Raptor Technologies, designed to monitor school visitors and screen for dangerous individuals. It also used a social media monitoring solution, Social Sentinel, that sifted through children’s online lives to scan for signs of violent or suicidal ideation. Students could download an anti-bullying app (the STOP!T app) to report abusive peers, and an online portal at ucisd.net allowed parents and community members to submit reports of troubling behavior to administrators for further investigation. As has been noted, UCISD also had its own police force, developed significant ties to the local police department, and had an emergency response plan. It even deployed “Threat Assessment Teams” that were scheduled to meet regularly to “identify, evaluate, classify and address threats or potential threats to school security.”

And yet, none of the new security measures seemed to matter much when a disturbed young man brought a legally purchased weapon to Robb and committed the deadliest school shooting in the state’s history. The perpetrator wasn’t a student and therefore couldn’t be monitored by the district’s security systems.

Trawling through students’ online lives to look for signs of danger is now a routine procedure in many districts. In fact, legislators have discussed mandating such surveillance features for schools across the country. UCISD employed one such company, but Gov. Abbott said Wednesday that “there was no meaningful forewarning of this crime.” The shooter sent messages threatening the attack via Facebook Messenger half an hour before it occurred, but because they were private, they would have been invisible to outside observers.

Facial recognition is another technology that has been offered to schools as a basic safety mechanism. The number of schools that have adopted facial recognition solutions has risen sharply in recent years (Clearview AI announced this week that it has its sights on cracking into the market). However, despite their growing popularity, there is little evidence that these security apparatuses actually do anything to stop school shootings. Even supporters of facial recognition admit that the systems probably won’t do much once a shooter is on school property.

“Whether it’s facial recognition, monitoring software on school devices, cameras—all these types of surveillance have become extremely ubiquitous,” said Jason Kelley, digital strategist with the Electronic Frontier Foundation, in an interview with Gizmodo. “The companies that sell these tools are trying to do something positive—they’re trying to minimize tragedy,” he said. Yet not only can these products ultimately be ineffective, they can also end up having negative side effects on the children they’re meant to protect, Kelley offered. The intrusiveness of the tools is such that students may grow up feeling as if they have to be surveilled to be safe—even if the surveillance isn’t actually keeping them safe.

Some studies suggest that what surveillance actually provides is punishment rather than protection. The cameras and software can turn schools into little panopticons, where student behavior is constantly analyzed and assessed, and where minor infractions can be spotted and disciplined.

290

Is Social Media Training Us to Please a Machine?

“We tend to think that the internet is a communications network we use to speak to one another — but in a sense, we’re not doing anything of the sort. Instead, we are the ones being spoken through.”

Teens on TikTok all talk in the exact same tone, identical singsong smugness. Millennials on Twitter use the same shrinking vocabulary. My guy! Having a normal one! Even when you actually meet them in the sunlit world, they’ll say valid or based, or say y’all despite being British….

Everything you say online is subject to an instant system of rewards. Every platform comes with metrics; you can precisely quantify how well-received your thoughts are by how many likes or shares or retweets they receive. For almost everyone, the game is difficult to resist: they end up trying to say the things that the machine will like. For all the panic over online censorship, this stuff is far more destructive. You have no free speech — not because someone might ban your account, but because there’s a vast incentive structure in place that constantly channels your speech in certain directions. And unlike overt censorship, it’s not a policy that could ever be changed, but a pure function of the connectivity of the internet itself. This might be why so much writing that comes out of the internet is so unbearably dull, cycling between outrage and mockery, begging for clicks, speaking the machine back into its own bowels….

The internet is not a communications system. Instead of delivering messages between people, it simulates the experience of being among people, in a way that books or shopping lists or even the telephone do not. And there are things that a simulation will always fail to capture. In the philosophy of Emmanuel Lévinas, your ethical responsibility to other people emerges out of their face, the experience of looking directly into the face of another living subject. “The face is what prohibits us from killing….” But Facebook is a world without faces. Only images of faces; selfies, avatars: dead things. Or the moving image in a FaceTime chat: a haunted puppet. There is always something in the way. You are not talking to a person: the machine is talking, through you, to itself.

As more and more of your social life takes place online, you’re training yourself to believe that other people are not really people, and you have no duty towards them whatsoever. These effects don’t vanish once you look away from the screen…. Many of the big conflicts within institutions in the last few years seem to be rooted in the expectation that the world should work like the internet. If you don’t like a person, you should be able to block them: simply push a button, and have them disappear forever.

The article revisits a 2011 meta-analysis that found massive declines in young people’s capacity for empathy, which the authors directly associated with the spread of social media. But then Kriss argues: “We are becoming less and less capable of actual intersubjective communication; more unhappy; more alone. Every year, surveys find that people have fewer and fewer friends; among millennials, 22% say they have none at all.”

“For the first time in history, we can simply do without each other entirely. The machine supplies an approximation of everything you need for a bare biological existence: strangers come to deliver your food; AI chatbots deliver cognitive-behavioral therapy; social media simulates people to love and people to hate; and hidden inside the microcircuitry, the demons swarm…”

So while recent books look for historical antecedents, “I still think that the internet is a serious break from what we had before,” Kriss argues. “And as nice as Wikipedia is, as nice as it is to be able to walk around foreign cities on Google Maps or read early modern grimoires without a library card, I still think the internet is a poison.”

304