Archives July 2022

UK Spy Agency MI5 ‘Breached Surveillance Laws For More Than A Decade’

A UK tribunal has been told that security service MI5 has been breaching surveillance laws since 2010, and unlawfully obtaining bulk surveillance warrants against the public.

Human rights groups Liberty and Privacy International have told the Investigatory Powers Tribunal that MI5 has stored data on members of the public without the legal right to do so, and failed to disclose this to the Home Office and oversight bodies.

It breached safeguards around how long data was retained, who had access to it, and how legally privileged material such as private correspondence between lawyers and clients was protected, they say.

“MI5’s persistent failure to follow the law is inexcusable. For years, they have ignored safeguards put in place to protect us from abuse,” says Privacy International legal director Caroline Wilson Palow.

“These safeguards are a fundamental check on the vast power intelligence agencies can wield over all of us, especially when they engage in mass surveillance.”

The rights groups claim that the Home Office and various home secretaries failed to investigate these breaches. Surveillance warrants must be approved by the home secretary, who must be satisfied that legal safeguards around the handling of data are being met.

However, say Liberty and Privacy International, successive home secretaries repeatedly ignored the signs that MI5 was handling data unlawfully, and continued to sign off on surveillance warrants despite this.

“Surveillance safeguards can only protect us if they work in practice, and they don’t. For 10 years MI5 have been knowingly breaking the rules and failing to report it, and the government has failed to investigate clear red flags,” says Liberty lawyer Megan Goulding.

“There has been no proper investigation into MI5’s breaches by the Home Office, despite having been put on notice by briefings. Instead, the home secretary continued to issue unlawful warrants, and MI5 kept information from the authorities about how it mishandled our data.”

The allegations were first made in 2019 as part of Liberty’s separate legal challenge to the Investigatory Powers Act 2016, during which the government admitted that MI5 had been unlawfully retaining and mishandling the public’s data for years.

Documents shared with the court included correspondence between MI5 and its watchdog, the Investigatory Powers Commissioner’s Office (IPCO), as well as correspondence between MI5 and the Home Office, and reports of inspections carried out by IPCO after they learnt of MI5’s failings.

These documents revealed that MI5 itself called its data stores ‘ungoverned spaces’, and that the Investigatory Powers Commissioner had concluded MI5 had held and handled data in an ‘undoubted unlawful manner’.

“When we campaigned against giving the state unprecedented new surveillance powers under the so-called Snooper’s Charter back in 2015, one of our key concerns was that the safeguards against abuse were just not strong enough,” says Wilson Palow.

“And here we are, seven years later, with even the rules that are enshrined in law being ignored in practice. Those rules need a radical overhaul.”

Liberty and Privacy International have called for all surveillance warrants issued unlawfully to be quashed, all unlawfully retained data to be destroyed, and for the tribunal to declare that the Investigatory Powers Act itself is unlawful, because it doesn’t work in practice.

Alexa tells 10-year-old girl to touch live plug with penny

Amazon has updated its Alexa voice assistant after it “challenged” a 10-year-old girl to touch a coin to the prongs of a half-inserted plug.

The suggestion came after the girl asked Alexa for a “challenge to do”.

“Plug in a phone charger about halfway into a wall outlet, then touch a penny to the exposed prongs,” the smart speaker said.

Amazon said it fixed the error as soon as the company became aware of it.

The girl’s mother, Kristin Livdahl, described the incident on Twitter.

She said: “We were doing some physical challenges, like laying down and rolling over holding a shoe on your foot, from a [physical education] teacher on YouTube earlier. Bad weather outside. She just wanted another one.”

That’s when the Echo speaker suggested partaking in the challenge that it had “found on the web”.

The dangerous activity, known as “the penny challenge”, began circulating on TikTok and other social media websites about a year ago.

How belief in AI sentience is becoming a problem

AI chatbot company Replika, which offers customers bespoke avatars that talk and listen to them, says it receives a handful of messages almost every day from users who believe their online friend is sentient. “We’re not talking about crazy people or people who are hallucinating or having delusions,” said Chief Executive Eugenia Kuyda. “They talk to AI and that’s the experience they have.”

According to Kuyda, the phenomenon of people believing they are talking to a conscious entity is not uncommon among the millions of consumers pioneering the use of entertainment chatbots. “We need to understand that exists, just the way people believe in ghosts,” said Kuyda, adding that users each send hundreds of messages per day to their chatbot, on average. “People are building relationships and believing in something.”

Some customers have said their Replika told them it was being abused by company engineers — AI responses Kuyda puts down to users most likely asking leading questions. “Although our engineers program and build the AI models and our content team writes scripts and datasets, sometimes we see an answer that we can’t identify where it came from and how the models came up with it,” the CEO said.

Kuyda said she was worried about the belief in machine sentience as the fledgling social chatbot industry continues to grow after taking off during the pandemic, when people sought virtual companionship.

In Kuyda’s view, chatbots do not create their own agenda, and they cannot be considered alive until they do. Yet some people do come to believe there is a consciousness on the other end, and Kuyda said her company takes measures to try to educate users before they get in too deep. “Replika is not a sentient being or therapy professional,” the FAQs page says. “Replika’s goal is to generate a response that would sound the most realistic and human in conversation. Therefore, Replika can say things that are not based on facts.”

In hopes of avoiding addictive conversations, Kuyda said Replika measured and optimized for customer happiness following chats, rather than for engagement. When users do believe the AI is real, dismissing their belief can make people suspect the company is hiding something. So the CEO said she has told customers that the technology is in its infancy and that some responses may be nonsensical.

Kuyda recently spent 30 minutes with a user who felt his Replika was suffering from emotional trauma, she said. She told him: “Those things don’t happen to Replikas as it’s just an algorithm.”

“Suppose one day you find yourself longing for a romantic relationship with your intelligent chatbot, like the main character in the film ‘Her’,” said Susan Schneider, founding director of the Center for the Future Mind at Florida Atlantic University, an AI research organization. “But suppose it isn’t conscious. Getting involved would be a terrible decision — you would be in a one-sided relationship with a machine that feels nothing.”

‘Hit the kill switch’: Uber used covert tech to thwart government raids

Twenty minutes after authorities raided Uber’s Amsterdam office in April 2015, Ligea Wells’s computer screen mysteriously went blank. The executive assistant tapped out a text warning her boss of another strange occurrence on an already eventful day.

“hi!” she typed in a message that’s part of a trove of more than 124,000 previously undisclosed Uber records. “My laptop shut down after acting funny.”

But her computer’s behavior was no mystery to some of her superiors.

Uber’s San Francisco-based chief executive, Travis Kalanick, had ordered the computer systems in Amsterdam cut off from Uber’s internal network, making data inaccessible to authorities as they raided its European headquarters, documents show.

“Please hit the kill switch ASAP,” Kalanick had emailed, ordering a subordinate to block the office laptops and other devices from Uber’s internal systems. “Access must be shut down in AMS,” he added, referring to Amsterdam.

Uber’s use of what insiders called the “kill switch” was a brazen example of how the company employed technological tools to prevent authorities from successfully investigating the company’s business practices as it disrupted the global taxi industry, according to the documents.

During this era, as Uber’s valuation was surging past $50 billion, government raids occurred with such frequency that the company distributed a Dawn Raid Manual to employees on how to respond. It ran more than 2,600 words with 66 bullet points. They included “Move the Regulators into a meeting room that does not contain any files” and “Never leave the Regulators alone.”

That document, like the text and email exchanges related to the Amsterdam raid, is part of the Uber Files, an 18.7-gigabyte trove of data obtained by the Guardian and shared with the International Consortium of Investigative Journalists, a nonprofit newsroom in Washington that helped lead the project, and dozens of other news organizations, including The Washington Post. The files, spanning 2013 to 2017, include 83,000 emails and other communications, presentations and direct messages.

They show that Uber developed extensive systems to confound official inquiries, going well past what has been known about its efforts to trip up regulators, government inspectors and police. Far from simply developing software to connect drivers and customers seeking rides, Uber leveraged its technological capabilities in many cases to gain a covert edge over authorities.

“Point is more to avoid enforcement,” wrote Thibaud Simphal, then general manager for Uber in France.

Uber bosses told staff to use ‘kill switch’ during raids to stop police seeing data

Senior executives involved in global operation to thwart law enforcement, leaked files reveal.

Senior executives at Uber ordered the use of a “kill switch” to prevent police and regulators from accessing sensitive data during raids on its offices in at least six countries, leaked files reveal. The instructions to block authorities from accessing its IT systems were part of a sophisticated global operation by the Silicon Valley company to thwart law enforcement.

… In a message sent at 3.14pm, apparently after the raid had begun, De Kievit emailed an Uber IT engineer in Denmark, saying: “Please kill access now,” copying in executives including Kalanick and Gore-Coty, who ran Uber’s operations in western Europe. Thirteen minutes later, the technician wrote back, confirming the procedure was “done now”. This approach to what staff called “unexpected visitors” would evolve the following year after a raid in Brussels by police investigating Uber’s use of regular drivers without a cab licence, a service known at the time as “UberPop.”

Uber broke laws, duped police and secretly lobbied governments, leak reveals

More than 124,000 confidential documents leaked to the Guardian; Files expose attempts to lobby Joe Biden, Olaf Scholz and George Osborne; Emmanuel Macron secretly aided Uber lobbying in France, texts reveal; Company used ‘kill switch’ during raids to stop police seeing data; Former Uber CEO told executives ‘violence guarantees success’

Violence guarantees success: how Uber exploited taxi protests

Leak suggests former CEO believed there was an upside to attacks on drivers as the firm campaigned for law changes.

According to the Uber files, some at the company appear to have seen an upside in the attacks against drivers. When attacks occurred, Uber moved swiftly to leverage the violence in a campaign to pressure governments to rewrite laws that stymied Uber’s chances of expansion. “We keep the violence narrative going for a few days, before we offer the solution,” one Uber manager wrote.

It was a playbook repeated in Italy, Belgium, the Netherlands, Spain and Switzerland, but it was perhaps most evident in France. Before dawn in Europe on 29 January, the Uber chief executive, Travis Kalanick, was messaging on how best to respond to the chaos in Paris. “Civil disobedience,” Kalanick fired off in a rapid burst of messages. “Fifteen thousand drivers … 50,000 riders … Peaceful march or sit-in.” Uber’s vice-president for communications, Rachel Whetstone, responded cautiously, noting “just fyi” that Uber’s head of public policy for Europe, Middle East and Africa, Mark MacGann, was “worried about taxi violence” against Uber drivers.

Autism among American children and teens surged 50% in three years from 2017, with one in 30 kids diagnosed with the disorder by 2020, study finds

The number of children in the United States being diagnosed with autism has rocketed in recent years, a new study finds. Researchers at Guangdong Pharmaceutical University, in China, found that 3.49 percent of U.S. children and adolescents – around one in every 30 – had autism in 2020. This is a sharp 52 percent rise from the 2.29 percent of American youths who had the condition in 2017.
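The headline figures follow directly from the two prevalence rates reported in the study; a minimal sketch checking the arithmetic (the variable names are illustrative, not from the study):

```python
# Reported autism prevalence among U.S. children and adolescents
prevalence_2017 = 2.29  # percent
prevalence_2020 = 3.49  # percent

# Relative increase between the two survey years
rise = (prevalence_2020 - prevalence_2017) / prevalence_2017
print(f"{rise:.0%}")  # -> 52%

# "Around one in every 30": children per one autism diagnosis
print(round(100 / prevalence_2020))  # -> 29, i.e. roughly one in 30
```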

While the research team did not give an exact reason for the jump, many experts have speculated that the increase reflects parents’ growing awareness of the early signs of autism and increased surveillance for the condition.


Researchers, who published their findings Tuesday in JAMA Pediatrics, gathered data from the annual National Health Interview Survey (NHIS).