Archives October 2020

Police Are Using Facial Recognition For Minor Crimes, ‘Because They Can’

In a recent court filing, the New York police department noted that it’s turned to facial recognition in more than 22,000 cases in the last three years. “Even though the NYPD claims facial recognition is only used for serious crimes, the numbers tell a different story,” said Albert Fox Cahn, the executive director of the Surveillance Technology Oversight Project. “As facial recognition continues to grow, it’s being routinely deployed for everything from shoplifting to graffiti.”

Asked for comment, an NYPD spokeswoman pointed to a 2019 opinion article by police commissioner James O’Neill titled “How Facial Recognition Makes You Safer.” In the piece, O’Neill talked about how facial recognition had been used to make arrests in murder, robbery and rape cases, but he didn’t disclose how often it was used for low-level crimes. The department’s facial recognition policy, established in March, allows the technology to be used for any crime, no matter the severity. Without any limits, police have used the technology more frequently for petty thefts than for dangerous crimes, privacy advocates say. Before Amazon put a moratorium on police use of its Rekognition face-identifying software, the program was used in a $12 shoplifting case in Oregon in 2018…

Without any limits, police can use facial recognition however they please, and in many cases, arrested suspects don’t even know that the flawed technology was used… Attorneys representing protesters in Miami didn’t know that police used facial recognition in their arrests, according to an NBC Miami report. Police used facial recognition software in a $50 drug dealing case in Florida in 2016 but made no mention of it in the arrest report.

The article also notes that as recently as this Tuesday, Hoan Ton-That, the CEO of facial recognition startup Clearview AI, “said it isn’t the company’s responsibility to make sure its technology is being properly used by its thousands of police partners.

“Though the company has its own guidelines, Ton-That said Clearview AI wouldn’t be enforcing them, saying that ‘it’s not our job to set the policy as a tech company…'”

Surveillance Startup Used Own Cameras To Harass Coworkers

Verkada, a fast-growing Silicon Valley surveillance startup, equips its offices in downtown San Mateo, California, with its own state-of-the-art security cameras. Last year, a director on the company’s sales team abused their access to these cameras to take and post photos of colleagues in a Slack channel called #RawVerkadawgz, where they made sexually explicit jokes about women who worked at the company, according to a report in IPVM, which Motherboard independently verified and obtained more information about. “Face match… find me a squirt,” the sales director wrote in the company Slack channel in August 2019, according to one screenshot obtained by Motherboard. The comment was posted along with a series of photos of employees’ faces captured with the office’s surveillance system, which were patched together using a Verkada facial recognition feature. “Face search,” as it’s called, can pinpoint an individual in a sea of faces. The pinpointed face, in this instance, belonged to a Verkada employee, her mouth wide open. In addition to verifying the incident with three sources who worked at Verkada at the time, Motherboard compared the format of the images posted to those included in Verkada’s publicly available demo videos to confirm that they were indeed captured by the company’s surveillance cameras.

deletezoom.com

A guide to deleting Zoom and replacing it with privacy and security alternatives such as Signal or Jitsi.

deletegoogle.com

A guide to deleting your Google account.

The Age of Surveillance Capitalism

Shoshana Zuboff (2019)

The challenges to humanity posed by the digital future, the first detailed examination of the unprecedented form of power called “surveillance capitalism,” and the quest by powerful corporations to predict and control our behaviour. Shoshana Zuboff provides startling insights into the phenomenon that she has named surveillance capitalism. The stakes could not be higher: a global architecture of behavior modification threatens human nature in the twenty-first century just as industrial capitalism disfigured the natural world in the twentieth. Zuboff vividly brings to life the consequences as surveillance capitalism advances from Silicon Valley into every economic sector. Vast wealth and power are accumulated in ominous new “behavioral futures markets,” where predictions about our behavior are bought and sold, and the production of goods and services is subordinated to a new “means of behavioral modification.”

The threat has shifted from a totalitarian Big Brother state to a ubiquitous digital architecture: a “Big Other” operating in the interests of surveillance capital. Here is the crucible of an unprecedented form of power marked by extreme concentrations of knowledge and free from democratic oversight. Zuboff’s comprehensive and moving analysis lays bare the threats to twenty-first century society: a controlled “hive” of total connection that seduces with promises of total certainty for maximum profit — at the expense of democracy, freedom, and our human future.

With little resistance from law or society, surveillance capitalism is on the verge of dominating the social order and shaping the digital future–if we let it.

deletefacebook.com

A comprehensive resource to help you delete Facebook.

With Israel’s Encouragement, NSO Sold Spyware to UAE and Other Gulf States

The Israeli spyware firm has signed contracts with Bahrain, Oman and Saudi Arabia. Despite its claims, NSO exercises little control over use of its software, which dictatorships can use to monitor dissidents.

With the encouragement and official mediation of the Israeli government, the Israeli firm NSO Group Technologies, whose software is used to hack into cellphones, has in the past few years sold its Pegasus spyware for hundreds of millions of dollars to the United Arab Emirates and other Persian Gulf states, where it has been used to monitor anti-regime activists.

NSO is one of the most active Israeli companies in the Gulf, and its Pegasus 3 software permits law enforcement authorities to hack into cellphones, copy their contents and sometimes even control their camera and audio recording capabilities. The company’s vulnerability researchers work to identify security threats and can hack into mobile devices independently, without the aid of an unsuspecting user who, for example, clicks on a link.

The Left and the Right Speak Different Languages—Literally

A study analyzing patterns in online comments found that liberals and conservatives use different words to express similar ideas.

Researchers at Carnegie Mellon University collected more than 86.6 million comments from more than 6.5 million users on 200,000 YouTube videos, then analyzed them using an AI technique normally employed to translate between two languages.

The researchers found that people on opposing sides of the political divide often use different words to express similar ideas. For instance, the term “mask” among liberal commenters is roughly equivalent to the term “muzzle” for conservatives. Similar pairings were seen for “liberals” and “libtards” as well as “solar” and “fossil.”

“We are practically speaking different languages—that’s a worrisome thing,” says KhudaBukhsh, one of the Carnegie Mellon researchers behind the study. “If ‘mask’ translates to ‘muzzle,’ you immediately know that there is a huge debate surrounding masks and freedom of speech.”

In the case of politically tinged comments, the researchers found that different words occupy a similar place in the lexicon of each community. The paper, which has been posted online but is not yet peer reviewed, looked at comments posted beneath the videos on four channels spanning left- and right-leaning US news—MSNBC, CNN, Fox News, and OANN.
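One way such word-level “translations” can be surfaced is sketched below, purely for illustration; this is not the CMU team’s code or their exact method. It treats each community’s comments as a separate language, trains word embeddings on each corpus, aligns the two embedding spaces with an orthogonal Procrustes rotation learned from shared anchor words, and reads off nearest neighbours as translation pairs. The gensim and scipy calls are standard; the corpora are tiny placeholders standing in for millions of real comments.

import numpy as np
from gensim.models import Word2Vec
from scipy.linalg import orthogonal_procrustes

# Tiny placeholder corpora; the real study analyzed tens of millions of comments.
left_corpus = [["wear", "a", "mask", "to", "protect", "others"],
               ["solar", "power", "is", "the", "future"]] * 200
right_corpus = [["the", "muzzle", "mandate", "limits", "freedom"],
                ["fossil", "fuel", "is", "keeping", "the", "lights", "on"]] * 200

left = Word2Vec(left_corpus, vector_size=50, min_count=1, seed=1)
right = Word2Vec(right_corpus, vector_size=50, min_count=1, seed=1)

# Learn a rotation aligning the two spaces, using words both communities share.
anchors = sorted(set(left.wv.index_to_key) & set(right.wv.index_to_key))
A = np.stack([left.wv[w] for w in anchors])
B = np.stack([right.wv[w] for w in anchors])
R, _ = orthogonal_procrustes(A, B)  # maps left-space vectors into right-space

def translate(word, topn=3):
    """Right-community words whose vectors sit closest to a left-community word."""
    v = left.wv[word] @ R
    sims = right.wv.vectors @ v / (
        np.linalg.norm(right.wv.vectors, axis=1) * np.linalg.norm(v) + 1e-9)
    return [right.wv.index_to_key[i] for i in np.argsort(-sims)[:topn]]

print(translate("mask"))  # on real data, the nearest neighbour might be "muzzle"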

KhudaBukhsh says social networks might use techniques like the one his team developed to build bridges between warring communities. A network could surface comments that avoid contentious or “foreign” terms, instead showing ones that represent common ground, he suggests. “Go to any social media platform; it has become so toxic, and it’s almost like there is no known interaction” between users with different political viewpoints, he says.

But Morteza Dehghani, an associate professor at the University of Southern California who studies social media using computational methods, finds the approach problematic. He notes that the Carnegie Mellon paper considers “BLM” (Black Lives Matter) and “ALM” (all lives matter) a “translatable” pair, akin to “mask” and “muzzle.”

“BLM and ALM are not translations of each other,” he says. “One makes salient centuries of slavery, abuse, racism, discrimination, and fights for justice, while the other one tries to erase this history.”

Dehghani says it would be a mistake to use computational methods that oversimplify issues and lack nuance. “What we need is not machine translation,” he says. “What we need is perspective-taking and explanation—two things that AI algorithms are notoriously bad at.”

When is targeted surveillance wrong?

For many of us, that unsettling feeling of being watched is all too real. After all, we live in a world of mass surveillance: from facial recognition to online tracking, governments and tech companies are harvesting intimate information about billions of people. Targeted surveillance is slightly different. It’s the use of technology to spy on specific people.

You may think this is fine, because aren’t people only targeted when they’ve done something wrong? Think again.

From Mexico to the Middle East, governments are wielding a range of sophisticated cyber-tools to unlawfully spy on their critics. A seemingly innocuous missed call, a personalized text message, or a split-second, unnoticed redirect to a malicious website is all it takes, and the spyware is installed without you ever being aware of it.

The people targeted are often journalists, bloggers and activists (including Amnesty’s own staff) voicing inconvenient truths. They may be exposing corrupt deals, demanding electoral reform, or promoting the right to privacy. Their defence of human rights puts them at odds with their governments. Rather than listen, governments prefer to shut them down. And when governments attack the people who are defending our rights, then we’re all at risk.

The authorities use clever cyber-attacks to access users’ phones and computers. Once in, they can find out who their contacts are, their passwords, their social media habits, their texts. They can record conversations. They can find out everything about that person, tap into their network, find out about their work, and destroy it. Since 2017, Amnesty’s own research has uncovered attacks like these in Egypt, India, Morocco, Pakistan, Saudi Arabia, UAE, Qatar and Uzbekistan.

Remember, the users we’re talking about are human rights activists, among them journalists, bloggers, poets, teachers and so many others who bravely take a stand for justice, equality and freedom. They take these risks so we don’t have to. But voicing concerns about government conduct and policy makes them unpopular with the authorities. So much so that governments resort to dirty tricks, smearing activists and re-branding them as criminals and terrorists.

Some of the most insidious attacks on human rights defenders have been waged using spyware manufactured by NSO Group. A major player in the shadowy surveillance industry, they specialise in cyber-surveillance tools.

NSO is responsible for Pegasus malware, a powerful programme that can turn on your phone’s microphone and camera without your knowledge. It can also access your emails and texts, track your keystrokes and collect data about you. The worst thing is you don’t have to do anything to trigger it – Pegasus can be installed without you ever knowing.

NSO say they’re creating technology that helps governments fight terrorism and crime. But as early as 2018, when one of our own staff was targeted through WhatsApp, our Security Lab discovered a network of more than 600 suspicious websites owned by NSO that could be used to spy on journalists and activists around the world. We were not wrong. In 2019, thousands of people received scam WhatsApp calls, leading WhatsApp to later sue NSO. More recently we documented the cases of Moroccan activists who had been similarly targeted.

A Deepfake Porn Bot Is Being Used to Abuse Thousands of Women

An AI tool that “removes” items of clothing from photos has targeted more than 100,000 women, some of whom appear to be under the age of 18.

The still images of nude women are generated by an AI that “removes” items of clothing from a non-nude photo. Every day the bot sends out a gallery of new images to an associated Telegram channel which has almost 25,000 subscribers. The sets of images are frequently viewed more than 3,000 times. A separate Telegram channel that promotes the bot has more than 50,000 subscribers.

Some of the images produced by the bot are glitchy, but many could pass for genuine. “It is maybe the first time that we are seeing these at a massive scale,” says Giorgio Patrini, CEO and chief scientist at deepfake detection company Sensity, which conducted the research. The company is publicizing its findings in a bid to pressure services hosting the content to remove it, but it is not publicly naming the Telegram channels involved.

The actual number of women targeted by the deepfake bot is likely much higher than 104,000. Sensity was only able to count images shared publicly, and the bot gives people the option to generate photos privately. “Most of the interest for the attack is on private individuals,” Patrini says. “The very large majority of those are for people that we cannot even recognize.”

As a result, it is likely very few of the women who have been targeted know that the images exist. The bot and a number of Telegram channels linked to it are primarily Russian-language but also offer English-language translations. In a number of cases, the images created appear to contain girls who are under the age of 18, Sensity adds, saying it has no way to verify this but has informed law enforcement of their existence.

Unlike other nonconsensual explicit deepfake videos, which have racked up millions of views on porn websites, these images require no technical knowledge to create. The process is automated and can be used by anyone—it’s as simple as uploading an image to any messaging service.

Police complain about surveillance “going dark,” but they are breaking into encrypted phones far more often than previously known

In a new Apple ad, a man on a city bus announces he has just shopped for divorce lawyers. Then a woman recites her credit card number through a megaphone in a park. “Some things shouldn’t be shared,” the ad says. “iPhone helps keep it that way.” Apple has built complex encryption into iPhones and made the devices’ security central to its marketing pitch. That, in turn, has angered law enforcement. Officials from the F.B.I. director to rural sheriffs have argued that encrypted phones stifle their work to catch and convict dangerous criminals. They have tried to force Apple and Google to unlock suspects’ phones, but the companies say they can’t. In response, the authorities have put their own marketing spin on the problem. Law enforcement, they say, is “going dark.” Yet new data reveals a twist to the encryption debate that undercuts both sides: Law enforcement officials across the nation regularly break into encrypted smartphones.

That is because at least 2,000 law enforcement agencies in all 50 states now have tools to get into locked, encrypted phones and extract their data, according to years of public records collected in a report by Upturn, a Washington nonprofit that investigates how the police use technology. At least 49 of the 50 largest U.S. police departments have the tools, according to the records, as do the police and sheriffs in small towns and counties across the country, including Buckeye, Ariz.; Shaker Heights, Ohio; and Walla Walla, Wash. And local law enforcement agencies that don’t have such tools can often send a locked phone to a state or federal crime lab that does. With more tools in their arsenal, the authorities have used them in an increasing range of cases, from homicides and rapes to drugs and shoplifting, according to the records, which were reviewed by The New York Times. Upturn researchers said the records suggested that U.S. authorities had searched hundreds of thousands of phones over the past five years. While the existence of such tools has been known for some time, the records show that the authorities break into phones far more than previously understood — and that smartphones, with their vast troves of personal data, are not as impenetrable as Apple and Google have advertised. While many in law enforcement have argued that smartphones are often a roadblock to investigations, the findings indicate that they are instead one of the most important tools for prosecutions.

Five Eyes Governments, India, and Japan Make New Call For Encryption Backdoors

Members of the intelligence-sharing alliance Five Eyes, along with government representatives for Japan and India, have published a statement over the weekend calling on tech companies to come up with a solution for law enforcement to access end-to-end encrypted communications. From a report:
The statement is the alliance’s latest effort to get tech companies to agree to encryption backdoors. The Five Eyes alliance, comprising the US, the UK, Canada, Australia, and New Zealand, made similar calls to tech giants in 2018 and 2019. Just like before, government officials claim tech companies have put themselves in a corner by incorporating end-to-end encryption (E2EE) into their products. If properly implemented, E2EE lets users have secure conversations — whether chat, audio, or video — without sharing the encryption key with the tech companies. Representatives from the seven governments argue that the way E2EE is currently supported on today’s major tech platforms prevents not only law enforcement from investigating crime rings but also the tech platforms themselves from enforcing their own terms of service. Signatories argue that “particular implementations of encryption technology” are currently posing challenges to law enforcement investigations, as the tech platforms themselves can’t access some communications and provide needed data to investigators.
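For readers unfamiliar with the term, the property at stake can be shown in a few lines of code. The sketch below is purely illustrative and does not depict any particular platform’s protocol; it assumes the PyNaCl library, whose public-key “box” construction gives each user a secret key that never leaves their device, so a server relaying the message (or a company served with a warrant) only ever sees ciphertext.

from nacl.public import PrivateKey, Box

alice_key = PrivateKey.generate()   # secret key, generated and kept on Alice's device
bob_key = PrivateKey.generate()     # secret key, generated and kept on Bob's device

# Alice encrypts to Bob's public key; only this ciphertext ever reaches the server.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

# Bob, holding his own secret key, can decrypt it; the platform in the middle cannot.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
print(plaintext)  # b'meet at noon'

This is the property the seven governments are asking companies to weaken: without one of the endpoint keys, there is nothing for the platform to hand over.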

Citigroup Tech Executive Unmasked as Major QAnon ‘High Priest’

QAnon’s biggest news hub was run by a senior vice president at Citigroup, the American multinational investment bank and financial services company. Jason Gelinas worked in the company’s technology department, where he led an AI project and oversaw a team of software developers, according to Bloomberg. He was married with kids and had a comfortable house in a New Jersey suburb. According to those who know him, Gelinas was a pleasant guy who was into normal stuff: Game of Thrones, recreational soccer, and so on. Things did get weird, though, when politics came up…

The movement had been contained mostly to the internet’s trollish fringes until around the time Gelinas came along. In 2018, while doing his job at Citi, he created, as an anonymous side project, a website dedicated to bringing QAnon to a wider audience — soccer moms, white-collar workers, and other “normies,” as he boasted. By mid-2020, the site was drawing 10 million visitors each month, according to the traffic-tracking firm SimilarWeb, and was credited by researchers with playing a key role in what might be the most unlikely political story in a year full of unlikely political stories: A Citigroup executive helped turn an obscure and incoherent cult into an incoherent cult with mainstream political implications…

The need to spread the word beyond core users led to the creation of aggregator sites, which would scrape the Q drops and repost them in friendlier environs after determining authenticity. (The ability to post as Q has repeatedly been compromised, and some posts have had to be culled from the canon.) This task, Gelinas once told a friend, could be his calling from God…. His intention, as he later explained on Patreon, the crowdfunding website widely used by musicians, podcasters, and other artists, was to make memes, which are harder to police than tweets or Facebook text posts. “Memes are awesome,” Gelinas wrote. “They also bypass big tech censorship.” (Social media companies are, at least in theory, opposed to disinformation, and QAnon posts sometimes get removed. On Oct. 6, Facebook banned QAnon-affiliated groups and pages from the service….) The site wasn’t just a repository of QAnon posts; Gelinas served as an active co-author in the movement’s growing mythology… Gelinas claimed he was the No. 2 figure in the movement, behind only Q, according to a friend, and began to dream about turning his QAnon hobby into his main gig…

By now, his site’s growth had attracted an enemy. Frederick Brennan, a 26-year-old polymath with a rare bone disease, had decided to unmask him. Brennan was a reformed troll. He’d created 8chan, but he had a change of heart after the man responsible for the 2019 mass shootings at two mosques in Christchurch, New Zealand, posted his manifesto on the forum in advance and inscribed 8chan memes on the weapons he used to kill 51 people… He referred to Gelinas’s site in a tweet as “the main vector for Q radicalization.”

Days after Gelinas was outed as the man running the site, Citigroup “put him on administrative leave and his name was removed from the company’s internal directory. He was later terminated.”

One Solar/Wind Energy Company Is Now More Valuable Than Exxon Mobil

The world’s biggest provider of wind and solar energy is now more valuable than the giant oil company Exxon Mobil, “once the largest public company on Earth,” reports Bloomberg:

NextEra ended Wednesday with a market value of $145 billion, topping Exxon’s $142 billion… NextEra has emerged as the world’s most valuable utility, largely by betting big on renewables, especially wind. Exxon has seen its fortunes shift in the other direction as electric vehicles become more widespread and the fight against climate change takes on more urgency. “People believe that renewable energy is a growth story and that oil and gas is a declining story,” said Jigar Shah, co-founder of the green financier Generate.

NextEra had about 18 gigawatts of wind and solar farms at the end of last year, enough to power 13.5 million homes. And it’s expanding significantly, with contracts to add another 12 gigawatts of renewables. Its shares have surged more than 20% this year. At the same time, Exxon’s shares have tumbled more than 50% as the pandemic quashed global demand for fuels. The company’s second-quarter loss was its worst of the modern era and, in August, Exxon was ejected from the Dow Jones Industrial Average. The company was worth $525 billion in 2007, more than three times its current value.

Peter McNally, an energy expert at research firm Third Bridge, tells ExtremeTech that it all comes down to the lower cost of renewable energy.

“Alternative power is now getting competitive with traditional forms of electricity, coal and natural gas fired generation.”

London Installed AI Cameras To Monitor Social Distancing, Lockdown Restrictions

Artificial-intelligence cameras are being used in London and other cities in the UK to monitor social distancing. The sensors were initially developed by Vivacity to track the flow of traffic, cyclists and pedestrians and monitor how roads are being used. But when the country went into lockdown in March, Vivacity added an extra feature to the sensors so they could register the distance between pedestrians. This data is shared in a monthly report with the Government.

Vivacity Labs said they have more than 1,000 sensors installed across the UK, in cities including London, Manchester, Oxford, Cambridge and Nottingham. Vivacity’s chief operating officer, Peter Mildon, told BBC Radio Kent on Wednesday that the data is potentially “useful for informing policy decisions” regarding lockdown measures. He stressed that the sensors are not CCTV: they operate as data-collating devices rather than cameras that store footage. “They are not recording any footage, they are not streaming any footage and no one is actually watching it,” he said.

Mr Mildon added: “We’re creating a set of statistics on how behaviour is changing in terms of how people are staying close together or apart. And it is that data that is then useful for informing policy decisions on whether there should be a two-metre rule or a one-metre-plus rule, or whether local lockdown measures are having the impact they are envisioned to have.”
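Vivacity has not published how its sensors compute these statistics, but the kind of footage-free aggregate Mildon describes could in principle be produced along the lines of the sketch below. This is a speculative illustration, not Vivacity’s pipeline: it assumes pedestrian detections have already been projected onto ground-plane coordinates in metres, and it keeps only counts, never images.

import numpy as np
from scipy.spatial.distance import pdist

def distancing_stats(positions_m, threshold_m=2.0):
    """Count pedestrian pairs closer than threshold_m.

    positions_m is an (N, 2) array of ground-plane coordinates in metres;
    only aggregate counts are returned, no imagery is retained.
    """
    positions_m = np.asarray(positions_m, dtype=float)
    if len(positions_m) < 2:
        return {"pedestrians": len(positions_m), "close_pairs": 0}
    gaps = pdist(positions_m)  # every pairwise distance, in metres
    return {"pedestrians": len(positions_m),
            "close_pairs": int((gaps < threshold_m).sum())}

# One synthetic frame: three people, two of them about 1.5 m apart.
print(distancing_stats([[0.0, 0.0], [1.5, 0.0], [6.0, 4.0]]))
# {'pedestrians': 3, 'close_pairs': 1}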

Google is Giving Data To Police Based on Search Keywords, Court Docs Show

There are few things as revealing as a person’s search history, and police typically need a warrant on a known suspect to demand that sensitive information. But a recently unsealed court document shows that investigators can also work in reverse, asking Google to disclose everyone who searched for a keyword rather than requesting information on a known suspect.

In August, police arrested Michael Williams, an associate of singer and accused sex offender R. Kelly, for allegedly setting fire to a witness’ car in Florida. Investigators linked Williams to the arson, as well as witness tampering, after sending a search warrant to Google that requested information on “users who had searched the address of the residence close in time to the arson.”

The July court filing was unsealed on Tuesday. Detroit News reporter Robert Snell tweeted about the filing after it was unsealed. Court documents showed that Google provided the IP addresses of people who searched for the arson victim’s address, which investigators tied to a phone number belonging to Williams. Police then used the phone number records to pinpoint the location of Williams’ device near the arson, according to court documents. The original warrant sent to Google is still sealed, but the report provides another example of a growing trend of data requests to the search engine giant in which investigators demand data on a large group of users rather than a specific request on a single suspect. “This ‘keyword warrant’ evades the Fourth Amendment checks on police surveillance,” said Albert Fox Cahn, the executive director of the Surveillance Technology Oversight Project. “When a court authorizes a data dump of every person who searched for a specific term or address, it’s likely unconstitutional.”

EU Lawmakers Ask Jeff Bezos Whether Amazon Spies on Politicians

A cross-party group of MEPs has written to Amazon’s chief executive, Jeff Bezos, demanding information on the online retailer’s monitoring of trade union activists and politicians in response to deleted job postings that described unions as “threats.” The letter, from 37 members of the European parliament, said they were concerned Amazon deliberately targeted workers seeking to organise, and also questioned whether the company had “spied” on politicians. Trade unions last week called for a European commission investigation into whether Amazon’s monitoring of workers was legal, after two job posts on the US company’s website advertised “intelligence analyst” roles that referred to “labor organizing threats against the company.” The advertisements, aimed at candidates with law enforcement or military experience, also mentioned the monitoring of “hostile political leaders.” The posts grouped organised labour with hate groups and terrorism, two illegal activities, and listed French and Spanish language skills among the preferred qualifications, suggesting European workers could be targets. Amazon deleted the posts after Vice News first reported on them.

These Shocking Charts Show Just How Much Richer Billionaires Have Gotten Since Covid

The world’s wealthiest individuals have become even richer during the coronavirus pandemic as the prices of financial assets have been supported by widespread policy intervention while employment and wages, well, not so much.

The richest five billionaires, Jeff Bezos, Bill Gates, Mark Zuckerberg, Warren Buffett, and Elon Musk, saw a 59% increase in their total wealth, from $358 billion to $569 billion.

Exxon’s Plan for Surging Carbon Emissions Revealed in Leaked Documents

Exxon Mobil Corp. had plans to increase annual carbon-dioxide emissions by as much as the output of the entire nation of Greece, an analysis of internal documents reviewed by Bloomberg shows, setting one of the largest corporate emitters against international efforts to slow the pace of warming.

The drive to expand both fossil-fuel production and planet-warming pollution has come at a time when some of Exxon’s rivals, such as BP Plc and Royal Dutch Shell Plc, are moving to curb oil production and zero out emissions. Exxon’s own assessment of its $210 billion investment strategy shows yearly emissions rising 17% by 2025, according to internal projections.

The internal estimates reflect only a small portion of Exxon’s total contribution to climate change. Greenhouse gases from direct operations, such as those measured by Exxon, typically account for a fifth of the total at a large oil company; most emissions come from customers burning fuel in vehicles or other end uses, which the Exxon documents don’t account for.

That means that, had the company accounted for so-called Scope 3 emissions, the full climate impact of Exxon’s growth strategy would likely be about five times its own estimate, or roughly 100 million tons of additional carbon dioxide. If its plans are realized, Exxon would add to the atmosphere the annual emissions of a small, developed nation, or 26 coal-fired power plants.

More than 14m tonnes of plastic believed to be at the bottom of the ocean

At least 14m tonnes of plastic pieces less than 5mm wide are likely sitting at the bottom of the world’s oceans, according to an estimate based on new research.

Analysis of ocean sediments from as deep as 3km suggests there could be more than 30 times as much plastic at the bottom of the world’s oceans as there is floating at the surface.