A guide to deleting Zoom and replacing it with privacy and security alternatives such as Signal or Jitsi.
A guide to deleting your Google account.
The challenges to humanity posed by the digital future, the first detailed examination of the unprecedented form of power called “surveillance capitalism,” and the quest by powerful corporations to predict and control our behavior. Shoshana Zuboff provides startling insights into the phenomenon that she has named surveillance capitalism. The stakes could not be higher: a global architecture of behavior modification threatens human nature in the twenty-first century just as industrial capitalism disfigured the natural world in the twentieth. Zuboff vividly brings to life the consequences as surveillance capitalism advances from Silicon Valley into every economic sector. Vast wealth and power are accumulated in ominous new “behavioral futures markets,” where predictions about our behavior are bought and sold, and the production of goods and services is subordinated to a new “means of behavioral modification.”
The threat has shifted from a totalitarian Big Brother state to a ubiquitous digital architecture: a “Big Other” operating in the interests of surveillance capital. Here is the crucible of an unprecedented form of power marked by extreme concentrations of knowledge and free from democratic oversight. Zuboff’s comprehensive and moving analysis lays bare the threats to twenty-first century society: a controlled “hive” of total connection that seduces with promises of total certainty for maximum profit — at the expense of democracy, freedom, and our human future.
With little resistance from law or society, surveillance capitalism is on the verge of dominating the social order and shaping the digital future, if we let it.
A comprehensive resource to help you delete Facebook.
The Israeli spyware firm has signed contracts with Bahrain, Oman and Saudi Arabia. Despite its claims, NSO exercises little control over use of its software, which dictatorships can use to monitor dissidents.
The Israeli firm NSO Group Technologies, whose software is used to hack into cellphones, has in the past few years sold its Pegasus spyware for hundreds of millions of dollars to the United Arab Emirates and other Persian Gulf States, where it has been used to monitor anti-regime activists, with the encouragement and the official mediation of the Israeli government.
NSO is one of the most active Israeli companies in the Gulf, and its Pegasus 3 software permits law enforcement authorities to hack into cellphones, copy their contents and sometimes even to control their camera and audio recording capabilities. The company’s vulnerability researchers work to identify security threats and can hack into mobile devices independently (without the aid of an unsuspecting user, who, for example, clicks on a link).
A study analyzing patterns in online comments found that liberals and conservatives use different words to express similar ideas.
Researchers at Carnegie Mellon University collected more than 86.6 million comments from more than 6.5 million users on 200,000 YouTube videos, then analyzed them using an AI technique normally employed to translate between two languages.
The researchers found that people on opposing sides of the political divide often use different words to express similar ideas. For instance, the term “mask” among liberal commenters is roughly equivalent to the term “muzzle” for conservatives. Similar pairings were seen for “liberals” and “libtards” as well as “solar” and “fossil.”
“We are practically speaking different languages—that’s a worrisome thing,” says Ashique KhudaBukhsh, a researcher on the Carnegie Mellon team. “If ‘mask’ translates to ‘muzzle,’ you immediately know that there is a huge debate surrounding masks and freedom of speech.”
In the case of politically tinged comments, the researchers found that different words occupy a similar place in the lexicon of each community. The paper, which has been posted online but is not yet peer reviewed, looked at comments posted beneath the videos on four channels spanning left- and right-leaning US news—MSNBC, CNN, Fox News, and OANN.
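The core idea can be approximated with simple distributional statistics: if two communities use different words in very similar contexts, those words can be paired as rough “translations.” The sketch below is a minimal toy version of that idea, not the paper’s actual method (the researchers used a machine-translation-style embedding approach); the sample comments, function names, and the two-word context window are all invented for illustration.

```python
from collections import Counter
import math

def context_vectors(comments, window=2):
    """Map each word to a Counter of words appearing near it (its contexts)."""
    vecs = {}
    for comment in comments:
        tokens = comment.lower().split()
        for i, word in enumerate(tokens):
            # Words within `window` positions on either side count as context.
            ctx = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
            vecs.setdefault(word, Counter()).update(ctx)
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse count vectors (Counters)."""
    num = sum(a[k] * b[k] for k in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def translate(word, vecs_a, vecs_b):
    """Find the word in community B whose contexts best match `word` in community A."""
    target = vecs_a[word]
    return max(vecs_b, key=lambda w: cosine(target, vecs_b[w]))

# Tiny invented corpora standing in for two comment communities.
liberal = ["wearing a mask keeps people safe", "a mask keeps everyone safe"]
conservative = ["wearing a muzzle keeps people quiet", "a muzzle keeps everyone quiet"]

va = context_vectors(liberal)
vb = context_vectors(conservative)
print(translate("mask", va, vb))  # "muzzle" — same contexts, different word
```

Because “mask” and “muzzle” occupy the same slot in structurally identical sentences, their context vectors match almost exactly, which is the signal the real study exploits at scale across millions of comments.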
KhudaBukhsh says social networks might use techniques like the one his team developed to build bridges between warring communities. A network could surface comments that avoid contentious or “foreign” terms, instead showing ones that represent common ground, he suggests. “Go to any social media platform; it has become so toxic, and it’s almost like there is no known interaction” between users with different political viewpoints, he says.
But Morteza Dehghani, an associate professor at the University of Southern California who studies social media using computational methods, finds the approach problematic. He notes that the Carnegie Mellon paper considers “BLM” (Black Lives Matter) and “ALM” (all lives matter) a “translatable” pair, akin to “mask” and “muzzle.”
“BLM and ALM are not translations of each other,” he says. “One makes salient centuries of slavery, abuse, racism, discrimination, and fights for justice, while the other one tries to erase this history.”
Dehghani says it would be a mistake to use computational methods that oversimplify issues and lack nuance. “What we need is not machine translation,” he says. “What we need is perspective-taking and explanation—two things that AI algorithms are notoriously bad at.”
For many of us, that unsettling feeling of being watched is all too real. After all, we live in a world of mass surveillance, from facial recognition to online tracking – governments and tech companies are harvesting intimate information about billions of people. Targeted surveillance is slightly different. It’s the use of technology to spy on specific people.
You may think this is fine, because aren’t people only targeted when they’ve done something wrong? Think again.
From Mexico to the Middle East, governments are wielding a range of sophisticated cyber-tools to unlawfully spy on their critics. A seemingly innocuous missed call, a personalized text message, or a split-second redirect to a malicious website is all it takes: the spyware is installed without you ever being aware.
The people targeted are often journalists, bloggers and activists (including Amnesty’s own staff) voicing inconvenient truths. They may be exposing corrupt deals, demanding electoral reform, or promoting the right to privacy. Their defence of human rights puts them at odds with their governments. Rather than listen, governments prefer to shut them down. And when governments attack the people who are defending our rights, then we’re all at risk.
The authorities use clever cyber-attacks to access users’ phones and computers. Once in, they can find out who their contacts are, their passwords, their social media habits, their texts. They can record conversations. They can find out everything about that person, tap into their network, find out about their work, and destroy it. Since 2017, Amnesty’s own research has uncovered attacks like these in Egypt, India, Morocco, Pakistan, Saudi Arabia, UAE, Qatar and Uzbekistan.
Remember, the users we’re talking about are human rights activists, among them journalists, bloggers, poets, teachers and so many others who bravely take a stand for justice, equality and freedom. They take these risks so we don’t have to. But voicing concerns about government conduct and policy makes them unpopular with the authorities. So much so that governments resort to dirty tricks, smearing activists and re-branding them as criminals and terrorists.
Some of the most insidious attacks on human rights defenders have been waged using spyware manufactured by NSO Group. A major player in the shadowy surveillance industry, they specialise in cyber-surveillance tools.
NSO is responsible for Pegasus malware, a powerful programme that can turn on your phone’s microphone and camera without your knowledge. It can also access your emails and texts, track your keystrokes and collect data about you. The worst thing is you don’t have to do anything to trigger it – Pegasus can be installed without you ever knowing.
NSO says it is creating technology that helps governments fight terrorism and crime. But as early as 2018, when one of our own staff was targeted through WhatsApp, our Security Lab discovered a network of more than 600 suspicious websites owned by NSO that could be used to spy on journalists and activists around the world. We were not wrong. In 2019, thousands of people received scam WhatsApp calls, leading WhatsApp to later sue NSO. More recently we documented the cases of Moroccan activists who had been similarly targeted.
An AI tool that “removes” items of clothing from photos has targeted more than 100,000 women, some of whom appear to be under the age of 18.
The still images of nude women are generated by an AI that “removes” items of clothing from a non-nude photo. Every day the bot sends out a gallery of new images to an associated Telegram channel which has almost 25,000 subscribers. The sets of images are frequently viewed more than 3,000 times. A separate Telegram channel that promotes the bot has more than 50,000 subscribers.
Some of the images produced by the bot are glitchy, but many could pass for genuine. “It is maybe the first time that we are seeing these at a massive scale,” says Giorgio Patrini, CEO and chief scientist at deepfake detection company Sensity, which conducted the research. The company is publicizing its findings in a bid to pressure services hosting the content to remove it, but it is not publicly naming the Telegram channels involved.
The actual number of women targeted by the deepfake bot is likely much higher than 104,000. Sensity was only able to count images shared publicly, and the bot gives people the option to generate photos privately. “Most of the interest for the attack is on private individuals,” Patrini says. “The very large majority of those are for people that we cannot even recognize.”
As a result, it is likely very few of the women who have been targeted know that the images exist. The bot and a number of Telegram channels linked to it are primarily Russian-language but also offer English-language translations. In a number of cases, the images created appear to contain girls who are under the age of 18, Sensity adds, saying it has no way to verify this but has informed law enforcement of their existence.
Unlike other nonconsensual explicit deepfake videos, which have racked up millions of views on porn websites, these images require no technical knowledge to create. The process is automated and can be used by anyone—it’s as simple as uploading an image to any messaging service.