Surveillance capitalism is everywhere. But it’s not the result of some wrong turn or a rogue abuse of corporate power — it’s the system working as intended. This is the subject of Cory Doctorow’s new book.
The malware that French law enforcement deployed en masse onto Encrochat devices, a large encrypted phone network using Android phones, had the capability to harvest “all data stored within the device,” which was expected to include chat messages, geolocation data, usernames, passwords, and more, according to a document obtained by Motherboard. From the report: The document adds more specifics around the law enforcement hack and subsequent takedown of Encrochat earlier this year. Organized crime groups across Europe and the rest of the world heavily used the network before its seizure, in many cases to facilitate large scale drug trafficking. The operation is one of, if not the, largest law enforcement mass hacking operations to date, with investigators obtaining more than a hundred million encrypted messages. “The NCA has been collaborating with the Gendarmerie on Encrochat for over 18 months, as the servers are hosted in France. The ultimate objective of this collaboration has been to identify and exploit any vulnerability in the service to obtain content,” the document reads, referring to both the UK’s National Crime Agency and one of the national police forces of France. As well as the geolocation, chat messages, and passwords, the law enforcement malware also told infected Encrochat devices to provide a list of WiFi access points near the device, the document reads.
Police across Canada are increasingly using controversial algorithms to predict where crimes could occur, who might go missing, and to help them determine where they should patrol, despite fundamental human rights concerns, a new report has found.
To Surveil and Predict: A Human Rights Analysis of Algorithmic Policing in Canada is the result of a joint investigation by the University of Toronto’s International Human Rights Program (IHRP) and Citizen Lab. It details how, in the words of the report’s authors, “law enforcement agencies across Canada have started to use, procure, develop, or test a variety of algorithmic policing methods,” with potentially dire consequences for civil liberties, privacy and other Charter rights, the authors warn.
The report breaks down how police are using or considering the use of algorithms for several purposes, including predictive policing, which uses historical police data to predict where crime will occur in the future. Right now in Canada, police are using algorithms to analyze data about individuals to predict who might go missing, with the goal of one day using the technology in other areas of the criminal justice system. Some police services are using algorithms to automate the mass collection and analysis of public data, including social media posts, and to apply facial recognition to existing mugshot databases for investigative purposes. “Algorithmic policing technologies are present or under consideration throughout Canada in the forms of both predictive policing and algorithmic surveillance tools,” the report reads.
More than 2,400 police agencies have entered contracts with Clearview AI, a controversial facial recognition firm, according to comments made by Clearview AI CEO Hoan Ton-That in an interview with Jason Calacanis on YouTube.
The hour-long interview references an investigation by The New York Times published in January, which detailed how Clearview AI scraped data from sites including Facebook, YouTube, and Venmo to build its database. The scale of that database and the methods used to construct it were already controversial before the summer of protests against police violence. “It’s an honor to be at the center of the debate now and talk about privacy,” Ton-That says in the interview, going on to call the Times investigation “actually extremely fair.” “Since then, there’s been a lot of controversy, but fundamentally, this is such a great tool for society,” Ton-That says.
Ton-That also gave a few more details on how the business runs. Clearview is paid depending on how many licenses a client adds, among other factors, but Ton-That describes the licenses as “pretty inexpensive, compared to what’s come previously” in his interview. Ton-That ballparks Clearview’s fees as $2,000 a year for each officer with access. According to Ton-That, Clearview AI is primarily used by detectives.
NROL-44 is a huge signals intelligence, or SIGINT, satellite, says David Baker, a former NASA scientist who worked on Apollo and Shuttle missions, has written numerous books (including U.S. Spy Satellites), and edits SpaceFlight magazine. “SIGINT satellites are the core of national government, military security satellites. They are massive things for which no private company has any purpose,” says Baker… “It weighs more than five tons. It has a huge parabolic antenna which unfolds to a diameter of more than 100 meters in space, and it will go into an equatorial plane of Earth at a distance of about 36,000 kilometers (22,000 miles),” says Baker…
Spy satellites “hoover up” hundreds of thousands of cell phone calls and scour the dark web for terrorist activity. “The move from wired communication to digital and wireless is a godsend to governments because you can’t cut into wires from a satellite, but you can literally pick up cell phone towers which are radiating this stuff into the atmosphere. It takes a massive antenna, but you’re able to sit over one spot and listen to all the communications traffic,” says Baker…
Some people worry about congestion in space, or satellites bumping into each other, and the threat of a collision causing space debris that could damage other satellites or knock out communications networks. But that may have benefits, too — little bits of spy satellite can hide in all that mess and connect wirelessly to create a “virtual satellite,” says Baker. “There are sleeper satellites which look like debris. You launch all the parts separately and disperse them into various orbits. So, you would have sensors on one bit, an amplifier on another bit, a processor on another, and they’ll be orbiting relatively immersed in space debris.”
“Space debris is very good for the space defense industry,” says Baker, “because the more there is, the more you can hide in it.”
Researchers from Mozilla report in a study that web browsing histories (the lists of websites a user has visited) can uniquely identify users (PDF). In their study, that was the case for 99% of users. Treating web browsing histories like fingerprints, the researchers analyzed how users can be re-identified from nothing more than a coarsened list of the websites they visited.
In doing so they upheld and confirmed a previous study from 2012, prompting the author of the original study to say that web browsing histories are now personal data subject to privacy regulations like the GDPR.
The sensitivity of web browsing history data calls into question the laws that allow ISPs to sell such histories.
The now-vindicated author of the 2012 study added this emphatic note in their blog post. “Web browsing histories are personal data. Deal with it.”
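To see why a mere list of visited sites can act like a fingerprint, here is a minimal sketch (the function name and toy histories are hypothetical, and the study’s actual methodology is more sophisticated): coarsen each history to its set of domains, then hash the sorted set into a compact identifier. Even histories that overlap heavily still produce distinct identifiers.

```python
import hashlib

def history_fingerprint(domains):
    """Coarsen a browsing history to its set of visited domains,
    then hash the sorted set into a compact, stable identifier."""
    canonical = ",".join(sorted(set(domains)))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Two users sharing a popular site still get distinct fingerprints.
alice = ["news.example", "bank.example", "forum.example"]
bob = ["news.example", "shop.example", "video.example"]

print(history_fingerprint(alice) == history_fingerprint(alice))  # stable: True
print(history_fingerprint(alice) == history_fingerprint(bob))    # distinct: False
```

Because the identifier is stable across visits (order and repeats don’t matter), anyone holding two coarsened histories can tell whether they came from the same person, which is exactly the re-identification risk the study measures.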
If we’re going to break Big Tech’s death grip on our digital lives, we’re going to have to fight monopolies. That may sound pretty mundane and old-fashioned, something out of the New Deal era, while ending the use of automated behavioral modification feels like the plotline of a really cool cyberpunk novel… But trustbusters once strode the nation, brandishing law books, terrorizing robber barons, and shattering the illusion of monopolies’ all-powerful grip on our society. The trustbusting era could not begin until we found the political will — until the people convinced politicians they’d have their backs when they went up against the richest, most powerful men in the world. Could we find that political will again…?
That’s the good news: With a little bit of work and a little bit of coalition building, we have more than enough political will to break up Big Tech and every other concentrated industry besides. First we take Facebook, then we take AT&T/WarnerMedia. But here’s the bad news: Much of what we’re doing to tame Big Tech instead of breaking up the big companies also forecloses on the possibility of breaking them up later… Allowing the platforms to grow to their present size has given them a dominance that is nearly insurmountable — deputizing them with public duties to redress the pathologies created by their size makes it virtually impossible to reduce that size. Lather, rinse, repeat: If the platforms don’t get smaller, they will get larger, and as they get larger, they will create more problems, which will give rise to more public duties for the companies, which will make them bigger still.
We can work to fix the internet by breaking up Big Tech and depriving them of monopoly profits, or we can work to fix Big Tech by making them spend their monopoly profits on governance. But we can’t do both. We have to choose between a vibrant, open internet or a dominated, monopolized internet commanded by Big Tech giants that we struggle with constantly to get them to behave themselves…
Big Tech wired together a planetary, species-wide nervous system that, with the proper reforms and course corrections, is capable of seeing us through the existential challenge of our species and planet. Now it’s up to us to seize the means of computation, putting that electronic nervous system under democratic, accountable control.
With “free, fair, and open tech” we could then tackle our other urgent problems “from climate change to social change” — all with collective action, Doctorow argues. And “The internet is how we will recruit people to fight those fights, and how we will coordinate their labor.”
“Tech is not a substitute for democratic accountability, the rule of law, fairness, or stability — but it’s a means to achieve these things.”
“Imagine a world where wireless devices are as small as a grain of salt,” writes futurist Bernard Marr in Forbes, describing a technology being researched by companies like IBM, General Electric, and Cisco. “These miniaturized devices have sensors, cameras and communication mechanisms to transmit the data they collect back to a base in order to process.
“Today, you no longer have to imagine it: microelectromechanical systems (MEMS), often called motes, are real and they very well could be coming to a neighborhood near you. Whether this fact excites or strikes fear in you, it’s good to know what it’s all about.” Outfitted with miniature sensors, MEMS can detect everything from light to vibrations to temperature. With an incredible amount of power packed into their small size, MEMS combine sensing, an autonomous power supply, computing and wireless communication in a space typically just a few millimeters across. With such a small size, these devices can stay suspended in an environment just like a particle of dust. They can:
– Collect data including acceleration, stress, pressure, humidity, sound and more from sensors
– Process the data with what amounts to an onboard computer system
– Store the data in memory
– Wirelessly communicate the data to the cloud, a base or other MEMs
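The four capabilities listed above amount to a single sense–process–store–transmit loop. The sketch below is a purely hypothetical illustration of that loop, not code for any real mote; the class name, the simulated temperature reading, and the alert threshold are all invented:

```python
import random

class Mote:
    """Hypothetical smart-dust node: sense, process, store, transmit."""

    def __init__(self, alert_threshold_c=24.0):
        self.alert_threshold_c = alert_threshold_c
        self.memory = []  # tiny onboard store

    def sense(self):
        # Stand-in for a real MEMS temperature sensor reading.
        return {"temp_c": 20.0 + random.random() * 8.0}

    def process(self, reading):
        # Onboard computation: flag readings above the threshold.
        reading["alert"] = reading["temp_c"] > self.alert_threshold_c
        return reading

    def store(self, reading):
        self.memory.append(reading)

    def transmit(self):
        # Radio the stored batch to a base station and clear memory.
        batch, self.memory = self.memory, []
        return batch

mote = Mote()
for _ in range(5):
    mote.store(mote.process(mote.sense()))
print(len(mote.transmit()))  # 5 readings sent; memory now empty
```

Batching readings and transmitting them in bursts, as sketched here, is also how real low-power nodes conserve energy: the radio is the most expensive component to run.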
Because the components that make up these devices are 3D printed as one piece on a commercially available 3D printer, they can handle an incredible amount of complexity, overcoming manufacturing barriers that previously restricted how small devices could be made. The optical lenses created for these miniaturized sensors can capture remarkably high-quality images.
The potential of smart dust to collect information about any environment in incredible detail could impact plenty of things in a variety of industries from safety to compliance to productivity. It’s like multiplying the internet of things technology millions or billions of times over.
Albion College, a small liberal arts school in Michigan, said in June it would allow its nearly 1,500 students to return to campus for the new academic year starting in August. Lectures would be limited in size and the semester would finish by Thanksgiving rather than December. The school said it would test both staff and students upon their arrival to campus and throughout the academic year. But less than two weeks before students began arriving on campus, the school announced it would require them to download and install a contact-tracing app called Aura, which it says will help it tackle any coronavirus outbreak on campus.
There’s a catch. The app is designed to track students’ real-time locations around the clock, and there is no way to opt out. The Aura app lets the school know when a student tests positive for COVID-19. It also comes with a contact-tracing feature that alerts students when they have come into close proximity with a person who tested positive for the virus. But the feature requires constant access to the student’s real-time location, which the college says is necessary to track the spread of any exposure. The school’s mandatory use of the app sparked privacy concerns and prompted parents to launch a petition to make using the app optional.
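To make the privacy stakes concrete, here is a sketch of how a location-based exposure check like the one described could work in principle. This is not Aura’s actual implementation; the function, thresholds, and coordinates are all hypothetical. Note that the check only works at all because both users’ raw GPS traces are available to the server:

```python
import math

def close_contact(ping_a, ping_b, max_m=10.0, max_s=900):
    """Flag two timestamped GPS pings as a possible contact if they
    fall within max_m meters and max_s seconds of each other."""
    # Equirectangular approximation: accurate at contact-tracing distances.
    lat = math.radians((ping_a["lat"] + ping_b["lat"]) / 2)
    dx = math.radians(ping_b["lon"] - ping_a["lon"]) * math.cos(lat)
    dy = math.radians(ping_b["lat"] - ping_a["lat"])
    meters = 6371000.0 * math.hypot(dx, dy)
    return meters <= max_m and abs(ping_a["t"] - ping_b["t"]) <= max_s

student = {"lat": 42.2431, "lon": -84.7450, "t": 1000}
positive_case = {"lat": 42.24311, "lon": -84.74501, "t": 1200}
print(close_contact(student, positive_case))  # True: ~1 m and 200 s apart
```

This is the core trade-off the petitioning parents object to: a Bluetooth-proximity design can answer “were these two phones near each other?” without any location data, whereas a GPS-based design like this one requires continuously collecting everyone’s whereabouts.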
A newly released document shows the U.S. Secret Service went through a controversial social media surveillance company to purchase location information on Americans’ movements, no warrant necessary. Babel Street is a shadowy organization that offers a product called Locate X that is reportedly used to gather anonymized location data from a host of popular apps that users have unwittingly installed on their phones. When we say “unwittingly,” we mean that not everyone is aware that random innocuous apps often bundle and anonymize their data to be sold off to the highest bidder.
Back in March, Protocol reported that U.S. Customs and Border Protection had a contract to use Locate X and that sources inside the secretive company described the system’s capabilities as allowing a user “to draw a digital fence around an address or area, pinpoint mobile devices that were within that area, and see where else those devices have traveled, going back months.” Protocol’s sources also said that the Secret Service had used the Locate X system in the course of investigating a large credit card skimming operation. On Monday, Motherboard confirmed the investigation when it published an internal Secret Service document it acquired through a Freedom of Information Act (FOIA) request. (You can view the full document here.) The document covers a relationship between Secret Service and Babel Street from September 28, 2017, to September 27, 2018. In the past, the Secret Service has reportedly used a separate social media surveillance product from Babel Street, and the newly-released document totals fees paid after the addition of the Locate X license as $1,999,394.
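A geofence query like the one Protocol’s sources describe can be sketched in a few lines. Nothing here reflects Locate X’s actual internals; the data shapes, function names, and sample coordinates are assumptions for illustration. The two steps mirror the description above: filter pings to those inside a radius around an address, then pull each matching device’s wider movement history:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def devices_in_fence(pings, center, radius_km):
    """IDs of devices with at least one ping inside the digital fence."""
    lat0, lon0 = center
    return {p["device"] for p in pings
            if haversine_km(lat0, lon0, p["lat"], p["lon"]) <= radius_km}

def movement_history(pings, device_id):
    """Everywhere a flagged device has been seen, in time order."""
    return sorted((p for p in pings if p["device"] == device_id),
                  key=lambda p: p["t"])

pings = [
    {"device": "A", "lat": 38.8977, "lon": -77.0365, "t": 1},  # at the fence center
    {"device": "A", "lat": 39.2904, "lon": -76.6122, "t": 2},  # later, in Baltimore
    {"device": "B", "lat": 40.7128, "lon": -74.0060, "t": 1},  # New York, outside
]
inside = devices_in_fence(pings, (38.8977, -77.0365), radius_km=0.5)
print(inside)  # {'A'}
print([p["t"] for p in movement_history(pings, "A")])  # [1, 2]
```

The second step is what makes such tools so invasive: once a device is flagged inside the fence, its entire stored trace — potentially going back months — becomes available, effectively de-anonymizing “anonymized” data.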
Privacy.net exists to help guard your privacy and security online. It highlights some of the violations of privacy by governments, corporations and hackers that most of the general public either ignore or simply are not aware of.
The iPhone that Moroccan journalist Omar Radi used to contact his sources also allowed his government to spy on him (and at least two other journalists), reports the Toronto Star, citing new research from Amnesty International.
Their government could read every email, text and website visited; listen to every phone call and watch every video conference; download calendar entries, monitor GPS coordinates, and even turn on the camera and microphone to see and hear where the phone was at any moment.
Yet Radi was trained in encryption and cyber security. He hadn’t clicked on any suspicious links and didn’t have any missed calls on WhatsApp — both well-documented ways a cell phone can be hacked. Instead, a report published Monday by Amnesty International shows Radi was targeted by a new and frighteningly stealthy technique. All he had to do was visit one website. Any website.
Forensic evidence gathered by Amnesty International on Radi’s phone shows that it was infected by “network injection,” a fully automated method where an attacker intercepts a cellular signal when it makes a request to visit a website. In milliseconds, the web browser is diverted to a malicious site and spyware code is downloaded that allows remote access to everything on the phone. The browser then redirects to the intended website and the user is none the wiser.
Tracking entire populations to combat the pandemic now could open the doors to more invasive forms of government snooping later.
In South Korea, government agencies are harnessing surveillance-camera footage, smartphone location data and credit card purchase records to help trace the recent movements of coronavirus patients and establish virus transmission chains. In Lombardy, Italy, the authorities are analyzing location data transmitted by citizens’ mobile phones to determine how many people are obeying a government lockdown order and the typical distances they move every day. About 40 percent are moving around “too much,” an official recently said. In Israel, the country’s internal security agency is poised to start using a cache of mobile phone location data — originally intended for counterterrorism operations — to try to pinpoint citizens who may have been exposed to the virus.
As countries around the world race to contain the pandemic, many are deploying digital surveillance tools as a means to exert social control, even turning security agency technologies on their own civilians. Health and law enforcement authorities are understandably eager to employ every tool at their disposal to try to hinder the virus — even as the surveillance efforts threaten to alter the precarious balance between public safety and personal privacy on a global scale. Yet ratcheting up surveillance to combat the pandemic now could permanently open the doors to more invasive forms of snooping later. It is a lesson Americans learned after the terrorist attacks of Sept. 11, 2001, civil liberties experts say. Nearly two decades later, law enforcement agencies have access to higher-powered surveillance systems, like fine-grained location tracking and facial recognition — technologies that may be repurposed to further political agendas like anti-immigration policies. Civil liberties experts warn that the public has little recourse to challenge these digital exercises of state power.
Banjo, an artificial intelligence firm that works with police, used a shadow company to create an array of Android and iOS apps that looked innocuous but were specifically designed to secretly scrape social media. The news signifies an abuse of data by a government contractor, with Banjo going far beyond what companies that scrape social networks usually do. Banjo created a secret company named Pink Unicorn Labs, according to three former Banjo employees, with two of them adding that the company developed the apps. This was done to avoid detection by social networks, two of the former employees said.
Three of the apps created by Pink Unicorn Labs were called “One Direction Fan App,” “EDM Fan App,” and “Formula Racing App.” Motherboard found these three apps on archive sites and downloaded and analyzed them, as did an independent expert. The apps — which appear to have been originally compiled in 2015 and were on the Play Store until 2016 according to Google — outwardly had no connection to Banjo, but an analysis of its code indicates connections to the company. This aspect of Banjo’s operation has some similarities with the Cambridge Analytica scandal, with multiple sources comparing the two incidents. […] The company has not publicly explained how it specifically scrapes social media apps. Motherboard found the apps developed by Pink Unicorn Labs included code mentioning signing into Facebook, Twitter, Instagram, Russian social media app VK, FourSquare, Google Plus, and Chinese social network Sina Weibo. The apps could have scraped social media “by sending the saved login token to a server for Banjo to use later, or by using the app itself to scrape information,” reports Motherboard, noting that it’s not entirely clear which method Banjo used. “Motherboard found that the apps when opened made web requests to the domain ‘pulapi.com,’ likely referring to Pink Unicorn Labs, but the site that would provide a response to the app is currently down.”
Last weekend, Motherboard reported that Banjo signed a $20.7 million contract with Utah in 2019 that granted the company access to the state’s traffic, CCTV, and public safety cameras. “Banjo promises to combine that input with a range of other data such as satellites and social media posts to create a system that it claims alerts law enforcement of crimes or events in real-time.”
“So this is creepy,” writes a Forbes cybersecurity reporter, saying Airbnb “has put aside the stories of hosts secretly spying on guests” to promote a new line of devices Forbes calls “surveillance bugs to make sure guests behave.”
“… we’re hurtling toward a world where almost everything we own is monitoring us in some way, and I’m not sure that’s actually going to be a safer world.”
Amazon-owned home security camera company Ring has fired employees for improperly accessing Ring users’ video data, Motherboard reported Wednesday, citing a letter the company wrote to Senators. The news highlights a risk across many different tech companies: employees may abuse access granted as part of their jobs to look at customer data or information. In Ring’s case this data can be particularly sensitive, though, as customers often put the cameras inside their homes. “We are aware of incidents discussed below where employees violated our policies,” the letter from Ring, dated January 6th, reads. “Over the last four years, Ring has received four complaints or inquiries regarding a team member’s access to Ring video data,” it continues. Ring explains that although each of these individuals was authorized to view video data, their attempted access went beyond what they needed for their jobs.
As governments and companies invest more in security networks, hundreds of millions more surveillance cameras will be watching the world in 2021, mostly in China, according to a new report. The report, from industry researcher IHS Markit, to be released Thursday, said the number of cameras used for surveillance would climb above 1 billion by the end of 2021. That would represent an almost 30% increase from the 770 million cameras today. China would continue to account for a little over half the total. Fast-growing, populous nations such as India, Brazil and Indonesia would also help drive growth in the sector, the report said. IHS analyst Oliver Philippou said government programs to implement widespread video surveillance to monitor the public would be the biggest catalyst for the growth in China. City surveillance also was driving demand elsewhere.
Police officers who download videos captured by homeowners’ Ring doorbell cameras can keep them forever and share them with whomever they’d like without providing evidence of a crime, the Amazon-owned firm told a lawmaker this month… Police in those communities can use Ring software to request up to 12 hours of video from anyone within half a square mile of a suspected crime scene, covering a 45-day time span, wrote Brian Huseman, Amazon’s vice president of public policy. Police are required to include a case number for the crime they are investigating, but not any other details or evidence related to the crime or their request.
Sen. Edward Markey, D-Mass., said in a statement that Ring’s policies showed that the company had failed to enact basic safeguards to protect Americans’ privacy. “Connected doorbells are well on their way to becoming a mainstay of American households, and the lack of privacy and civil rights protections for innocent residents is nothing short of chilling,” he said. “If you’re an adult walking your dog or a child playing on the sidewalk, you shouldn’t have to worry that Ring’s products are amassing footage of you and that law enforcement may hold that footage indefinitely or share that footage with any third parties.”
While Ring tells users not to film public roads or sidewalks, Ring isn’t enforcing that, according to the article. Amazon argues that it’s ultimately the user’s responsibility.
And will their cameras start using facial recognition algorithms? Amazon answers that that feature is “contemplated but unreleased,” though they add that “We do frequently innovate based on customer demand,” and point out that other competing security cameras are already offering facial-recognition.
Smart TVs are like regular television sets but with an internet connection. With the advent and growth of Netflix, Hulu and other streaming services, most saw internet-connected televisions as a cord-cutter’s dream. But like anything that connects to the internet, that connection opens up smart TVs to security vulnerabilities and hackers. Not only that, many smart TVs come with a camera and a microphone. But as is the case with most other internet-connected devices, manufacturers often don’t put security as a priority. That’s the key takeaway from the FBI’s Portland field office, which just ahead of some of the biggest shopping days of the year posted a warning on its website about the risks that smart TVs pose. “Beyond the risk that your TV manufacturer and app developers may be listening and watching you, that television can also be a gateway for hackers to come into your home. A bad cyber actor may not be able to access your locked-down computer directly, but it is possible that your unsecured TV can give him or her an easy way in the backdoor through your router,” wrote the FBI. The FBI warned that hackers can take control of your unsecured smart TV and, in the worst cases, take control of the camera and microphone to watch and listen in.
More than 60% of Americans think it’s impossible to go through daily life without being tracked by companies or the government, according to a new Pew Research study. It’s not just that Americans (correctly) think companies are collecting their data. They don’t like it. About 69% of Americans are skeptical that companies will use their private information in a way they’re comfortable with, while 79% don’t believe that companies will come clean if they misuse the information. When it comes to who they trust, there are differences by race. About 73% of black Americans, for instance, are at least a little worried about what law enforcement knows about them, compared with 56% of white Americans. But among all respondents, more than 80% were concerned about what social-media sites and advertisers might know. Despite these concerns, more than 80% of Americans feel they have no control over how their information is collected.