Privacy.net exists to help guard your privacy and security online. It highlights some of the violations of privacy by governments, corporations and hackers that most of the general public either ignore or simply are not aware of.
The iPhone that Moroccan journalist Omar Radi used to contact his sources also allowed his government to spy on him (and at least two other journalists), reports the Toronto Star, citing new research from Amnesty International.
Their government could read every email, text and website visited; listen to every phone call and watch every video conference; download calendar entries, monitor GPS coordinates, and even turn on the camera and microphone to see and hear where the phone was at any moment.
Yet Radi was trained in encryption and cybersecurity. He hadn't clicked on any suspicious links and didn't have any missed calls on WhatsApp — both well-documented ways a cell phone can be hacked. Instead, a report published Monday by Amnesty International shows Radi was targeted by a new and frighteningly stealthy technique. All he had to do was visit one website. Any website.
Forensic evidence gathered by Amnesty International on Radi’s phone shows that it was infected by “network injection,” a fully automated method where an attacker intercepts a cellular signal when it makes a request to visit a website. In milliseconds, the web browser is diverted to a malicious site and spyware code is downloaded that allows remote access to everything on the phone. The browser then redirects to the intended website and the user is none the wiser.
Tracking entire populations to combat the pandemic now could open the doors to more invasive forms of government snooping later.
In South Korea, government agencies are harnessing surveillance-camera footage, smartphone location data and credit card purchase records to help trace the recent movements of coronavirus patients and establish virus transmission chains. In Lombardy, Italy, the authorities are analyzing location data transmitted by citizens’ mobile phones to determine how many people are obeying a government lockdown order and the typical distances they move every day. About 40 percent are moving around “too much,” an official recently said. In Israel, the country’s internal security agency is poised to start using a cache of mobile phone location data — originally intended for counterterrorism operations — to try to pinpoint citizens who may have been exposed to the virus.
As countries around the world race to contain the pandemic, many are deploying digital surveillance tools as a means to exert social control, even turning security agency technologies on their own civilians. Health and law enforcement authorities are understandably eager to employ every tool at their disposal to try to hinder the virus — even as the surveillance efforts threaten to alter the precarious balance between public safety and personal privacy on a global scale. Yet ratcheting up surveillance to combat the pandemic now could permanently open the doors to more invasive forms of snooping later. It is a lesson Americans learned after the terrorist attacks of Sept. 11, 2001, civil liberties experts say. Nearly two decades later, law enforcement agencies have access to higher-powered surveillance systems, like fine-grained location tracking and facial recognition — technologies that may be repurposed to further political agendas like anti-immigration policies. Civil liberties experts warn that the public has little recourse to challenge these digital exercises of state power.
Banjo, an artificial intelligence firm that works with police, used a shadow company to create an array of Android and iOS apps that looked innocuous but were specifically designed to secretly scrape social media. The news signifies an abuse of data by a government contractor, with Banjo going far beyond what companies that scrape social networks usually do. Banjo created a secret company named Pink Unicorn Labs, according to three former Banjo employees, two of whom added that the company developed the apps. This was done to avoid detection by social networks, two of the former employees said.
Three of the apps created by Pink Unicorn Labs were called “One Direction Fan App,” “EDM Fan App,” and “Formula Racing App.” Motherboard found these three apps on archive sites and downloaded and analyzed them, as did an independent expert. The apps — which appear to have been originally compiled in 2015 and were on the Play Store until 2016, according to Google — outwardly had no connection to Banjo, but an analysis of their code indicates connections to the company. This aspect of Banjo’s operation has some similarities with the Cambridge Analytica scandal, with multiple sources comparing the two incidents. […] The company has not publicly explained how it specifically scrapes social media apps. Motherboard found the apps developed by Pink Unicorn Labs included code mentioning signing into Facebook, Twitter, Instagram, Russian social media app VK, Foursquare, Google Plus, and Chinese social network Sina Weibo. The apps could have scraped social media “by sending the saved login token to a server for Banjo to use later, or by using the app itself to scrape information,” reports Motherboard, noting that it’s not entirely clear which method Banjo used. “Motherboard found that the apps when opened made web requests to the domain ‘pulapi.com,’ likely referring to Pink Unicorn Labs, but the site that would provide a response to the app is currently down.”
Last weekend, Motherboard reported that Banjo signed a $20.7 million contract with Utah in 2019 that granted the company access to the state’s traffic, CCTV, and public safety cameras. “Banjo promises to combine that input with a range of other data such as satellites and social media posts to create a system that it claims alerts law enforcement of crimes or events in real-time.”
“So this is creepy,” writes a Forbes cybersecurity reporter, saying Airbnb “has put aside the stories of hosts secretly spying on guests” to promote a new line of devices Forbes calls “surveillance bugs to make sure guests behave.”
“… we’re hurtling toward a world where almost everything we own is monitoring us in some way, and I’m not sure that’s actually going to be a safer world.”
Amazon-owned home security camera company Ring has fired employees for improperly accessing Ring users’ video data, Motherboard reported Wednesday, citing a letter the company wrote to senators. The news highlights a risk across many different tech companies: employees may abuse access granted as part of their jobs to look at customer data or information. In Ring’s case, though, this data can be particularly sensitive, as customers often put the cameras inside their homes. “We are aware of incidents discussed below where employees violated our policies,” the letter from Ring, dated January 6th, reads. “Over the last four years, Ring has received four complaints or inquiries regarding a team member’s access to Ring video data,” it continues. Ring explains that although each of these people was authorized to view video data, their attempted access went beyond what they needed for their jobs.
As governments and companies invest more in security networks, hundreds of millions more surveillance cameras will be watching the world in 2021, mostly in China, according to a new report. The report, from industry researcher IHS Markit, to be released Thursday, said the number of cameras used for surveillance would climb above 1 billion by the end of 2021. That would represent an almost 30% increase from the 770 million cameras today. China would continue to account for a little over half the total. Fast-growing, populous nations such as India, Brazil and Indonesia would also help drive growth in the sector, the report said. IHS analyst Oliver Philippou said government programs to implement widespread video surveillance to monitor the public would be the biggest catalyst for the growth in China. City surveillance also was driving demand elsewhere.
Police officers who download videos captured by homeowners’ Ring doorbell cameras can keep them forever and share them with whomever they’d like without providing evidence of a crime, the Amazon-owned firm told a lawmaker this month… Police in those communities can use Ring software to request up to 12 hours of video from anyone within half a square mile of a suspected crime scene, covering a 45-day time span, wrote Brian Huseman, Amazon’s vice president of public policy. Police are required to include a case number for the crime they are investigating, but not any other details or evidence related to the crime or their request.
Sen. Edward Markey, D-Mass., said in a statement that Ring’s policies showed that the company had failed to enact basic safeguards to protect Americans’ privacy. “Connected doorbells are well on their way to becoming a mainstay of American households, and the lack of privacy and civil rights protections for innocent residents is nothing short of chilling,” he said. “If you’re an adult walking your dog or a child playing on the sidewalk, you shouldn’t have to worry that Ring’s products are amassing footage of you and that law enforcement may hold that footage indefinitely or share that footage with any third parties.”
While Ring tells users not to film public roads or sidewalks, Ring isn’t enforcing that, according to the article. Amazon argues that it’s ultimately the user’s responsibility.
And will their cameras start using facial recognition algorithms? Amazon answers that the feature is “contemplated but unreleased,” though it adds that “We do frequently innovate based on customer demand,” and points out that competing security cameras are already offering facial recognition.
Smart TVs are like regular television sets but with an internet connection. With the advent and growth of Netflix, Hulu and other streaming services, most saw internet-connected televisions as a cord-cutter’s dream. But like anything that connects to the internet, that connection opens up smart TVs to security vulnerabilities and hackers. Not only that, many smart TVs come with a camera and a microphone. And as is the case with most other internet-connected devices, manufacturers often don’t treat security as a priority. That’s the key takeaway from the FBI’s Portland field office, which just ahead of some of the biggest shopping days of the year posted a warning on its website about the risks that smart TVs pose. “Beyond the risk that your TV manufacturer and app developers may be listening and watching you, that television can also be a gateway for hackers to come into your home. A bad cyber actor may not be able to access your locked-down computer directly, but it is possible that your unsecured TV can give him or her an easy way in the backdoor through your router,” wrote the FBI. The FBI warned that hackers can take control of your unsecured smart TV and, in the worst cases, take control of the camera and microphone to watch and listen in.
More than 60% of Americans think it’s impossible to go through daily life without being tracked by companies or the government, according to a new Pew Research study. It’s not just that Americans (correctly) think companies are collecting their data. They don’t like it. About 69% of Americans are skeptical that companies will use their private information in a way they’re comfortable with, while 79% don’t believe that companies will come clean if they misuse the information. When it comes to who they trust, there are differences by race. About 73% of black Americans, for instance, are at least a little worried about what law enforcement knows about them, compared with 56% of white Americans. But among all respondents, more than 80% were concerned about what social-media sites and advertisers might know. Despite these concerns, more than 80% of Americans feel they have no control over how their information is collected.
Google is teaming with one of the country’s largest health-care systems on a secret project to collect and crunch the detailed personal health information of millions of Americans across 21 states, WSJ reported Monday, citing people familiar with the matter and internal documents.
The initiative, code-named “Project Nightingale,” appears to be the largest in a series of efforts by Silicon Valley giants to gain access to personal health data and establish a toehold in the massive health-care industry. Amazon.com, Apple and Microsoft are also aggressively pushing into health care, though they haven’t yet struck deals of this scope. Google launched the effort last year with St. Louis-based Ascension, the country’s second-largest health system. The data involved in Project Nightingale includes lab results, doctor diagnoses and hospitalization records, among other categories, and amounts to a complete health history, complete with patient names and dates of birth.
Neither patients nor doctors have been notified. At least 150 Google employees already have access to much of the data on tens of millions of patients, according to a person familiar with the matter and the documents.
Google in this case is using the data in part to design new software, underpinned by advanced artificial intelligence and machine learning.
Google appears to be sharing information within Project Nightingale more broadly than in its other forays into health-care data. In September, Google announced a 10-year deal with the Mayo Clinic to store the hospital system’s genetic, medical and financial records.
Google co-founder Larry Page, in a 2014 interview, suggested that patients worried about the privacy of their medical records were too cautious. Mr. Page said: “We’re not really thinking about the tremendous good that can come from people sharing information with the right people in the right ways.”
What if Google and Amazon employees weren’t the only ones who’d listened through your voice assistant? Ars Technica reports:
The threat isn’t just theoretical. Whitehat hackers at Germany’s Security Research Labs developed eight apps — four Alexa “skills” and four Google Home “actions” — that all passed Amazon or Google security-vetting processes. The skills or actions posed as simple apps for checking horoscopes, with the exception of one, which masqueraded as a random-number generator. Behind the scenes, these “smart spies,” as the researchers call them, surreptitiously eavesdropped on users and phished for their passwords…
The apps gave the impression they were no longer running when they, in fact, silently waited for the next phase of the attack…. The apps quietly logged all conversations within earshot of the device and sent a copy to a developer-designated server. The phishing apps follow a slightly different path, responding with an error message that claims the skill or action isn’t available in that user’s country. They then go silent to give the impression the app is no longer running. After about a minute, the apps use a voice that mimics the ones used by Alexa and Google Home to falsely claim a device update is available and prompt the user for a password to install it….
In response, both companies removed the apps and said they are changing their approval processes to prevent skills and actions from having similar capabilities in the future.
Jack Newcombe, the Chief Operating Officer of a syndication company with 44 million daily readers, describes the strange voice he heard talking to his 18-month old son: She says we have a nice house and encourages the nanny to respond. She does not. The voice even jokes that she hopes we don’t change our password. I am sick to my stomach. After about five minutes of verbal “joy riding,” the voice starts to get agitated at the nanny’s lack of response and then snaps, in a very threatening voice: “I’m coming for the baby if you don’t answer me….” We unplug the cameras and change all passwords…
Still helpless, I started doing the only thing I could do — Googling. I typed “Nest + camera + hacked” and found out that this happens frequently. Parent after parent relayed stories similar to mine — threatening to steal a baby is shockingly common — and some much worse, such as playing pornography over the microphone to a 3-year-old… What is worse is that anyone could have been watching us at any time for as long as we have had the cameras up. This person just happened to use the microphone. Countless voyeurs could have been silently watching (or worse) for months.
However, what makes this issue even more terrifying is a corporate giant’s complete and utter lack of response. Nest is owned by Google, and, based on my experience and their public response, Google does not seem to care about this issue. They acknowledge it as a problem, shrug their shoulders and point their fingers at the users. Their party line is to remind people that the hardware was not hacked; it was the user’s fault for using a compromised password and not implementing two-step authentication, in which users receive a special code via text to sign on. That night, on my way home from work, I called Nest support and was on hold for an hour and eight minutes. I followed all directions and have subsequently received form emails in broken English. Nobody from Google has acknowledged the incident or responded with any semblance of empathy. In every email, they remind me of two-step authentication.
They act as if I am going to continue to use Nest cameras.
After being challenged as to whether homeowners should tell guests smart devices — such as a Google Nest speaker or Amazon Echo display — are in use before they enter the building, Google senior vice president of devices and services, Rick Osterloh, concludes that the answer is indeed yes. The BBC reports:
“Gosh, I haven’t thought about this before in quite this way,” Rick Osterloh begins. “It’s quite important for all these technologies to think about all users… we have to consider all stakeholders that might be in proximity.” And then he commits. “Does the owner of a home need to disclose to a guest? I would and do when someone enters into my home, and it’s probably something that the products themselves should try to indicate.”
To be fair to Google, it hasn’t completely ignored matters of 21st Century privacy etiquette until now. As Mr Osterloh points out, its Nest cameras shine an LED light when they are in record mode, which cannot be overridden. But the idea of having to run around a home unplugging or at least restricting the capabilities of all its voice- and camera-equipped kit if a visitor objects is quite the ask.
The concession came at the end of a one-on-one interview given to BBC News to mark the launch of Google’s Pixel 4 smartphones, a new Nest smart speaker and other products. You can read the full conversation in the BBC’s article.
A new study from Princeton University shows internet-connected TVs, which allow people to stream Netflix and Hulu, are loaded with data-hungry trackers. “If you use a device such as Roku and Amazon Fire TV, there are numerous companies that can build up a fairly comprehensive picture of what you’re watching,” Arvind Narayanan, associate professor of computer science at Princeton, wrote in an email to The Verge. “There’s very little oversight or awareness of their practices, including where that data is being sold.” From the report: To understand how much surveillance is taking place on smart TVs, Narayanan and his co-author Hooman Mohajeri Moghaddam built a bot that automatically installed thousands of channels on their Roku and Amazon Fire TVs. It then mimicked human behavior by browsing and watching videos. As soon as it ran into an ad, it would track what data was being collected behind the scenes. Some of the information, like device type, city, and state, is hardly unique to one user. But other data, like the device serial number, Wi-Fi network, and advertising ID, could be used to pinpoint an individual. “This gives them a more complete picture of who you are,” said Moghaddam. He noted that some channels even sent unencrypted email addresses and video titles to the trackers.
In total, the study found trackers on 69 percent of Roku channels and 89 percent of Amazon Fire channels. “Some of these are well known, such as Google, while many others are relatively obscure companies that most of us have never heard of,” Narayanan said. Google’s ad service DoubleClick was found on 97 percent of Roku channels. “Like other publishers, smart TV app developers can use Google’s ad services to show ads against their content, and we’ve helped design industry guidelines for this that enable a privacy-safe experience for users,” a Google spokesperson said in a statement emailed to The Verge. “Depending on the user’s preferences, the developer may share data with Google that’s similar to data used for ads in mobile apps or on the web.” “Better privacy controls would certainly help, but they are ultimately band-aids,” Narayanan said. “The business model of targeted advertising on TVs is incompatible with privacy, and we need to confront that reality. To maximize revenue, platforms based on ad targeting will likely turn to data mining and algorithmic personalization/persuasion to keep people glued to the screen as long as possible.”
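The researchers’ basic measurement step — capturing the network requests a channel makes during an ad and checking each request’s host against a list of known tracker domains — can be sketched in a few lines. This is a minimal illustration, not the study’s actual tooling; the tracker list and captured URLs below are hypothetical:

```python
from urllib.parse import urlparse

# Hypothetical list of known tracker domains (illustrative only)
TRACKER_DOMAINS = {"doubleclick.net", "scorecardresearch.com", "tracker.example"}

def is_tracker(url: str) -> bool:
    """Return True if the request's host is a known tracker domain or one of its subdomains."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in TRACKER_DOMAINS)

# Requests captured while an ad played (hypothetical)
requests = [
    "https://ad.doubleclick.net/ddm/activity?dc_rdid=abc123",
    "https://cdn.example.com/video/segment1.ts",
]

flagged = [u for u in requests if is_tracker(u)]
```

Matching on the registered domain rather than the full URL is what lets a study like this attribute requests from thousands of different channels back to a handful of tracking companies.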
Another study from Northeastern University and the Imperial College of London found that other smart-home devices are also collecting reams of data that is being sent to third parties like advertisers and major tech companies.
In a promotional video, Amazon says its Cloud Cam home security camera provides “everything you need to monitor your home, day or night.” In fact, the artificially intelligent device requires help from a squad of invisible employees. Dozens of Amazon workers based in India and Romania review select clips captured by Cloud Cam, according to five people who have worked on the program or have direct knowledge of it. Those video snippets are then used to train the AI algorithms to do a better job distinguishing between a real threat (a home invader) and a false alarm (the cat jumping on the sofa). An Amazon team also transcribes and annotates commands recorded in customers’ homes by the company’s Alexa digital assistant, Bloomberg reported in April.
AI has made it possible to talk to your phone. It’s helping investors predict shifts in market sentiment. But the technology is far from infallible. Cloud Cam sends out alerts when it’s just paper rustling in a breeze. Apple’s Siri and Amazon’s Alexa still occasionally mishear commands. One day, engineers may overcome these shortfalls, but for now AI needs human assistance. Lots of it. At one point, on a typical day, some Amazon auditors were each annotating about 150 video recordings, which were typically 20 to 30 seconds long, according to the people, who requested anonymity to talk about an internal program.
Some of the Federal Bureau of Investigation’s electronic surveillance activities violated the constitutional privacy rights of Americans swept up in a controversial foreign intelligence program, a secretive surveillance court has ruled. The ruling deals a rare rebuke to U.S. spying activities that have generally withstood legal challenge or review. The intelligence community disclosed Tuesday that the Foreign Intelligence Surveillance Court last year found that the FBI’s pursuit of data about Americans ensnared in a warrantless internet-surveillance program intended to target foreign suspects may have violated the law authorizing the program, as well as the Constitution’s Fourth Amendment protections against unreasonable searches.
The court concluded that the FBI had been improperly searching a database of raw intelligence for information on Americans — raising concerns about oversight of the program, which as a spy program operates in near-total secrecy. The ruling identifies tens of thousands of searches of raw intelligence databases by the bureau in 2017 and 2018 that the court deemed improper, in part because they involved data related to tens of thousands of emails or telephone numbers — in one case suggesting that the FBI was using the intelligence information to vet its personnel and cooperating sources. Federal law requires that the database only be searched by the FBI as part of seeking evidence of a crime or for foreign intelligence information. In other cases, the court ruling reveals improper use of the database by individuals. In one case, an FBI contractor ran a query of an intelligence database — searching information on himself, other FBI personnel and his relatives, the court revealed.
The FBI’s investigations, which targeted individual activists and some environmental organizations, were opened in 2013-2014, at the height of opposition to the Keystone XL Pipeline and the expansion of fossil fuel production in North America.
The new FOIA documents reveal the bureau’s motivation for investigating a broad cross section of the environmental movement and its characterization of non-violent protesters as a potential threat to national security.
In 2010, the DOJ’s inspector general criticized the FBI for using non-violent civil disobedience as grounds to open domestic terrorism investigations. US citizens swept up in such investigations can be placed on terrorism watchlists and subjected to surveillance and restrictions on international travel. The designation can also lead local law enforcement to take a more confrontational approach when engaging with non-violent activists.
Comparitech.com has published a report and spreadsheet laying out how many CCTV cameras are in operation in 120 different cities around the world, and data for the crime rates in these cities. The report notes “We found little correlation between the number of public CCTV cameras and crime or safety.”
8 of the 10 most surveilled cities are in China, though London and Atlanta also make the cut, and the report says that — depending on what numbers you believe — China will have between 200 million and 626 million CCTV cameras, or possibly even more, in operation by 2020. That would be almost one CCTV camera for every two citizens in the country, and the number could go up.
Outside of China, the top most-surveilled cities in the world are:
London – 68.40 cameras per 1,000 people
Atlanta – 15.56 cameras per 1,000 people
Singapore – 15.25 cameras per 1,000 people
Abu Dhabi – 13.77 cameras per 1,000 people
Chicago – 13.06 cameras per 1,000 people
Sydney – 12.35 cameras per 1,000 people
Baghdad – 12.30 cameras per 1,000 people
Dubai – 12.14 cameras per 1,000 people
Moscow – 11.70 cameras per 1,000 people
Berlin – 11.18 cameras per 1,000 people
New Delhi – 9.62 cameras per 1,000 people
…
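The density figures in the list are simple ratios: camera count divided by population, scaled to 1,000 residents. A quick sanity check with made-up inputs (the city numbers here are hypothetical, chosen only to reproduce London’s reported density):

```python
def cameras_per_thousand(cameras: int, population: int) -> float:
    """Surveillance-camera density per 1,000 residents."""
    return round(cameras / population * 1000, 2)

# Hypothetical inputs: 68,400 cameras in a city of 1,000,000 people
print(cameras_per_thousand(68_400, 1_000_000))   # 68.4

# The report's high-end estimate for China: 626 million cameras
# spread across roughly 1.4 billion people, i.e. about 1 camera
# per 2.2 citizens
print(round(1_400_000_000 / 626_000_000, 1))     # 2.2
```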
The Washington Post describes a parent whose phone “rings 10 times a day with notifications from the summer camp’s facial-recognition service, which alerts him whenever one of his girls is photographed enjoying their newfound independence.”
You can also call and nag your kid if you think they look unhappy or are unsatisfied with them in any way. So kids mob photographers with big, fake smiles and beg to be photographed so their parents won’t harass them.
The companies have “privacy policies” that grossly overreach, giving them perpetual licenses to distribute all the photos they take forever, for any purpose. They claim to have super-secure data-centers, but won’t describe what makes them so sure their data centers are more secure than, say, the NSA’s, Equifax, or any of the other “super secure” data centers that have been breached and dumped in recent memory.
And while parents in theory enjoy all this visibility into their kids’ time away, they also report a kind of free-floating anxiety: they know just enough about their kids’ lives at camp to worry, but not enough to assuage their worries.
One overseer of two camps tells the Post that more concerned parents call her in two hours than used to call in an entire month. One company adds that their service is now being used by over 160,000 parents — and for children as young as six.
At least one camp takes over 1,000 photos each day — scanning each one with facial recognition technology — and the Post reports that facial-recognition technology has now already been deployed at “hundreds” of summer camps all across the United States.
Most camp directors said they appreciate that the photos can bring peace of mind to lonely parents worried about their kids’ first faraway solo trip. But the photos can also end up perpetuating a cycle of parental anxiety: The more photos the camp posts, the more the parents seem to want – and the more questions they’ll ask about their kids.
When a camper isn’t smiling or is on the outside of a big group shot, counselors said they know to expect a phone call from back home. Liz Young, a longtime camp director now helping oversee two camps on the coast of New Hampshire’s Lake Winnipesaukee, said she now fields as many concerned-parents calls in two hours as she used to get all month – mostly from parents asking about how their kids look on camera, or whether they’re being photographed enough.
One camp, Lake of the Woods and Greenwoods in rural Decatur, Michigan, has four photographers and a social-media director on staff to help push nearly constant updates onto Bunk1, Facebook and Instagram, where recent photos of kids jumping into a lake or firing bows and arrows have netted hundreds of comments and “likes.” The facial-recognition system is in its second summer at the camp, and roughly half of all parents of its campers have signed up.
Some of the kids, Hardin said, are so accustomed to constant photography that they barely notice the camera crew. It’s the parents, she said, who struggle with the distance – and who are desperate for the reassurance the facial-recognition systems provide.
Some parents race to share the photos on social media as a way to curate their kids’ childhood and offer visual evidence that their family is worth envying.
The photos could inflame new tensions for kids hitting the age – generally, in the pre- and early teens – when they can start to feel awkward about all the photos their parents post. But they can also foster unease for kids questioning how much of their emotions and internal lives they’re comfortable sharing in every moment, even when they’re far from home.
“There’s the contradiction of these really old-fashioned summer camps with no electricity in the cabins, no cellphones . . . but the parents can check in daily to look at the expressions on their kids’ faces,” she added. “Part of childhood development is: It isn’t always 100 percent smiling.”