Resources

America’s Elderly Seem More Screen-Obsessed Than the Young

Many parents and grandparents will grumble about today’s screen-obsessed youth. Indeed, researchers find that millennials look at their phones more than 150 times a day; half of them check their devices in the middle of the night; a third glance at them immediately after waking up. And yet, when all screens are accounted for, it is in fact older folk who seem most addicted. According to Nielsen, a market-research firm, Americans aged 65 and over spend nearly ten hours a day consuming media on their televisions, computers and smartphones. That is 12% more than Americans aged 35 to 49, and a third more than those aged 18 to 34 (the youngest cohort for whom Nielsen has data).

American seniors “spend an average of seven hours and 30 minutes in front of the box, about as much as they did in 2015,” the report says. “They spend another two hours staring at their smartphones, a more than seven-fold increase from four years ago.”

Millennials have increased the time they spend on their mobile devices, but it’s been largely offset by their dwindling interest in TV. As for teenagers, a report from 2015 by Common Sense Media “found that American teens aged 13-18 spent about six hours and 40 minutes per day on screens: slightly more than Nielsen recorded for 18- to 34-year-olds that year, but less than older generations.”

FBI seeks to monitor Facebook, oversee mass social media data collection

The FBI is planning to aggressively harvest information from Facebook and Twitter, a move that is likely to cause a clash between the agency and the social media platforms.

As reported by the Wall Street Journal, the FBI has recently sought proposals from third-party vendors for technological solutions able to harvest publicly-available information in bulk from Facebook, Twitter, and other social media outlets.

Law enforcement has requested the means to “obtain the full social media profile of persons-of-interest and their affiliation to any organization or groups,” to track users based on their neighborhood, and to run keyword searches, among other functions.

While the FBI believes that such tools can work in harmony with privacy safeguards and civil liberties, the mass collection of names, photos, and IDs — when combined with information from other sources — may do just the opposite.

Back in July, for example, there was a public outcry after it was discovered that the FBI and Immigration and Customs Enforcement (ICE) were plundering databases belonging to the DMV for surveillance and investigative purposes.

Facebook Paid Contractors to Transcribe Users’ Audio Chats

Facebook Inc. has been paying hundreds of outside contractors to transcribe clips of audio from users of its services, according to people with knowledge of the work.

The work has rattled the contract employees, who are not told where the audio was recorded or how it was obtained — only to transcribe it, said the people, who requested anonymity for fear of losing their jobs. They’re hearing Facebook users’ conversations, sometimes with vulgar content, but do not know why Facebook needs them transcribed, the people said.

Facebook confirmed that it had been transcribing users’ audio and said it would no longer do so, following scrutiny of similar practices at other companies. “Much like Apple and Google, we paused human review of audio more than a week ago,” the company said Tuesday. The company said the users who were affected chose the option in Facebook’s Messenger app to have their voice chats transcribed. The contractors were checking whether Facebook’s artificial intelligence correctly interpreted the messages, which were anonymized.

Big tech companies including Amazon.com Inc. and Apple Inc. have come under fire for collecting audio snippets from consumer computing devices and subjecting those clips to human review, a practice that critics say invades privacy. Bloomberg first reported in April that Amazon had a team of thousands of workers around the world listening to Alexa audio requests with the goal of improving the software, and that similar human review was used for Apple’s Siri and Alphabet Inc.’s Google Assistant. Apple and Google have since said they no longer engage in the practice and Amazon said it will let users opt out of human review.

The social networking giant, which just completed a $5 billion settlement with the U.S. Federal Trade Commission after a probe of its privacy practices, has long denied that it collects audio from users to inform ads or help determine what people see in their news feeds. Chief Executive Officer Mark Zuckerberg denied the idea directly in Congressional testimony.

In follow-up answers for Congress, the company said it “only accesses users’ microphone if the user has given our app permission and if they are actively using a specific feature that requires audio (like voice messaging features).” The Menlo Park, California-based company doesn’t address what happens to the audio afterward.

Pentagon testing mass surveillance balloons across the US

The US military is conducting wide-area surveillance tests across six Midwestern states using experimental high-altitude balloons, documents filed with the Federal Communications Commission (FCC) reveal.

Up to 25 unmanned solar-powered balloons are being launched from rural South Dakota and drifting 250 miles through an area spanning portions of Minnesota, Iowa, Wisconsin and Missouri, before concluding in central Illinois.

Travelling in the stratosphere at altitudes of up to 65,000ft, the balloons are intended to “provide a persistent surveillance system to locate and deter narcotic trafficking and homeland security threats”, according to a filing made on behalf of the Sierra Nevada Corporation, an aerospace and defence company.

The balloons are carrying hi-tech radars designed to simultaneously track many individual vehicles day or night, through any kind of weather.

A rival balloon operator, World View, recently announced that it had carried out multi-week test missions in which its own stratospheric balloons were able to hover over a five-mile-diameter area for six and a half hours, and larger areas for days at a time.

Ryan Hartman, CEO of World View, said the company had also completed a dozen surveillance test missions for a customer it would not name, capturing data he would not specify.

“Obviously, there are laws to protect people’s privacy and we are respectful of all those laws,” Hartman said. “We also understand the importance of operating in an ethical way as it relates to further protecting people’s privacy.”

You’re very easy to track down, even when your data has been anonymized

The most common way public agencies protect our identities is anonymization. This involves stripping out obviously identifiable things such as names, phone numbers, email addresses, and so on. Data sets are also altered to be less precise, columns in spreadsheets are removed, and “noise” is introduced to the data. Privacy policies reassure us that this means there’s no risk we could be tracked down in the database. However, a new study in Nature Communications suggests this is far from the case. Researchers from Imperial College London and the University of Louvain have created a machine-learning model that estimates exactly how easy individuals are to reidentify from an anonymized data set. You can check your own score with the researchers’ online tool by entering your zip code, gender, and date of birth.

On average, in the U.S., those three attributes are enough to correctly locate you in an “anonymized” database 81% of the time. Given 15 demographic attributes of someone living in Massachusetts, there’s a 99.98% chance you could find that person in any anonymized database. The tool was created by assembling a database of 210 different data sets from five sources, including the U.S. Census. The researchers fed this data into a machine-learning model, which learns which combinations of attributes are more nearly unique and which are less so, and then assigns a probability of correct identification.
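To make the uniqueness argument concrete, here is a minimal Python sketch (not the authors’ model) that measures how often a handful of coarse attributes already pins a record in a synthetic sample down to a single person. The attributes and population here are illustrative assumptions; the actual study fit a statistical model to 210 real data sets to extrapolate beyond any one sample.

```python
import random
from collections import Counter

random.seed(0)

# Synthetic "anonymized" records: (coarse ZIP prefix, gender, birth year).
people = [
    (random.randrange(100, 120),
     random.choice("MF"),
     random.randint(1940, 2005))
    for _ in range(10_000)
]

counts = Counter(people)

# The chance of correctly re-identifying a random target is 1 / (number of
# records sharing their attribute combination); a unique combination
# identifies exactly one person.
unique_share = sum(c == 1 for c in counts.values()) / len(people)
avg_hit = sum(1 / counts[p] for p in people) / len(people)

print(f"records unique on 3 coarse attributes: {unique_share:.1%}")
print(f"average chance a matching record is the target: {avg_hit:.1%}")
```

Even with deliberately coarsened attributes, a sizable share of records is unique; adding attributes (the study used up to 15) drives that share toward 100%.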

How America’s Tech Giants Are Helping Build China’s Surveillance State

The OpenPower Foundation — a nonprofit led by Google and IBM executives that aims to “drive innovation” — has set up a collaboration between IBM, Chinese company Semptian, and U.S. chip manufacturer Xilinx. Together, they have worked to advance a breed of microprocessors that enable computers to analyze vast amounts of data more efficiently. Shenzhen-based Semptian is using the devices to enhance the capabilities of internet surveillance and censorship technology it provides to human rights-abusing security agencies in China, according to sources and documents. A company employee said that its technology is being used to covertly monitor the internet activity of 200 million people…

Semptian presents itself publicly as a “big data” analysis company that works with internet providers and educational institutes. However, a substantial portion of the Chinese firm’s business is in fact generated through a front company named iNext, which sells the internet surveillance and censorship tools to governments. iNext operates out of the same offices in China as Semptian, with both companies on the eighth floor of a tower in Shenzhen’s busy Nanshan District. Semptian and iNext also share the same 200 employees and the same founder, Chen Longsen. [The company’s] Aegis equipment has been placed within China’s phone and internet networks, enabling the country’s government to secretly collect people’s email records, phone calls, text messages, cellphone locations, and web browsing histories, according to two sources familiar with Semptian’s work.

Promotional documents obtained from the company promise “location information for everyone in the country.” One company representative even told the Intercept they were processing “thousands of terabits per second,” and — not knowing they were talking to a reporter — forwarded a 16-minute video detailing their technology. “If a government operative enters a person’s cellphone number, Aegis can show where the device has been over a given period of time: the last three days, the last week, the last month, or longer,” the Intercept reports.

Can Google Ads Change Your Beliefs?

There are advertisers in the digital marketing industry who want to find out how effective this new form of social engineering is. One of those advertisers is Google. Redirect Method was a Google-incubated project that used redirect ads to deradicalize would-be extremists. In the first eight weeks of 2016, some 320,000 people — all of whom were believed to harbor sympathy toward the Islamic State — clicked on ads designed to reflect an interest in extremist content. Instead of arriving at a page that supported their views, Islamic State sympathizers who clicked the ads found themselves directed to a playlist of videos debunking the terror group’s recruitment narratives. Most of the visitors stuck around. Together, they watched more than half a million minutes of video.

After the ISIS campaign ended, Google left behind a blueprint.

The blueprint shows, step by step, how you can create your own redirect ads to sway any belief or opinion — held by any Google user, anywhere in the world — of your choice. You don’t have to be a marketer with years of experience to do this. You just need to follow the instructions and put up a credit card (a few hundred bucks will suffice). Recently, I followed the blueprint and created a redirect campaign of my own… The goal of my first redirect campaign was to sway the ideology of suicidal people.

Nearly one in three apparently suicidal searchers who viewed the author’s ad went on to dial his hotline number, which forwarded the calls to the National Suicide Prevention Lifeline. But he expressed surprise that Google “let me run the ads with no issue… I didn’t need qualifications to be a conduit of peoples’ fates.” He later tried creating another campaign for prospective school shooters, “but the conversion rates were low. These prospective shooters were reluctant to speak with someone.”

Yet one study found that more than 50% of Google users couldn’t distinguish between ads and “organic” results on the search page.

Amazon Alexa Keeps Your Data With No Expiration Date, and Shares It Too

If you have hangups about Amazon and privacy on its smart assistant, Alexa, you’re not alone. Even after Amazon sent answers to a US senator who had questions about how the tech giant retains voice data and transcripts, the lawmaker remains concerned about Alexa’s privacy practices. From a report:

Sen. Chris Coons, a Democrat from Delaware, sent a letter to Amazon CEO Jeff Bezos in May, demanding answers on Alexa and how long it kept voice recordings and transcripts, as well as what the data gets used for. The letter came after CNET’s report that Amazon kept transcripts of interactions with Alexa, even after people deleted the voice recordings. The deadline for answers was June 30, and Amazon’s vice president of public policy, Brian Huseman, sent a response on June 28. In the letter, Huseman tells Coons that Amazon keeps transcripts and voice recordings indefinitely, and only removes them if they’re manually deleted by users. Huseman also noted that Amazon had an “ongoing effort to ensure those transcripts do not remain in any of Alexa’s other storage systems.” But there are still records from some conversations with Alexa that Amazon won’t delete, even if people remove the audio, the letter revealed.

A Researcher Attempted To Opt Out of Facial Recognition at the Airport — It Wasn’t Easy

The announcement came as we began to board. Last month, I was at Detroit’s Metro Airport for a connecting flight to Southeast Asia. I listened as a Delta Air Lines staff member informed passengers that the boarding process would use facial recognition instead of passport scanners. As a privacy-conscious person, I was uncomfortable boarding this way. I also knew I could opt out. Presumably, most of my fellow fliers did not: I didn’t hear a single announcement alerting passengers how to avoid the face scanners.

To figure out how to do so, I had to leave the boarding line, speak with a Delta representative at their information desk, get back in line, then request a passport scan when it was my turn to board. Federal agencies and airlines claim that facial recognition is an opt-out system, but my recent experience suggests they are incentivizing travelers to have their faces scanned — and discouraging them from sidestepping the tech — by not clearly communicating alternative options. Last year, a Delta customer service representative reported that only 2 percent of customers opt out of facial recognition. It’s easy to see why.

Smartphones and Fitness Trackers Are Being Used To Gauge Employee Performance

The passive system incorporates an app known as PhoneAgent, which was developed by Prof. Andrew Campbell at New Hampshire’s Dartmouth College. Using the smartphone’s own sensors, that app continuously monitors factors such as the worker’s phone usage, physical activity level, geographical location, and the ambient light levels of their environment. PhoneAgent is also Bluetooth-linked to a fitness bracelet worn by the employee, which transmits data including their heart functions, sleep quality, stress levels, and calorie consumption. Additionally, Bluetooth locational beacons in the person’s home and workplace monitor how much time they spend at each place, and how often they leave their workstation.

All of the phone, bracelet and beacon data is transmitted to a cloud-based server, where it’s processed via machine-learning algorithms that were “trained” on the habits of people already known to be high- or low-level performers. When tested on 750 workers across the U.S. over a one-year period, the system was reportedly able to distinguish between individuals’ performance levels (in a variety of industries) with an accuracy of 80 percent. The researchers expect that accuracy to rise as the system is developed further.
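As a rough illustration of the pipeline described above (and only that; the Dartmouth system’s code is not public), the sketch below trains an off-the-shelf classifier on synthetic sensor-style features labeled high or low performance. The feature names, the data-generating rule, and the model choice are all assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 750  # matches the study's cohort size; the data here does not

# Hypothetical feature columns: daily phone unlocks, sleep-quality score,
# resting heart rate, hours at the workstation (all standardized).
X = rng.normal(size=(n, 4))
# Synthetic labels: 1 = "high performer", produced by an invented rule
# so the classifier has something to learn.
y = (X[:, 1] + X[:, 3] - 0.5 * X[:, 0] + rng.normal(0, 0.8, n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {accuracy_score(y_te, model.predict(X_te)):.0%}")
```

The privacy implications follow directly from the structure: once passively collected signals are pooled on a server, any label an employer cares about can be substituted for “performance.”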

When Myspace Was King, Employees Abused a Tool Called ‘Overlord’ to Spy on Users

During the social network’s heyday, multiple Myspace employees abused an internal company tool to spy on users, in some cases including ex-partners, Motherboard reported on Monday.

Named ‘Overlord,’ the tool allowed employees to see users’ passwords and their messages, two former employees said. While the tool was originally designed to help moderate the platform and allow Myspace to comply with law enforcement requests, multiple sources said the tool was used for illegitimate purposes by employees who accessed Myspace user data without authorization to do so. “It was basically an entire backdoor to the Myspace platform,” one of the former employees said of Overlord. (Motherboard granted five former Myspace employees anonymity to discuss internal Myspace incidents.) The abuse happened about a decade ago, closer to the height of the platform’s popularity, according to multiple sources. In fall 2006, the platform signed up its 100 millionth user. Around this time, Myspace was the second most popular website in the U.S., and ranked higher than Google search.

Google Chrome ‘Has Become Surveillance Software. It’s Time to Switch’

“You open your browser to look at the Web. Do you know who is looking back at you?” warns Washington Post technology columnist Geoffrey A. Fowler.

Over a recent week of Web surfing, I peered under the hood of Google Chrome and found it brought along a few thousand friends. Shopping, news and even government sites quietly tagged my browser to let ad and data companies ride shotgun while I clicked around the Web. This was made possible by the Web’s biggest snoop of all: Google. Seen from the inside, its Chrome browser looks a lot like surveillance software…

My tests of Chrome vs. Firefox unearthed a personal data caper of absurd proportions. In a week of Web surfing on my desktop, I discovered 11,189 requests for tracker “cookies” that Chrome would have ushered right onto my computer but were automatically blocked by Firefox. These little files are the hooks that data firms, including Google itself, use to follow what websites you visit so they can build profiles of your interests, income and personality… And that’s not the half of it. Look in the upper right corner of your Chrome browser. See a picture or a name in the circle? If so, you’re logged in to the browser, and Google might be tapping into your Web activity to target ads. Don’t recall signing in? I didn’t, either. Chrome recently started doing that automatically when you use Gmail.

Chrome is even sneakier on your phone. If you use Android, Chrome sends Google your location every time you conduct a search. (If you turn off location sharing it still sends your coordinates out, just with less accuracy.)

The columnist concludes that “having the world’s biggest advertising company make the most popular Web browser was about as smart as letting kids run a candy shop,” and argues that through its DoubleClick and other ad businesses, Google “is the No. 1 cookie maker — the Mrs. Fields of the web.”
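Readers who want to repeat Fowler’s tally on their own browsing can approximate it without special tooling: export a HAR capture from the browser’s developer tools and count third-party responses that set cookies. The sketch below is a crude approximation (Fowler relied on Firefox’s tracker block list, not this heuristic), and the file name and domain are placeholders.

```python
import json
from urllib.parse import urlparse

def count_third_party_cookie_requests(har_path: str, first_party: str) -> int:
    """Count responses from third-party hosts that try to set cookies."""
    with open(har_path) as f:
        entries = json.load(f)["log"]["entries"]
    hits = 0
    for entry in entries:
        host = urlparse(entry["request"]["url"]).hostname or ""
        header_names = {h["name"].lower() for h in entry["response"]["headers"]}
        if not host.endswith(first_party) and "set-cookie" in header_names:
            hits += 1
    return hits

# Usage (file name and domain are placeholders):
# print(count_third_party_cookie_requests("session.har", "example.com"))
```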

Millions of Business Listings On Google Maps Are Fake — and Google Profits

Google’s ubiquitous internet platform shapes what’s real and what isn’t for more than two billion monthly users. Yet Google Maps is overrun with millions of false business addresses and fake names, according to advertisers, search experts and current and former Google employees. The ruse lures the unsuspecting to what appear to be Google-suggested local businesses, a costly and dangerous deception. Once considered a sleepy, low-margin business by the company and known mostly for giving travel directions, Google Maps in recent months has packed more ads onto its search queries. It is central to Google parent Alphabet’s hope to recharge a cresting digital-advertising operation.

Often, Google Maps yields mirages, visible in local business searches of U.S. cities, including Mountain View, Calif., Google’s hometown. Of a dozen addresses for personal-injury attorneys on Google Maps during a recent search, only one office was real. A Viennese patisserie was among the businesses at addresses purported to house lawyers. The fakes vanished after inquiries to Google from The Wall Street Journal. The false listings benefit businesses seeking more customer calls by sprinkling made-up branches in various corners of a city. In other cases, as Ms. Carter discovered, calls to listed phone numbers connect to unscrupulous competitors, a misdirection forbidden by Google rules but sporadically policed by the company. Hundreds of thousands of false listings sprout on Google Maps each month, according to experts. Google says it catches many others before they appear.

Bones Are Changing in Surprising Ways: Phone Use Is to Blame, Research Suggests

New research in biomechanics suggests that young people are developing hornlike spikes at the back of their skulls — bone spurs caused by the forward tilt of the head, which shifts weight from the spine to the muscles at the back of the head, causing bone growth in the connecting tendons and ligaments. The weight transfer that causes the buildup can be compared to the way the skin thickens into a callus as a response to pressure or abrasion.

The result is a hook or hornlike feature jutting out from the skull, just above the neck. In academic papers, a pair of researchers at the University of the Sunshine Coast in Queensland, Australia, argues that the prevalence of the bone growth in younger adults points to shifting body posture brought about by the use of modern technology. They say smartphones and other handheld devices are contorting the human form, requiring users to bend their heads forward to make sense of what’s happening on the miniature screens.

Of course, bad posture was not invented in the 21st century – people have always found something to hunch over. So why didn’t we get the skull protuberances from books? One possible explanation is the sheer amount of time that we currently spend on our phones, versus how long a person would previously have spent reading. For example, even in 1973, well before most modern hand-held distractions were invented, the average American typically read for about two hours each day. In contrast, today people are spending nearly double that time on their phones.

How Information is Like Snacks, Money, and Drugs To Your Brain

A new study by researchers at UC Berkeley’s Haas School of Business has found that information acts on the brain’s dopamine-producing reward system in the same way as money or food.

“To the brain, information is its own reward, above and beyond whether it’s useful,” says Assoc. Prof. Ming Hsu, a neuroeconomist. “And just as our brains like empty calories from junk food, they can overvalue information that makes us feel good but may not be useful — what some may call idle curiosity.” The paper, “Common neural code for reward and information value,” was published this month by the Proceedings of the National Academy of Sciences. Authored by Hsu and graduate student Kenji Kobayashi, now a post-doctoral researcher at the University of Pennsylvania, it demonstrates that the brain converts information into the same common scale as it does for money. It also lays the groundwork for unraveling the neuroscience behind how we consume information — and perhaps even digital addiction.

New Deepfake Algorithm Allows You To Text-Edit the Words of a Speaker In a Video

It is now possible to take a talking-head style video, and add, delete or edit the speaker’s words as simply as you’d edit text in a word processor. A new deepfake algorithm can process the audio and video into a new file in which the speaker says more or less whatever you want them to. New Atlas reports:

It’s the work of a collaborative team from Stanford University, Max Planck Institute for Informatics, Princeton University and Adobe Research, who say that in a perfect world the technology would be used to cut down on expensive re-shoots when an actor gets something wrong, or a script needs to be changed. In order to learn the face movements of a speaker, the algorithm requires about 40 minutes of training video, and a transcript of what’s being said, so it’s not something that can be thrown onto a short video snippet and run if you want good results. That 40 minutes of video gives the algorithm the chance to work out exactly what face shapes the subject is making for each phonetic syllable in the original script.

From there, once you edit the script, the algorithm can then create a 3D model of the face making the new shapes required. And from there, a machine learning technique called Neural Rendering can paint the 3D model over with photo-realistic textures to make it look basically indistinguishable from the real thing. Other software such as VoCo can be used if you wish to generate the speaker’s audio as well as video, and it takes the same approach, by breaking down a heap of training audio into phonemes and then using that dataset to generate new words in a familiar voice.
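The pipeline lends itself to a structural sketch. The toy Python below is an assumption-laden illustration, not the published algorithm: it shows only the bookkeeping step of collecting examples of the face shapes (visemes) the subject makes for each phoneme, then re-sequencing them for an edited script. The phoneme labels and “frames” are placeholders.

```python
from collections import defaultdict

# Step 1: from aligned (phoneme, video frame) training pairs, collect
# examples of the face shape the subject makes for each phoneme.
training_pairs = [("HH", "frame_012"), ("EH", "frame_013"), ("L", "frame_014"),
                  ("OW", "frame_015"), ("HH", "frame_201"), ("AY", "frame_202")]
viseme_bank = defaultdict(list)
for phoneme, frame in training_pairs:
    viseme_bank[phoneme].append(frame)

# Step 2: the edited script is converted to phonemes (real systems use a
# pronunciation lexicon); here the new word "hi" becomes HH AY.
edited_phonemes = ["HH", "AY"]

# Step 3: retrieve a stored face shape for each phoneme. The published
# method instead drives a 3D head model and photo-realistically
# re-renders it, rather than reusing raw frames.
new_sequence = [viseme_bank[p][0] for p in edited_phonemes]
print(new_sequence)  # ['frame_012', 'frame_202']
```

The 40-minute training requirement makes sense in these terms: the algorithm needs enough examples to cover the speaker’s full phoneme inventory in varied contexts.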

Yahoo Japan Is Under Fire for Its China-Like Rating System

Some users of Yahoo Japan are rising up against Japan’s biggest web portal after the rollout of a new rating system that’s being compared with a social-scoring initiative in China. The 48 million people with a Yahoo! Japan ID will have to opt out within a privacy settings webpage if they don’t want to be rated. The score is calculated from inputs such as payment history, shopping reviews, whether a user canceled bookings, and the amount of identifiable personal information. Unless users opt out, their ratings may be accessible to freelance jobs site Crowdworks, Yahoo’s bike-sharing service and other businesses.

Makoto Niida, a longtime Yahoo user, opted out of the rating system when he learned about it. “It’s a big deal that the service was enabled by default,” Niida said. “The way they created services that benefit businesses without clear explanations to their users reminds me of China’s surveillance society.”

Yahoo’s new credit-score program follows efforts by Mizuho Financial Group, NTT Docomo and other companies to use algorithms to assign ratings to consumers. Japan doesn’t have a system similar to FICO in the U.S., so businesses in the world’s third-largest economy have come up with their own solutions to determine financial trustworthiness.

“Upgrade Your Memory With A Surgically Implanted Brain Chip”

In a five-year, $77 million Department of Defense project to create an implantable brain device that restores memory-generation capacity for people with traumatic brain injuries, Michael Kahana, a professor of psychology at the University of Pennsylvania, and the medical technology company Medtronic Plc have developed such a device and successfully tested it with funding from America’s Defense Advanced Research Projects Agency (Darpa).

Connected to the left temporal cortex, it monitors the brain’s electrical activity and forecasts whether a lasting memory will be created. “Just like meteorologists predict the weather by putting sensors in the environment that measure humidity and wind speed and temperature, we put sensors in the brain and measure electrical signals,” Kahana says. If brain activity is suboptimal, the device provides a small zap, undetectable to the patient, to strengthen the signal and increase the chance of memory formation.
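In control terms this is a simple closed loop: score the current neural features, predict whether encoding will succeed, and stimulate only when the prediction is poor. The sketch below illustrates that loop with a toy linear score; the feature names, weights, and threshold are invented for illustration and bear no relation to Kahana’s trained decoder.

```python
def predict_encoding_quality(features: dict) -> float:
    """Toy stand-in for the trained decoder: returns P(memory will form).
    The features and weights are invented for illustration."""
    score = 0.6 * features["theta_power"] + 0.4 * features["gamma_power"]
    return max(0.0, min(1.0, score))

def should_stimulate(features: dict, threshold: float = 0.5) -> bool:
    """Deliver a small stimulation pulse only when predicted encoding is poor."""
    return predict_encoding_quality(features) < threshold

print(should_stimulate({"theta_power": 0.2, "gamma_power": 0.3}))  # True
print(should_stimulate({"theta_power": 0.9, "gamma_power": 0.8}))  # False
```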

In two separate studies, researchers found the prototype consistently boosted memory 15 per cent to 18 per cent. The second group performing human testing, a team from Wake Forest Baptist Medical Center in Winston-Salem, N.C., aided by colleagues at the University of Southern California, has a more finely tuned method. In a study published last year, their patients showed memory retention improvement of as much as 37 per cent. “We’re looking at questions like, ‘Where are my keys? Where did I park the car? Have I taken my pills?’” says Robert Hampson, lead author of the 2018 study…

Both groups have tested their devices only on epileptic patients with electrodes already implanted in their brains to monitor seizures; each implant requires clunky external hardware that won’t fit in somebody’s skull. The next steps will be building smaller implants and getting approval from the U.S. Food and Drug Administration to bring the devices to market… Justin Sanchez, who just stepped down as director of Darpa’s biological technologies office, says veterans will be the first to use the prosthetics. “We have hundreds of thousands of military personnel with traumatic brain injuries,” he says. The next group will likely be stroke and Alzheimer’s patients.

Eventually, perhaps, the general public will have access—though there’s a serious obstacle to mass adoption. “I don’t think any of us are going to be signing up for voluntary brain surgery anytime soon,” Sanchez says. “Only when these technologies become less invasive, or noninvasive, will they become widespread.”

Do Google and Facebook Threaten Our ‘Ambient Privacy’?

Until recently, ambient privacy was a simple fact of life. Recording something for posterity required making special arrangements, and most of our shared experience of the past was filtered through the attenuating haze of human memory. Even police states like East Germany, where one in seven citizens was an informer, were not able to keep tabs on their entire population. Today computers have given us that power. Authoritarian states like China and Saudi Arabia are using this newfound capacity as a tool of social control. Here in the United States, we’re using it to show ads. But the infrastructure of total surveillance is everywhere the same, and everywhere being deployed at scale….

Because our laws frame privacy as an individual right, we don’t have a mechanism for deciding whether we want to live in a surveillance society. Congress has remained silent on the matter, with both parties content to watch Silicon Valley make up its own rules. The large tech companies point to our willing use of their services as proof that people don’t really care about their privacy. But this is like arguing that inmates are happy to be in jail because they use the prison library. Confronted with the reality of a monitored world, people make the rational decision to make the best of it.

That is not consent…

Our discourse around privacy needs to expand to address foundational questions about the role of automation: To what extent is living in a surveillance-saturated world compatible with pluralism and democracy? What are the consequences of raising a generation of children whose every action feeds into a corporate database? What does it mean to be manipulated from an early age by machine learning algorithms that adaptively learn to shape our behavior? That is not the conversation Facebook or Google want us to have. Their totalizing vision is of a world with no ambient privacy and strong data protections, dominated by the few companies that can manage to hoard information at a planetary scale. They correctly see the new round of privacy laws as a weapon to deploy against smaller rivals, further consolidating their control over the algorithmic panopticon.

Amazon’s ‘Ring’ Doorbells Creating A Massive Police Surveillance Network

“Police departments are piggybacking on Ring’s network to build out their surveillance networks…” reports CNET, adding that Ring “helps police avoid roadblocks for surveillance technology, whether a lack of funding or the public’s concerns about privacy.”

While residential neighborhoods aren’t usually lined with security cameras, the smart doorbell’s popularity has essentially created private surveillance networks powered by Amazon and promoted by police departments. Police departments across the country, from major cities like Houston to towns with fewer than 30,000 people, have offered free or discounted Ring doorbells to citizens, sometimes using taxpayer funds to pay for Amazon’s products.

While Ring owners are supposed to have a choice on providing police footage, in some giveaways, police require recipients to turn over footage when requested. Ring said Tuesday that it would start cracking down on those strings attached…

While more surveillance footage in neighborhoods could help police investigate crimes, the sheer number of cameras run by Amazon’s Ring business raises questions about privacy involving both law enforcement and tech giants… More than 50 local police departments across the US have partnered with Ring over the last two years, lauding how the Amazon-owned product allows them to access security footage in areas that typically don’t have cameras — on suburban doorsteps. But privacy advocates argue this partnership gives law enforcement an unprecedented amount of surveillance. “What we have here is a perfect marriage between law enforcement and one of the world’s biggest companies creating conditions for a society that few people would want to be a part of,” said Mohammad Tajsar, staff attorney at the ACLU of Southern California…

Despite its benefits, the relationship between police departments and Ring raises concerns about surveillance and privacy, as Amazon is working with law enforcement to blanket communities with cameras…. “Essentially, we’re creating a culture where everybody is the nosy neighbor looking out the window with their binoculars,” said Dave Maass, a senior investigative researcher at the Electronic Frontier Foundation. “It is creating this giant pool of data that allows the government to analyze our every move, whether or not a crime is being committed.” On a heat map of Bloomfield, there are hardly any spots in the New Jersey township out of sight of a Ring camera.

Tajsar says in some scenarios “they’re basically commandeering people’s homes as surveillance outposts for law enforcement,” and the article notes that when police departments partner with Ring, “they have access to a law enforcement dashboard, where they can geofence areas and request footage filmed at specific times.”

While law enforcement “can only get footage from the app if residents choose to send it,” if the residents refuse, police can still try to obtain the footage with a subpoena to Amazon’s Ring.
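The geofencing step in that dashboard is conceptually simple; a minimal sketch follows, assuming a circular fence and a list of registered camera coordinates (the Ring dashboard’s internals are not public, so everything here is an illustrative assumption).

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# Hypothetical registered camera locations.
cameras = [
    {"id": "cam-1", "lat": 40.807, "lon": -74.185},
    {"id": "cam-2", "lat": 40.830, "lon": -74.120},
]

def cameras_in_fence(center_lat, center_lon, radius_km):
    """Cameras inside the fence whose owners would receive a footage request."""
    return [c["id"] for c in cameras
            if haversine_km(center_lat, center_lon, c["lat"], c["lon"]) <= radius_km]

print(cameras_in_fence(40.806, -74.186, 2.0))  # ['cam-1']
```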