Resources

Suspicious packages spotlight vast mail surveillance system, United States

The U.S. Postal Service regularly photographs the front and back of every piece of U.S. mail, or about 150 billion parcels, envelopes, and postcards every year. A longstanding practice known as the “mail cover” program enables law enforcement to obtain address information and images of the outsides of mail as part of an investigation without the need for a warrant through the Postal Inspection Service, the U.S. Postal Service’s policing arm.

As part of the mail cover program, mail is routinely photographed digitally during the sorting process, and in some areas the images are even available for recipients to preview online. Apart from threats like bombs, the department says its main focus is mail theft, fraud, and narcotics cases.

Because a mail cover involves reading only information on the outside of the envelope or package, courts have not ruled it a violation of the Fourth Amendment. But that hasn’t stilled concerns about privacy and abuse.

For decades, the relatively obscure program has come under criticism for its lack of protections, for allowing data to be shared in broader cases than postal regulations allow, and for operating largely outside of public view. Critics have also warned that extensive surveillance of someone’s mail, especially combined with other surveillance, could create privacy violations.

After an audit, the Postal Service inspector general determined in 2014 that the Inspection Service did not have “sufficient controls” in place to ensure that its employees followed the agency’s policies in handling national security mail covers.

“Insufficient controls over the mail covers program could hinder the Postal Inspection Service’s ability to conduct effective investigations, lead to public concerns over privacy of mail, and harm the Postal Service’s brand,” it warned in a report.

Mail covers drew fire in the 1970s as part of government surveillance of groups deemed subversive, after a 15-year-old girl was apparently put under surveillance for writing a letter to a radical group. She had reportedly actually intended to contact a similarly named organization as part of a homework assignment.

With 5G, you won’t just be watching video. It’ll be watching you, too

What happens when movies can direct themselves? Remember the last time you felt terrified during a horror movie? Take that moment, and all the suspense leading up to it, and imagine it individually calibrated for you. It’s a terror plot morphing in real time, adjusting the story to your level of attention to lull you into a comfort zone before unleashing a personally timed jumpscare.

Or maybe being scared witless isn’t your idea of fun. Think of a rom-com that stops from going off the rails when it sees you rolling your eyes. Or maybe it tweaks the eye color of that character finally finding true love so it’s closer to your own, a personalized subtlety to make the love-struck protagonist more relatable.

You can thank (or curse) 5G for that.

When most people think of 5G, they’re envisioning an ultra-fast, high-bandwidth connection that lets you download seasons of your favorite shows in minutes. But 5G’s possibilities go way beyond that, potentially reinventing how we watch video, and opening up a mess of privacy uncertainties.

“Right now you make a video much the same way you did for TV,” Dan Garraway, co-founder of interactive video company Wirewax, said in an interview this month. “The dramatic thing is when you turn video into a two-way conversation. Your audience is touching and interacting inside the experience and making things happen as a result.” The personalized horror flick or tailored rom-com? They would hinge on interactive video layers that use emotional analysis based on your phone’s front-facing camera to adjust what you’re watching in real time. You may think it’s far-fetched, but one of the key traits of 5G is an ultra-responsive connection with virtually no lag, meaning the network and systems would be fast enough to react to your physical responses.
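
The feedback loop the article describes can be pictured as a tiny sketch (all names and thresholds here are invented for illustration, not any real product's logic): an attention score, derived from the front-facing camera, drives when the scare fires.

```python
# Illustrative sketch: a playback loop that holds tension while the viewer is
# alert and releases the jump-scare once attention drops below a threshold.
# The attention scores would come from camera-based emotional analysis; here
# they are just numbers in [0, 1].

def next_scene(attention_score, scenes_remaining):
    """Pick the next scene for an adaptive horror plot."""
    if scenes_remaining == 1:
        return "jumpscare"          # last scene: fire regardless
    if attention_score < 0.4:       # viewer lulled into a comfort zone
        return "jumpscare"
    return "suspense"               # keep building tension

def play(attention_stream):
    """Run the adaptive plot against a stream of attention readings."""
    timeline = []
    remaining = len(attention_stream)
    for score in attention_stream:
        scene = next_scene(score, remaining)
        timeline.append(scene)
        remaining -= 1
        if scene == "jumpscare":
            break                   # the personally timed scare ends the act
    return timeline
```

The point of the sketch is the latency requirement: a loop like this only works if the round trip from camera to network to renderer is near-instant, which is exactly the 5G trait the article highlights.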

Before you cast a skeptical eye at 5G, consider how the last explosion of mobile connectivity, from 3G to 4G LTE, changed how we consumed video. Being able to watch — and in YouTube’s case, upload — video on a mobile device reimagined how we watch TV and the types of programming that are big business. A decade ago, when Netflix was about two years into its transition to streaming from DVD mailings, its annual revenue was $1.4 billion. This year it’s on track for more than 10 times that ($15.806 billion).

5G’s mobility can bring video experiences to new locations. Spare gives an example straight out of Minority Report: entering a Gap retail store and being greeted by name. Taken further, the store could develop a three-dimensional video concierge for your phone — a pseudo-hologram that helps you find what you’re looking for. With 5G’s ability to make virtual and augmented reality more accessible, you could get a snapshot of what an outfit might look like on you without having to try it on.

Where things get crazy — and creepy — is imagining how 5G enables video to react to your involuntary cues and all the data you unconsciously provide. A show could mimic the weather or time of day to more closely match the atmosphere in real life.

For all the eye-popping possibilities, 5G unleashes a tangle of privacy questions. 5G could leverage every piece of visual information a phone can see on cameras front and back in real time. This level of visual imagery collection could pave the way for video interaction to happen completely automatically.

It’s also a potential privacy nightmare. But the lure of billions of dollars has already encouraged companies to make privacy compromises.

And that may make it feel like your personalized horror show is already here.

Now Apps Can Track You Even After You Uninstall Them

If it seems as though the app you deleted last week is suddenly popping up everywhere, it may not be mere coincidence. Companies that cater to app makers have found ways to game both iOS and Android, enabling them to figure out which users have uninstalled a given piece of software lately—and making it easy to pelt the departed with ads aimed at winning them back.

Adjust, AppsFlyer, MoEngage, Localytics, and CleverTap are among the companies that offer uninstall trackers, usually as part of a broader set of developer tools. Their customers include T-Mobile US, Spotify Technology, and Yelp. (And Bloomberg Businessweek parent Bloomberg LP, which uses Localytics.) Critics say they’re a fresh reason to reassess online privacy rights and limit what companies can do with user data.

Uninstall tracking exploits a core element of Apple Inc.’s and Google’s mobile operating systems: push notifications. Developers have always been able to use so-called silent push notifications to ping installed apps at regular intervals without alerting the user—to refresh an inbox or social media feed while the app is running in the background, for example. But if the app doesn’t ping the developer back, the app is logged as uninstalled, and the uninstall tracking tools add those changes to the file associated with the given mobile device’s unique advertising ID, details that make it easy to identify just who’s holding the phone and advertise the app to them wherever they go.
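The mechanism described above reduces to a simple set difference. A hedged sketch (not any vendor's actual code, and the names are invented): the backend records which advertising IDs were sent silent pushes, records which apps pinged back, and treats the gap as uninstalls to retarget.

```python
# Illustrative sketch of uninstall tracking via silent push notifications:
# devices that stop acknowledging pings are presumed to have removed the app.

def detect_uninstalls(pinged_ids, ack_ids):
    """pinged_ids: advertising IDs sent a silent push this interval.
    ack_ids: advertising IDs whose app pinged the developer back.
    Returns the IDs presumed to have uninstalled the app."""
    return pinged_ids - ack_ids

def retarget(uninstalled_ids, ad_template):
    """Hand presumed uninstallers to an ad network for win-back campaigns.
    ad_template is a stand-in for whatever the ad platform expects."""
    return [ad_template.format(ad_id) for ad_id in sorted(uninstalled_ids)]
```

Because the silent push produces no visible alert, the user never sees the probe; the only observable effect is the win-back ads that follow.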

At its best, uninstall tracking can be used to fix bugs or otherwise refine apps without having to bother users with surveys or more intrusive tools. But the ability to abuse the system beyond its original intent exemplifies the bind that accompanies the modern internet, says Jeremy Gillula of the Electronic Frontier Foundation. To participate, users must typically agree to share their data freely, probably forever, not knowing exactly how it may be used down the road. “As an app developer, I would expect to be able to know how many people have uninstalled an app,” he says. “I would not say that, as an app developer, you have a right to know exactly who installed and uninstalled your app.”

A Future Where Everything Becomes a Computer Is As Creepy As You Feared

More than 40 years ago, Bill Gates and Paul Allen founded Microsoft with a vision for putting a personal computer on every desk. […] In recent years, the tech industry’s largest powers set their sights on a new target for digital conquest. They promised wild conveniences and unimaginable benefits to our health and happiness. There’s just one catch, which often goes unstated: If their novelties take off without any intervention or supervision from the government, we could be inviting a nightmarish set of security and privacy vulnerabilities into the world. And guess what. No one is really doing much to stop it. The industry’s new goal? Not a computer on every desk nor a connection between every person, but something grander: a computer inside everything, connecting everyone.

Cars, door locks, contact lenses, clothes, toasters, refrigerators, industrial robots, fish tanks, sex toys, light bulbs, toothbrushes, motorcycle helmets — these and other everyday objects are all on the menu for getting “smart.” Hundreds of small start-ups are taking part in this trend — known by the marketing catchphrase “the internet of things” — but like everything else in tech, the movement is led by giants, among them Amazon, Apple and Samsung. [American cryptographer and computer security professional Bruce Schneier] argues that the economic and technical incentives of the internet-of-things industry do not align with security and privacy for society generally. Putting a computer in everything turns the whole world into a computer security threat. […] Mr. Schneier says only government intervention can save us from such emerging calamities. “I can think of no industry in the past 100 years that has improved its safety and security without being compelled to do so by government.”

Facial recognition used to identify and catalogue animals

Salmon are just the latest entry in a growing cornucopia of animal faces loaded into databases. For some animals, the biometric data gathered from them is being used to aid in conservation efforts. For others, the resulting AI could help ward off poachers. While partly creepy and partly very cute, monitoring of these animals can both help protect their populations and ensure safe, traceable livestock for developing communities…

U.K. researchers are using online resources like Flickr and Instagram to help build and strengthen a database that will eventually help track global tiger populations in real time. Once collected, the photos are analyzed by everyday people in a free app called Wildsense… The mighty lion is being surveilled too. Conservationists and wildlife teachers are using facial recognition to keep tabs on a database of over 1,000 lions… Wildlife experts are tracking elephants to protect them from encroaching poachers. Using Google’s Cloud AutoML Vision machine learning software, the technology will uniquely identify elephants in the wild. According to the Evening Standard, the tech will even send out an alert if it detects poachers in the same frame.

The story of whale facial tracking is one of crowdsourcing success. After struggling to distinguish specific whales from one another on his own, marine biologist Christian Khan uploaded the photos to data-competition site Kaggle and, within four months, data-science company Deepsense was able to accurately detect individual whale faces with 87% accuracy. Since then, detection rates have steadily improved and are helping conservationists track and monitor the struggling aquatic giant.
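Individual-animal recognition of the kind described above generally comes down to comparing face embeddings against a labeled database. A minimal sketch, assuming the embeddings already exist (this is not Deepsense's actual pipeline, and the names are illustrative):

```python
# Illustrative sketch: identify an individual animal by finding the closest
# known face embedding, with a threshold so unknown animals return no match.

import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def identify(embedding, database, threshold=0.9):
    """database: mapping of individual name -> known embedding.
    Returns the best-matching individual, or None if nothing is close enough."""
    best_name, best_score = None, threshold
    for name, known in database.items():
        score = cosine_similarity(embedding, known)
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```

The threshold is what separates "this is whale #17" from "this is a whale we have never catalogued," which is exactly the judgment the crowdsourced detectors had to learn.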

U.S. researchers are trying to protect “the world’s most endangered animal” with LemurFaceID, which is able to accurately differentiate between two lemur faces with 97% accuracy. But “In the livestock surveillance arms race China is definitely leading the charge,” the article notes, citing e-commerce giant JD.com and its use of facial recognition to monitor herds of pigs to detect their age, weight, and diet.

And one Chinese company even offers a blockchain-based chicken tracking system (codenamed “GoGo Chicken”) with an app that can link a grocery store chicken to “its birthplace, what food it ate and how many steps it walked during its life.”

Bloomberg’s Spy Chip Story Reveals the Murky World of National Security Reporting

Chinese spies reportedly infiltrated the supply chain and installed tiny chips the size of a pencil tip on the motherboards built by Supermicro, which are used in data center servers across the U.S. tech industry — from Apple to Amazon. That chip can compromise data on the server, allowing China to spy on some of the world’s most wealthy and powerful companies. Apple, Amazon and Supermicro — and the Chinese government — strenuously denied the allegations. Apple also released its own standalone statement later in the day, as did Supermicro.

Welcome to the murky world of national security reporting.

I’ve covered cybersecurity and national security for about five years, most recently at CBS, where I reported exclusively on several stories — including the U.S. government’s covert efforts to force tech companies to hand over their source code in an effort to find vulnerabilities and conduct surveillance. And last year I revealed that the National Security Agency had its fifth data breach in as many years, and classified documents showed that a government data collection program was far wider than first thought and was collecting data on U.S. citizens.

Even with this story, my gut is mixed.

In the aftermath of the disclosure of PRISM, the NSA’s data pulling program that implicated several tech companies — including Apple, but not Amazon — the companies came out fighting, vehemently denying any involvement or connection. Was it a failure of reporting? Partially, yes. But the companies also had plausible deniability by cherry picking what they rebuffed. Despite a claim by the government that PRISM had “direct access” to tech companies’ servers, the companies responded that this wasn’t true. They didn’t, however, refute indirect access — which the companies wouldn’t be allowed to say in any case.

Facebook Is Teeming With Fake Accounts Created By Undercover Cops

In the summer of 2015, as Memphis exploded with protests over the police killing of a 19-year-old man, activists began hearing on Facebook from someone called Bob Smith. The name was generic, and so was his profile picture: a Guy Fawkes mask, the symbol of anti-government dissent. Smith acted as if he supported the protesters, and, slowly, they let him into their online community. Over the next three years, dozens of them accepted his friend requests, allowing him to observe private discussions over marches, rallies and demonstrations.

But Smith was not real. He was the creation of a white detective in the Memphis Police Department’s Office of Homeland Security whose job was to keep tabs on local activists across the spectrum, from Black Lives Matter to Confederate sympathizers.

The detective, Tim Reynolds, outed himself in August under questioning by the American Civil Liberties Union of Tennessee, which sued the police department for allegedly violating a 1978 agreement that prohibited police from conducting surveillance of lawful protests. The revelation validated many activists’ distrust of local authorities. It also provided a rare look into the ways American law enforcement operates online, taking advantage of a loosely regulated social media landscape — and citizens’ casual relinquishing of their privacy — to expand monitoring of the public.

The proliferation of fake Facebook accounts and other means of social media monitoring ─ including the use of software to crunch data about people’s online activity ─ illustrates a policing “revolution” that has allowed authorities to not only track people but also map out their networks, said Rachel Levinson-Waldman, senior counsel at New York University School of Law’s Brennan Center for Justice.

She is among many scholars who worry that expanded social media surveillance could make people less likely to engage in online activities protected by the First Amendment, from sharing their opinions to organizing protests of the government. But there are few laws governing this kind of monitoring. Few courts have taken up the issue. And most police departments don’t have policies on how officers can use social media for investigations, according to Levinson-Waldman’s research.

“It’s pretty open territory,” she said.

Mobile Websites Can Tap Into Your Phone’s Sensors Without Asking

When an app wants to access data from your smartphone’s motion or light sensors, it often makes that capability clear. That keeps a fitness app, say, from counting your steps without your knowledge. But a team of researchers has discovered that the rules don’t apply to websites loaded in mobile browsers, which can often access an array of device sensors without any notifications or permissions whatsoever.

That mobile browsers offer developers access to sensors isn’t necessarily problematic on its own. It’s what helps those services automatically adjust their layout, for example, when you switch your phone’s orientation. And the World Wide Web Consortium standards body has codified how web applications can access sensor data. But the researchers—Anupam Das of North Carolina State University, Gunes Acar of Princeton University, Nikita Borisov of the University of Illinois at Urbana-Champaign, and Amogh Pradeep of Northeastern University—found that the standards allow for unfettered access to certain sensors. And sites are using it.

The researchers found that of the top 100,000 sites—as ranked by Amazon-owned analytics company Alexa—3,695 incorporate scripts that tap into one or more of these accessible mobile sensors. That includes plenty of big names, including Wayfair, Priceline.com, and Kayak.

“If you use Google Maps in a mobile browser you’ll get a little popup that says, ‘This website wants to see your location,’ and you can authorize that,” says Borisov. “But with motion, lighting, and proximity sensors there isn’t any mechanism to notify the user and ask for permission, so they’re being accessed and that is invisible to the user. For this collection of sensors there isn’t a permissions infrastructure.”

That unapproved access to motion, orientation, proximity, or light sensor data alone probably wouldn’t compromise a user’s identity or device. And a web page can only access sensors as long as a user is actively browsing the page, not in the background. But the researchers note that on a malicious website, the information could fuel various types of attacks, like using ambient light data to make inferences about a user’s browsing, or using motion sensor data as a sort of keylogger to deduce things like PINs.

In past work, researchers have also shown that they can use the unique calibration features of motion sensors on individual devices to identify and track them across websites. And while the World Wide Web Consortium standards classify data from these sensors as “not sensitive enough to warrant specific sensor permission grants,” the group does acknowledge that there are some potential privacy concerns. “Implementations may consider permissions or visual indicators to signify the use of sensors by the page,” the standard suggests.
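The calibration-fingerprinting idea in the previous paragraph can be sketched briefly. The premise: every device's motion sensor has a tiny, stable bias, so if two browsing sessions report the same bias vector within noise, they likely came from the same device. The values and tolerance below are invented for illustration, not taken from the cited research.

```python
# Illustrative sketch of cross-site device tracking via sensor calibration:
# estimate the constant accelerometer bias per session and compare.

def sensor_bias(readings):
    """Estimate the constant bias from raw 3-axis samples taken at rest."""
    n = len(readings)
    return tuple(sum(sample[axis] for sample in readings) / n
                 for axis in range(3))

def same_device(bias_a, bias_b, tolerance=0.005):
    """True if two bias estimates agree on every axis within `tolerance`."""
    return all(abs(x - y) <= tolerance for x, y in zip(bias_a, bias_b))
```

Because the bias is a hardware property, clearing cookies or switching sites does nothing to change it, which is what makes this class of fingerprint troubling.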

The prevalence of ad networks also makes it difficult to get a handle on the issue. The researchers even found three scripts attempting to access user sensors in ad modules on WIRED.com, though at least one had been removed when the researchers rechecked the site for this story. Other media sites, including CNN, the Los Angeles Times, and CNET have ad networks using similar scripts as well.

Using Wi-Fi To Count People Through Walls

Whether you’re trying to figure out how many students are attending your lectures or how many evil aliens have taken your Space Force brethren hostage, Wi-Fi can now be used to count them all. The system, created by researchers at UC Santa Barbara, uses a single Wi-Fi router outside of the room to measure attenuation and signal drops. From the release: “The transmitter sends a wireless signal whose received signal strength (RSSI) is measured by the receiver. Using only such received signal power measurements, the receiver estimates how many people are inside the room — an estimate that closely matches the actual number. It is noteworthy that the researchers do not do any prior measurements or calibration in the area of interest; their approach has only a very short calibration phase that need not be done in the same area.” This means that you could simply walk up to a wall and press a button to count, with a high degree of accuracy, how many people are walking around. The system can measure up to 20 people in its current form.
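A toy model conveys the intuition, though it is not the UCSB researchers' actual estimator: assume each person in the room attenuates the through-wall signal by roughly a fixed number of dB, so occupancy can be read off the drop from an empty-room baseline. The per-person figure below is invented for illustration.

```python
# Illustrative sketch: estimate room occupancy from received signal strength
# (RSSI), assuming a crude linear attenuation-per-person model.

def estimate_occupancy(rssi_dbm, empty_room_rssi_dbm, db_per_person=1.5):
    """Estimate how many people are in the room.

    rssi_dbm: current received signal strength through the wall.
    empty_room_rssi_dbm: baseline measured with the room empty.
    """
    drop = empty_room_rssi_dbm - rssi_dbm   # how much signal people absorb
    return max(0, round(drop / db_per_person))
```

The real system is more sophisticated (it models signal blocking and multipath scattering rather than a flat per-person figure), but the core input is the same: nothing more than received power measurements.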

India’s Biometric Database Is Creating A Perfect Surveillance State — And U.S. Tech Companies Are On Board

Big U.S. technology companies are involved in the construction of one of the most intrusive citizen surveillance programs in history. For the past nine years, India has been building the world’s biggest biometric database by collecting the fingerprints, iris scans and photos of nearly 1.3 billion people. For U.S. tech companies like Microsoft, Amazon and Facebook, the project, called Aadhaar (which means “proof” or “basis” in Hindi), could be a gold mine. The CEO of Microsoft has repeatedly praised the project, and local media have carried frequent reports on consultations between the Indian government and senior executives from companies like Apple and Google (in addition to South Korean-based Samsung) on how to make tech products Aadhaar-enabled. But when reporters of HuffPost and HuffPost India asked these companies in the past weeks to confirm they were integrating Aadhaar into their products, only one company — Google — gave a definitive response.

That’s because Aadhaar has become deeply controversial, and the subject of a major Supreme Court of India case that will decide the future of the program as early as this month. Launched nine years ago as a simple and revolutionary way to streamline access to welfare programs for India’s poor, the database has become Indians’ gateway to nearly any type of service — from food stamps to a passport or a cell phone connection. Practical errors in the system have caused millions of poor Indians to lose out on aid. And the exponential growth of the project has sparked concerns among security researchers and academics that the project is the first step toward a surveillance society to rival China’s.

Phone Numbers Were Never Meant as ID

One key lesson from the recent T-Mobile breach and several others: our phone numbers, which serve as a means to identify and verify ourselves, are increasingly being targeted, and companies are neither showing an appetite for an alternative identity management system nor introducing more safeguards for how phone numbers are handled and exchanged. From a report:
Identity management experts have warned for years about over-reliance on phone numbers. But the United States doesn’t offer any type of universal ID, which means private institutions and even the federal government itself have had to improvise. As cell phones proliferated, and phone numbers became more reliably attached to individuals long term, it was an obvious choice to start collecting those numbers even more consistently as a type of ID. But over time, SMS messages, biometric scanners, encrypted apps, and other special functions of smartphones have evolved into forms of authentication as well.

“The bottom line is society needs identifiers,” says Jeremy Grant, coordinator of the Better Identity Coalition, an industry collaboration that includes Visa, Bank of America, Aetna, and Symantec. “We just have to make sure that knowledge of an identifier can’t be used to somehow take over the authenticator. And a phone number is only an identifier; in most cases, it’s public.” Think of your usernames and passwords. The former are generally public knowledge; it’s how people know who you are. But you keep the latter guarded, because it’s how you prove who you are.

The use of phone numbers as both lock and key has led to the rise, in recent years, of so-called SIM swapping attacks, in which an attacker steals your phone number. When you add two-factor authentication to an account and receive your codes through SMS texts, they go to the attacker instead, along with any calls and texts intended for the victim. Sometimes attackers even use inside sources at carriers who will transfer numbers for them.
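The failure mode is easy to see in miniature. A hedged sketch (all names invented): SMS delivery is routed by the carrier's number-to-SIM mapping, so whoever controls that mapping receives the one-time code, and a SIM swap is nothing more than rewriting one entry.

```python
# Illustrative sketch of why SMS codes fail as authenticators: the "secret"
# is delivered to whatever SIM the carrier currently maps the number to.

class Carrier:
    def __init__(self):
        self.sim_for_number = {}   # phone number -> current SIM holder

    def activate(self, number, holder):
        # A SIM swap, legitimate or fraudulent, is just this one call.
        self.sim_for_number[number] = holder

    def deliver_sms(self, number, message):
        return (self.sim_for_number[number], message)

def send_2fa_code(carrier, number, code):
    """The bank 'authenticates' by texting the number on file."""
    return carrier.deliver_sms(number, f"Your code is {code}")
```

Note that the bank's side of the protocol is unchanged by the attack; the compromise happens entirely inside the carrier's mapping, which the account holder never sees.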

Banks and Retailers Are Tracking How You Type, Swipe and Tap

When you’re browsing a website and the mouse cursor disappears, it might be a computer glitch — or it might be a deliberate test to find out who you are.

The way you press, scroll and type on a phone screen or keyboard can be as unique as your fingerprints or facial features. To fight fraud, a growing number of banks and merchants are tracking visitors’ physical movements as they use websites and apps.

The data collection is invisible to those being watched. Using sensors in your phone or code on websites, companies can gather thousands of data points, known as “behavioral biometrics.”

A phone’s touchscreen sensors can track where and how you swipe your device to help determine who you are.

The angle at which you hold your device is one of the many biometric markers that can be measured.
Behavioral monitoring software churns through thousands of elements to calculate a probability-based guess about whether a person is who they claim. Two major advances have fed its growing use: the availability of cheap computing power and the sophisticated array of sensors now built into most smartphones.
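A minimal sketch of that probability-based guess (this is not BioCatch's model; the distance metric, scale, and threshold are invented): compare the inter-keystroke timing profile of a session against the profile enrolled for the account, and turn the distance into a match score.

```python
# Illustrative sketch of behavioral-biometric scoring from keystroke timings.

import math

def match_probability(enrolled, observed, scale=50.0):
    """enrolled/observed: lists of inter-key intervals in milliseconds.
    Returns a 0..1 score; 1.0 means the timings are identical."""
    distance = math.sqrt(sum((a - b) ** 2 for a, b in zip(enrolled, observed)))
    return math.exp(-distance / scale)

def is_account_owner(enrolled, observed, threshold=0.5):
    """Decide whether the session matches the enrolled typing profile."""
    return match_probability(enrolled, observed) >= threshold
```

A production system would fold in thousands of such features (swipe pressure, device angle, scroll rhythm) rather than one timing vector, but the shape of the decision is the same: a distance, a score, a threshold.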

The system’s unobtrusiveness is part of its appeal, Mr. Hanley said. Traditional physical biometrics, like fingerprints or irises, require special scanning hardware for authentication. But behavioral traits can be captured in the background, without customers doing anything to sign up.

BioCatch occasionally tries to elicit a reaction. It can speed up the selection wheel you use to enter data like dates and times on your phone, or make your mouse cursor disappear for a fraction of a second.

“Everyone reacts a little differently to that,” said Frances Zelazny, BioCatch’s chief strategy and marketing officer. “Some people move the mouse side to side; some people move it up and down. Some bang on the keyboard.”

Because your reaction is so individual, it’s hard for a fraudulent user to fake. And because customers never know the monitoring technology is there, it doesn’t impose the kind of visible, and irritating, roadblocks that typically accompany security tests. You don’t need to press your thumb on your phone’s fingerprint reader or type in an authentication code.

Biometric software can also determine the pressure you tend to apply to your phone when you tap and type.

“We don’t have to sit people down in a room and get them to type under perfect laboratory conditions,” said Neil Costigan, the chief executive of BehavioSec, a Palo Alto, Calif., company that makes software used by many Nordic banks. “You just watch them, silently, while they go about their normal account activities.”

How Fracking Companies Use Facebook Surveillance to Ban Protest

Facebook is being used by oil and gas companies to clamp down on protest. Three companies are currently seeking injunctions against protesters: British chemical giant INEOS, which has the largest number of shale gas drilling licenses in the UK, and two small UK outfits, UK Oil and Gas (UKOG) and Europa Oil and Gas. Among the thousands of pages of documents submitted to British courts by these companies are hundreds of Facebook and Twitter posts from anti-fracking protesters and campaign groups, uncovered by Motherboard in partnership with investigative journalists at DeSmog UK. They show how fracking companies are using social media surveillance carried out by a private firm to strengthen their cases in court, using activists’ personal information to discredit them and justify banning their protests.

Included in the evidence supplied by the oil and gas companies to the courts are many personal or seemingly irrelevant campaigner posts. Some are from conversations on Facebook groups dedicated to particular protests or camps, while others have been captured from individuals’ own profile pages. For instance, a picture of a mother with her baby at a protest was submitted as part of the Europa Oil and Gas case. Another screenshot of a post in the Europa bundle shows a hand-written note from one of the protesters’ mothers accompanying a care package with hand-knitted socks that was sent to an anti-fracking camp. One post included in the UKOG hearing bundle shows two protesters sharing a pint in the sun — not at a protest camp, nor shared on any of the campaign pages’ Facebook groups. A screenshot from INEOS’s hearing bundle shows posts from a protester to his own Facebook wall regarding completely unrelated issues such as prescription drugs, and a generic moan about his manager.

It is not always clear how such posts are being used against these activists except to portray them in a bad light, and a judge could disregard them as irrelevant to the case. But their often personal nature raises questions about how these companies were scrutinising the private lives of campaigners to justify shutting down their protests.

In 2011, the UK government ordered a public inquiry led by Lord Justice Leveson into the culture, practices and ethics of the British press after a leading tabloid newspaper was convicted of phone hacking. One of the activists subject to surveillance, Jon O’Houston, who has been part of the Broadford Bridge Protection Camp, said he felt it was equivalent to the phone hacking cases, which led to the Leveson review.

“What’s said in the groups is generally taken either out of context or cherry-picked”, O’Houston told Motherboard. “When taken out of context, you can make anything look bad or good.”

Despite his posts being used to strengthen the case for injunctions against protesters, he said he wouldn’t necessarily change his behaviour on social media.

“I don’t think I’d ever change the way we operate our groups. There’s too much information there already. If someone wants to go back five years and have a look at what was going on in these groups five years ago, they could do that,” he said.

“It would be very difficult if we stopped using Facebook as a platform,” he added. “We would lose so much of that important stuff. In a way, it’s got us trapped.”

How Smart TVs in Millions of US Homes Track More Than What’s on Tonight

The growing concern over online data and user privacy has been focused on tech giants like Facebook and devices like smartphones. But people’s data is also increasingly being vacuumed right out of their living rooms via their televisions, sometimes without their knowledge. From a report:

In recent years, data companies have harnessed new technology to immediately identify what people are watching on internet-connected TVs, then use that information to send targeted advertisements to other devices in their homes. Marketers, forever hungry to get their products in front of the people most likely to buy them, have eagerly embraced such practices. But the companies watching what people watch have also faced scrutiny from regulators and privacy advocates over how transparent they are being with users.

Samba TV is one of the bigger companies that track viewer information to make personalized show recommendations. The company said it collected viewing data from 13.5 million smart TVs in the United States, and it has raised $40 million in venture funding from investors including Time Warner, the cable operator Liberty Global and the billionaire Mark Cuban. Samba TV has struck deals with roughly a dozen TV brands — including Sony, Sharp, TCL and Philips — to place its software on certain sets. When people set up their TVs, a screen urges them to enable a service called Samba Interactive TV, saying it recommends shows and provides special offers “by cleverly recognizing onscreen content.” But the screen, which contains the enable button, does not detail how much information Samba TV collects to make those recommendations…. Once enabled, Samba TV can track nearly everything that appears on the TV on a second-by-second basis, essentially reading pixels to identify network shows and ads, as well as programs on Netflix and HBO and even video games played on the TV.
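That "reading pixels" step is a form of automatic content recognition, and its skeleton is simple. A hedged sketch (not Samba TV's implementation; the fingerprint here is deliberately naive): reduce each sampled frame to a fingerprint and look it up in a database of known shows and ads.

```python
# Illustrative sketch of automatic content recognition on a smart TV:
# fingerprint what is on screen each second and match it against known content.

def fingerprint(frame_pixels):
    """Reduce a frame to a coarse, order-sensitive fingerprint.
    Real systems use robust perceptual hashes; a plain hash stands in here."""
    return hash(tuple(frame_pixels))

def recognize(frame_pixels, content_db):
    """content_db: mapping of fingerprint -> title. Returns title or None."""
    return content_db.get(fingerprint(frame_pixels))

def watch_log(frames, content_db):
    """One entry per sampled frame: what the TV was showing each second."""
    return [recognize(f, content_db) for f in frames]
```

The privacy weight of the technique comes from the last function: run once a second against a large enough database, it yields a second-by-second log of everything shown on the set, regardless of input source.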

MIT’s AI Can Track Humans Through Walls With Just a Wifi Signal

Researchers at the Massachusetts Institute of Technology have developed a new piece of software that uses wifi signals to monitor the movements, breathing, and heartbeats of humans on the other side of walls. While the researchers say this new tech could be used in areas like remote healthcare, it could in theory be used in more dystopian applications.

“We actually are tracking 14 different joints on the body […] the head, the neck, the shoulders, the elbows, the wrists, the hips, the knees, and the feet,” Dina Katabi, a professor of electrical engineering and computer science at MIT, said. “So you can get the full stick-figure that is dynamically moving with the individuals that are obstructed from you — and that’s something new that was not possible before.” The technology works a little bit like radar, but to teach their neural network how to interpret these granular bits of human activity, the team at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) had to create two separate A.I.s: a student and a teacher.

The team developed one A.I. program that monitored human movements with a camera on one side of a wall and fed that information to their wifi X-ray A.I., called RF-Pose, as it struggled to make sense of the radio waves passing through the wall from the other side. The research builds on a longstanding project at CSAIL led by Katabi, which hopes to use this wifi tracking to passively monitor the elderly and automatically alert EMTs and medical professionals if a person falls or suffers some other injury.
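The teacher-student setup above is a form of cross-modal supervision: a camera-based model that can already estimate pose produces labels, and the RF model learns to reproduce them from radio data alone. This minimal sketch shows the idea with invented one-dimensional "signals" and a stand-in teacher; the real RF-Pose system uses deep networks over radio heatmaps.

```python
# Minimal sketch of cross-modal teacher-student training, the idea
# behind RF-Pose. Function names and the 1-D "signals" are invented.

def teacher_pose(camera_frame):
    # Stand-in for a vision pose estimator: here, just the mean brightness.
    return sum(camera_frame) / len(camera_frame)

def train_student(pairs, lr=0.01, epochs=500):
    """Fit pose ~ w * rf_reading by gradient descent on teacher labels."""
    w = 0.0
    for _ in range(epochs):
        for rf, cam in pairs:
            target = teacher_pose(cam)      # label comes from the camera
            grad = 2 * (w * rf - target) * rf
            w -= lr * grad
    return w

# Synchronized (RF, camera) recordings of the same scene.
data = [(1.0, [2.0, 2.0]), (2.0, [4.0, 4.0]), (3.0, [6.0, 6.0])]
w = train_student(data)

# After training, the student predicts pose from RF alone -- even
# when no camera (or no line of sight) is available.
print(round(w * 2.0, 2))
```

Once trained, the camera is no longer needed, which is exactly what lets the system operate through walls.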

The MIT lab behind the innovation has previously received funding from the Pentagon’s Defense Advanced Research Projects Agency (DARPA), and one of its researchers has presented work at a security research symposium curated by a C-suite member of In-Q-Tel, the CIA’s high-tech venture capital firm.

“5 Years on, US Government Still Counting Snowden Leak Costs”

The top U.S. counterintelligence official said journalists have released only about 1 percent of the material taken by the 34-year-old American. … “This past year, we had more international, Snowden-related documents and breaches than ever,” Bill Evanina, who directs the National Counterintelligence and Security Center, said at a recent conference. “Since 2013, when Snowden left, there have been thousands of articles around the world with really sensitive stuff that’s been leaked.”

18 November 2016. The DoD claims 1,700,000 files were taken, of which roughly 0.04% has been released. The ACLU lists 525 pages released by the press. However, if, as The Washington Post reported, a minimum of 250,000 pages are in the Snowden files, then less than 1 percent has been released.

Google plan for data-driven ‘smart city’ sparks privacy, democracy concerns

In the Canadian city of Toronto, city officials are negotiating a project that would give a section of the city’s waterfront to the US tech giant Google. Under the arrangement, Google affiliate Sidewalk Labs will build and run a high-tech “radical mixed-use” site called Quayside. This “smart city” plan involves creating a neighbourhood “from the internet up”, powered and served by data, with sensors monitoring everything from air quality to pedestrian traffic, even the flushing of toilets. Amenities like garbage disposal and goods delivery are to be coordinated and driven by AI and robotics.

The proposed parcel of land isn’t huge, but it’s not insubstantial either – it covers about half a square kilometre, and there are already suggestions it could be extended.

For Eric Schmidt, executive chairman of Alphabet — the parent company of both Google and Sidewalk Labs — it’s the culmination of a long-held ambition.

“Give us a city and put us in charge,” he once famously declared.

Following the Facebook/Cambridge Analytica scandal, some, like Dr Jathan Sadowski at the University of Sydney, worry about the implications of putting a private tech company in charge of both urban development and urban life.

“What’s in it for them? It’s data,” he says. “It allows them to get really massive amounts of granular data about urban life and urban environments.”

“You’ll have a city that’s based on, or built around, proprietary platforms, data harvesting, corporate control.”

Targeted advertising hits emergency rooms

Patients sitting in emergency rooms, at chiropractors’ offices and at pain clinics in the Philadelphia area may start noticing on their phones the kind of messages typically seen along highway billboards and public transit: personal injury law firms looking for business by casting mobile online ads at patients.

The potentially creepy part? They’re only getting fed the ad because somebody knows they are in an emergency room.

The technology behind the ads, known as geofencing, or placing a digital perimeter around a specific location, has been deployed by retailers for years to offer coupons and special offers to customers as they shop. Bringing it into health care spaces, however, is raising alarm among privacy experts.
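The geofencing check itself is simple: a device's reported location is tested against a "fence", often a circle of some radius around a target address. The sketch below uses an invented Philadelphia-area coordinate and radius; production ad platforms layer this basic check with polygon fences and location-history matching.

```python
# Minimal sketch of a circular geofence check. The ER coordinates
# and 100 m radius are invented for illustration.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in metres."""
    r = 6371000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(device, fence_center, radius_m):
    """True if the device's reported location falls inside the fence."""
    return haversine_m(*device, *fence_center) <= radius_m

er_entrance = (39.9566, -75.1899)  # hypothetical Philadelphia ER

print(inside_geofence((39.9567, -75.1898), er_entrance, radius_m=100))  # True
print(inside_geofence((39.9700, -75.1600), er_entrance, radius_m=100))  # False
```

Any device whose ad-request location passes this test can then be served the targeted ad, which is why bringing the technique into waiting rooms alarms privacy experts.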

High School in China Installs Facial Recognition Cameras to Monitor Students’ Attentiveness

A high school in Hangzhou City, Zhejiang Province, located on the eastern coast of China, has employed facial recognition technology to monitor students’ attentiveness in class.

At Hangzhou Number 11 High School, three cameras at the front of the classroom scan students’ faces every 30 seconds, analyzing their facial expressions to detect their mood, according to a May 16 report in the state-run newspaper The Paper.

The different moods—surprised, sad, antipathy, angry, happy, afraid, neutral—are recorded and averaged during each class.

A display screen, visible only to the teacher, shows the data in real time. If a student’s score falls below a set threshold, the system deems that student to be not paying enough attention.

A video shot by Zhejiang Daily Press revealed that the system—coined the “smart classroom behavior management system” by the school—also analyzes students’ actions, categorized into: reading, listening, writing, standing up, raising hands, and leaning on the desk.

An electronic screen also displays a list of student names deemed “not paying attention.”
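The scoring described above can be sketched as follows: mood readings taken every 30 seconds are averaged per student, and names falling below a cutoff are flagged. The mood weights and the cutoff here are invented for illustration; the report does not disclose the school system's actual values.

```python
# Hypothetical sketch of the classroom attention scoring. Mood
# weights and the 0.5 cutoff are invented, not from the report.

MOOD_SCORE = {"happy": 1.0, "neutral": 0.8, "surprised": 0.6,
              "sad": 0.3, "afraid": 0.3, "antipathy": 0.2, "angry": 0.1}

def flag_inattentive(readings, cutoff=0.5):
    """readings: {student: [mood every 30 s]} -> names below the cutoff."""
    flagged = []
    for student, moods in readings.items():
        avg = sum(MOOD_SCORE[m] for m in moods) / len(moods)
        if avg < cutoff:
            flagged.append(student)
    return sorted(flagged)

print(flag_inattentive({
    "student_a": ["neutral", "happy", "neutral"],
    "student_b": ["sad", "angry", "antipathy"],
}))  # ['student_b']
```

Averaging over a whole class period, as the school does, smooths out momentary expressions but still reduces a student's state to a single number on the teacher's screen.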

The school began using the technology at the end of March, vice principal Zhang Guanchao told The Paper. Zhang added that students felt like they were being monitored when the system was first put in place, but have since gotten used to it.

Facebook accused of conducting mass surveillance through its apps

Facebook used its apps to gather information about users and their friends, including some who had not signed up to the social network, reading their text messages, tracking their locations and accessing photos on their phones, a court case in California alleges.

The claims of what would amount to mass surveillance are part of a lawsuit brought against the company by the former startup Six4Three, listed in legal documents filed at the superior court in San Mateo as part of a court case that has been ongoing for more than two years.

It alleges that Facebook used a range of methods, some adapted to the different phones that users carried, to collect information it could use for commercial purposes.

“Facebook continued to explore and implement ways to track users’ location, to track and read their texts, to access and record their microphones on their phones, to track and monitor their usage of competitive apps on their phones, and to track and monitor their calls,” one court document says.

But all details about the mass surveillance scheme have been redacted at Facebook’s request in Six4Three’s most recent filings. Facebook claims these are confidential business matters.

Other alleged projects included one to remotely activate Bluetooth, allowing the company to pinpoint a user’s location without them explicitly agreeing to it. Another involved the development of privacy settings with an early end date that was not flagged to users, letting them expire without notice, the court documents claim.

Facebook admitted recently that it had collected call and text message data from users, but said it only did so with prior consent. However, the Guardian has reported that it logged some messages without explicitly notifying users. The company could not see text messages for iPhone users, the court case alleged, but could access any photos taken on a phone or stored in the built-in “camera roll” archive. Facebook has not disclosed how those photos were analysed.

Facebook has not fully disclosed the manner in which it pre-processes photos on the iOS camera roll, meaning if a user has any Facebook app installed on their iPhone, then Facebook accesses and analyses the photos the user takes and/or stores on the iPhone, the complainant alleges.
