Resources

Stare Into The Lights My Pretties

Parents are using GPS ankle monitors to track their teenagers like criminals

There’s no shortage of GPS trackers for parents who want to keep tabs on their children’s driving. After all, car accidents are the leading cause of death for American teens. For some parents, that’s enough.

For others, nothing but a full-on ankle monitor—the kind used to track people released on bail or parole—will do.

Frank Kopczynski, the owner of Tampa Bay Monitoring (in fact located in Clearwater, Florida), also runs Action Plus Bail Bonds. He said the biggest challenge in strapping an ankle monitor to a teenage non-offender is making sure they can't remove it themselves.

“We provide a bracelet that is near-impossible to cut off,” Kopczynski told Quartz. “It also allows us to have two-way communication and gives us the option of sounding a piercing alarm.”

Along with the 95-decibel siren, if a teen is out past curfew, their parents can call Tampa Bay Monitoring’s office, and Kopczynski or one of his employees will activate the ankle monitor’s speaker and tell the child it’s time to get home or the police will be called. Hearing “this god-like voice out of nowhere” is generally effective, said Kopczynski; since the system is two-way, staff can also monitor the teen covertly.

The electronic monitoring industry has more than doubled in size in recent years, and has expanded well beyond its initial market of people on bail or parole. For example, immigrants detained by US authorities and put on an electronic monitoring program are required to pay extremely high fees—$880 for activation plus $420 a month— while they await their hearings.

As Facebook Raised a Privacy Wall, It Carved an Opening for Tech Giants

Internal documents show that the social network gave Microsoft, Amazon, Spotify and others far greater access to people’s data than it has disclosed.

For years, Facebook gave some of the world’s largest technology companies more intrusive access to users’ personal data than it has disclosed, effectively exempting those business partners from its usual privacy rules, according to internal records and interviews.

The special arrangements are detailed in hundreds of pages of Facebook documents obtained by The New York Times. The records, generated in 2017 by the company’s internal system for tracking partnerships, provide the most complete picture yet of the social network’s data-sharing practices. They also underscore how personal data has become the most prized commodity of the digital age, traded on a vast scale by some of the most powerful companies in Silicon Valley and beyond.

Facebook allowed Microsoft’s Bing search engine to see the names of virtually all Facebook users’ friends without consent, the records show, and gave Netflix and Spotify the ability to read Facebook users’ private messages.

The social network permitted Amazon to obtain users’ names and contact information through their friends, and it let Yahoo view streams of friends’ posts as recently as this summer, despite public statements that it had stopped that type of sharing years earlier.

Facebook has been reeling from a series of privacy scandals, set off by revelations in March that a political consulting firm, Cambridge Analytica, improperly used Facebook data to build tools that aided President Trump’s 2016 campaign. Acknowledging that it had breached users’ trust, Facebook insisted that it had instituted stricter privacy protections long ago. Mark Zuckerberg, the chief executive, assured lawmakers in April that people “have complete control” over everything they share on Facebook.

[Facebook’s strategy in times of crisis: delay, deny and deflect.]

Facebook began forming data partnerships when it was still a relatively young company. Mr. Zuckerberg was determined to weave Facebook’s services into other sites and platforms, believing it would stave off obsolescence and insulate Facebook from competition. Every corporate partner that integrated Facebook data into its online products helped drive the platform’s expansion, bringing in new users, spurring them to spend more time on Facebook and driving up advertising revenue. At the same time, Facebook got critical data back from its partners.

The partnerships were so important that decisions about forming them were vetted at high levels, sometimes by Mr. Zuckerberg and Sheryl Sandberg, the chief operating officer, Facebook officials said. While many of the partnerships were announced publicly, the details of the sharing arrangements typically were confidential.

Facebook also allowed Spotify, Netflix and the Royal Bank of Canada to read, write and delete users’ private messages, and to see all participants on a thread — privileges that appeared to go beyond what the companies needed to integrate Facebook into their systems, the records show. Facebook acknowledged that it did not consider any of those three companies to be service providers. Spokespeople for Spotify and Netflix said those companies were unaware of the broad powers Facebook had granted them. A spokesman for Netflix said Wednesday that it had used the access only to enable customers to recommend TV shows and movies to their friends.

A Royal Bank of Canada spokesman disputed that the bank had had any such access. (Aspects of some sharing partnerships, including those with the Royal Bank of Canada and Bing, were first reported by The Wall Street Journal.)

Spotify, which could view messages of more than 70 million users a month, still offers the option to share music through Facebook Messenger. But Netflix and the Canadian bank no longer needed access to messages because they had deactivated features that incorporated it.

These were not the only companies that had special access longer than they needed it. Yahoo, The Times and others could still get Facebook users’ personal information in 2017.

Yahoo could view real-time feeds of friends’ posts for a feature that the company had ended in 2012. A Yahoo spokesman declined to discuss the partnership in detail but said the company did not use the information for advertising. The Times — one of nine media companies named in the documents — had access to users’ friend lists for an article-sharing application it had discontinued in 2011. A spokeswoman for the news organization said it was not obtaining any data.

Facebook’s internal records also revealed more about the extent of sharing deals with over 60 makers of smartphones, tablets and other devices, agreements first reported by The Times in June.

Facebook empowered Apple to hide from Facebook users all indicators that its devices were asking for data. Apple devices also had access to the contact numbers and calendar entries of people who had changed their account settings to disable all sharing, the records show.

Apple officials said they were not aware that Facebook had granted its devices any special access. They added that any shared data remained on the devices and was not available to anyone other than the users.


Amazon error allowed Alexa user to eavesdrop on another home

A user of Amazon’s Alexa voice assistant in Germany got access to more than a thousand recordings from another user because of “a human error” by the company.

The customer had asked to listen back to recordings of his own activities made by Alexa but he was also able to access 1,700 audio files from a stranger when Amazon sent him a link, German trade publication c’t reported.

On the recordings, a man and a female companion could be overheard in his home and the magazine was able to identify and contact him through the recorded information, according to the report.

Facebook has Filed a Patent To Calculate Your Future Location

Facebook has filed several patent applications with the U.S. Patent and Trademark Office for technology that uses your location data to predict where you’re going and when you’re going to be offline.

A May 30, 2017, Facebook application titled “Offline Trajectories” describes a method to predict where you’ll go next based on your location data. The technology described in the patent would calculate a “transition probability based at least in part on previously logged location data associated with a plurality of users who were at the current location.” In other words, the technology could also use the data of other people you know, as well as that of strangers, to make predictions. If the company could predict when you are about to be in an offline area, Facebook content “may be prefetched so that the user may have access to content during the period where there is a lack of connectivity.”
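A minimal sketch of how such a "transition probability" might be estimated is shown below, assuming a simple first-order model over logged location sequences; the function name and data layout are illustrative, not taken from the patent:

```python
from collections import Counter, defaultdict

def transition_probabilities(location_logs):
    """Estimate P(next location | current location) from logged sequences.

    location_logs: one list of visited places per user, in time order,
    e.g. [["home", "gym", "office"], ["home", "office"]].
    """
    pair_counts = defaultdict(Counter)
    for sequence in location_logs:
        # Count each consecutive (current, next) pair across all users.
        for current, nxt in zip(sequence, sequence[1:]):
            pair_counts[current][nxt] += 1
    # Normalise counts into probabilities per current location.
    return {
        current: {nxt: n / sum(counter.values()) for nxt, n in counter.items()}
        for current, counter in pair_counts.items()
    }
```

With enough pooled logs, a high-probability next location in a known dead zone is exactly the signal that would trigger prefetching content before connectivity drops.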

Another Facebook patent application titled “Location Prediction Using Wireless Signals on Online Social Networks” describes how tracking the strength of Wi-Fi, Bluetooth, cellular, and near-field communication (NFC) signals could be used to estimate your current location, in order to anticipate where you will go next. This “background signal” information is used as an alternative to GPS because, as the patent describes, it may provide “the advantage of more accurately or precisely determining a geographic location of a user.” The technology could learn the category of your current location (e.g., bar or gym), the time of your visit to the location, the hours that entity is open, and the popular hours of the entity.

Yet another Facebook patent application, “Predicting Locations and Movements of Users Based on Historical Locations for Users of an Online System,” further details how location data from multiple people would be used to glean location and movement trends and to model location chains. According to the patent application, these could be used for a “variety of applications,” including “advertising to users based on locations and for providing insights into the movements of users.” The technology could even differentiate movement trends among people who live in a city and who are just visiting a city.

Internal Emails Show Facebook Weighing the Privacy Risks of Quietly Collecting Call and Text Records From Its Android Users—Then Going Ahead Anyway

Earlier this year, many Android users were shocked to discover that Facebook had been collecting a record of their call and SMS history, as revealed by the company’s data download tool. Now, internal emails released by the UK Parliament show how the decision was made internally.

According to the emails, developers knew the data was sensitive, but they still pushed to collect it as a way of expanding Facebook’s reach. The emails show Facebook’s growth team looking to call log data as a way to improve Facebook’s algorithms as well as to locate new contacts through the “People You May Know” feature. Notably, the project manager recognized it as “a pretty high-risk thing to do from a PR perspective,” but that risk seems to have been overwhelmed by the potential user growth.

Initially, the feature was intended to require users to opt in, typically through an in-app pop-up dialog box. But as developers looked for ways to get users signed up, it became clear that Android’s data permissions could be manipulated to automatically enroll users if the new feature was deployed in a certain way.

AI Mistakes Ad On a Bus For an Actual CEO, Then Publicly Shames Her For ‘Jaywalking’

Since last year, many Chinese cities have cracked down on jaywalking by investing in facial recognition systems and AI-powered surveillance cameras. Jaywalkers are identified and shamed by displaying their photographs on large public screens… Developments are also underway to engage the country’s mobile network operators and social media platforms, such as Tencent Holdings’ WeChat and Sina Weibo, to establish a system in which offenders will receive personal text messages as soon as they are caught violating traffic rules….

Making a compelling case for change is the recent experience of Dong Mingzhu, chairwoman of China’s biggest maker of air conditioners Gree Electric Appliances, who found her face splashed on a huge screen erected along a street in the port city of Ningbo… That artificial intelligence-backed surveillance system, however, erred in capturing Dong’s image on Wednesday from an advertisement on the side of a moving bus. The traffic police in Ningbo, a city in the eastern coastal province of Zhejiang, were quick to recognise the mistake, writing in a post on microblog Sina Weibo on Wednesday that it had deleted the snapshot. It also said the surveillance system would be completely upgraded to cut incidents of false recognition in future.

My devices are sending and receiving data every two seconds, sometimes even when I sleep

When I decided to record every time my phone or laptop contacted a server on the internet, I knew I’d get a lot of data, but I honestly didn’t think it would reveal nearly 300,000 requests in a single week.

On average, that’s about one request every two seconds.

Are your devices sending and receiving data when you’re not using them?

They sure are. The quietest times fall — predictably — overnight. But even while I’m sleeping my devices are pretty busy talking to various companies. For example, here are the 841 times my devices made contact with 46 different domains between 10pm and 6:30am on the second night of the experiment. Most of these requests are background updates for things like my email and calendar or synchronisation that various apps like Dropbox or iCloud perform.
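As a rough illustration of the tallying behind those numbers, here is a sketch that counts requests per domain inside an overnight window; the log format is an assumption, not taken from the article:

```python
from collections import Counter
from datetime import datetime

def overnight_requests(request_log, start="22:00", end="06:30"):
    """Count requests per domain that fall in an overnight window.

    request_log: iterable of (iso_timestamp, domain) pairs.
    The window wraps past midnight, so a request matches if it
    occurs after `start` OR before `end`.
    """
    start_t = datetime.strptime(start, "%H:%M").time()
    end_t = datetime.strptime(end, "%H:%M").time()
    counts = Counter()
    for timestamp, domain in request_log:
        t = datetime.fromisoformat(timestamp).time()
        if t >= start_t or t <= end_t:
            counts[domain] += 1
    return counts
```

Summing the counter's values over one night and dividing by the window length gives the "one request every two seconds" style average quoted above.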

But exactly what each of them is doing is quite difficult to tell.

French Officer Caught Selling Access To State Surveillance Systems

“A French police officer was charged and arrested last week for selling confidential data on the dark web in exchange for Bitcoin,” reports ZDNet. French authorities caught him after they took down the “Black Hand” dark web marketplace. Sifting through the marketplace data, they found French police documents sold on the site. All the documents had unique identifiers, which investigators used to track down the officer, who was selling the data under the name Haurus.

Besides selling access to official documents, investigators found that he also ran a service to track the location of mobile devices based on a supplied phone number. He advertised the system as a way to track spouses or members of rival criminal gangs. Investigators believe Haurus used French police resources designed to track criminals to run this service. He also advertised a service that told buyers whether they were being tracked by French police and what information officers had on them.

Facebook Filed A Patent To Predict Your Household’s Demographics Based On Family Photos

Facebook has submitted a patent application for technology that would predict who your family and other household members are, based on images and captions posted to Facebook, as well as your device information, like shared IP addresses. The application, titled “Predicting household demographics based on image data,” was originally filed May 10, 2017, and made public today.

The system Facebook proposes in its patent application would use facial recognition and learning models trained to understand text to help Facebook better understand whom you live with and interact with most. The technology described in the patent looks for clues in your profile pictures on Facebook and Instagram, as well as photos of you that you or your friends post.

It would note the people identified in a photo, and how frequently the people are included in your pictures. Then, it would assess information from comments on the photos, captions, or tags (#family, #mom, #kids) — anything that indicates whether someone is a husband, daughter, cousin, etc. — to predict what your family/household actually looks like. According to the patent application, Facebook’s prediction models would also analyze “messaging history, past tagging history, [and] web browsing history” to see if multiple people share IP addresses (a unique identifier for every internet network).
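A crude sketch of two of those signals, shared IP addresses and family hashtags, might look like the following; the tag list and data shapes are hypothetical, not taken from the patent application:

```python
from collections import defaultdict

# Illustrative tag list; the patent does not enumerate specific hashtags.
FAMILY_TAGS = {"#family", "#mom", "#dad", "#kids"}

def candidate_households(login_events):
    """Group user IDs that appear behind the same IP address.

    login_events: iterable of (user_id, ip_address) pairs.
    Returns {ip: users} only for IPs shared by two or more users.
    """
    by_ip = defaultdict(set)
    for user, ip in login_events:
        by_ip[ip].add(user)
    return {ip: users for ip, users in by_ip.items() if len(users) > 1}

def family_tags_in(caption):
    """Return any family-related hashtags found in a photo caption."""
    return {word for word in caption.lower().split() if word in FAMILY_TAGS}
```

In the patent's terms, signals like these would be combined with facial-recognition co-occurrence counts to score how likely two accounts are to belong to one household.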

Scope creep with Australia’s metadata retention

Telecommunications industry group Communications Alliance has revealed details of dozens of state and federal departments and agencies it claims are accessing so-called communications ‘metadata’.

The 2015 legislation that introduced the data retention regime authorised a list of “criminal law-enforcement agencies” to obtain warrant-free access to metadata. Those agencies included federal, state and territory police agencies, a number of anti-corruption bodies, Border Force, the Australian Securities and Investments Commission; and the Australian Competition and Consumer Commission.

However, last month at the hearing of an inquiry into the government’s bill aimed at enhancing police access to encrypted communications services, Communications Alliance CEO John Stanton said that a significantly larger number of organisations were accessing information kept by telcos to meet their data retention obligations.

In addition to the police agencies and other organisations listed in the data retention legislation, the list includes Centrelink, the Australian Taxation Office, Australia Post’s Corporate Security Group, Workplace Health and Safety, WorkSafe Victoria, the Taxi Services Commission and a number of local councils.

Australia’s near-real-time facial recognition system, chilling effects

Civil rights groups have warned a vast, powerful system allowing the near real-time matching of citizens’ facial images risks a “profound chilling effect” on protest and dissent.

The technology – known in shorthand as “the capability” – collects and pools facial imagery from various state and federal government sources, including driver’s licences, passports and visas.

The biometric information can then rapidly – almost in real time – be compared with other sources, such as CCTV footage, to match identities.

The system, chiefly controlled by the federal Department of Home Affairs, is designed to give intelligence and security agencies a powerful tool to deter identity crime, and quickly identify terror and crime suspects.

But it has prompted serious concern among academics, human rights groups and privacy experts. The system sweeps up and processes citizens’ sensitive biometric information regardless of whether they have committed or are suspected of an offence.

Chinese ‘Gait Recognition’ Tech IDs People By How They Walk; Police Have Started Using It on Streets of Beijing and Shanghai

Already used by police on the streets of Beijing and Shanghai, “gait recognition” is part of a push across China to develop artificial-intelligence and data-driven surveillance that is raising concern about how far the technology will go. Huang Yongzhen, the CEO of Watrix, said that its system can identify people from up to 50 meters (165 feet) away, even with their back turned or face covered. This can fill a gap in facial recognition, which needs close-up, high-resolution images of a person’s face to work. “You don’t need people’s cooperation for us to be able to recognize their identity,” Huang said in an interview in his Beijing office. “Gait analysis can’t be fooled by simply limping, walking with splayed feet or hunching over, because we’re analyzing all the features of an entire body.”

Pentagon Wants to Predict Anti-Trump Protests Using Social Media Surveillance

A series of research projects, patent filings, and policy changes indicate that the Pentagon wants to use social media surveillance to quell domestic insurrection and rebellion.

The social media posts of American citizens who don’t like President Donald Trump are the focus of the latest US military-funded research. The research, funded by the US Army and co-authored by a researcher based at the West Point Military Academy, is part of a wider effort by the Trump administration to consolidate the US military’s role and influence on domestic intelligence.

The vast scale of this effort is reflected in a number of government social media surveillance patents granted this year, which relate to a spy program that the Trump administration outsourced to a private company last year. Experts interviewed by Motherboard say that the Pentagon’s new technology research may have played a role in amendments this April to the Joint Chiefs of Staff homeland defense doctrine, which widen the Pentagon’s role in providing intelligence for domestic “emergencies,” including an “insurrection.”

It’s no secret that the Pentagon has funded Big Data research into how social media surveillance can help predict large-scale population behaviours, specifically the outbreak of conflict, terrorism, and civil unrest.

Much of this research focuses on foreign theatres like the Middle East and North Africa — where the 2011 Arab Spring kicked off an arc of protest that swept across the region and toppled governments.

Since then, the Pentagon has spent millions of dollars finding patterns in posts across platforms like Facebook, Twitter, Instagram, Tumblr, and beyond to enable the prediction of major events.

But the Pentagon isn’t just interested in anticipating surprises abroad. The research also appears to be intended for use in the US homeland.

Datasets for the research were collected using the Apollo Social Sensing Tool, a real-time event tracking software that collects and analyses millions of social media posts.

The tool was originally developed under the Obama administration back in 2011 by the US Army Research Laboratory and US Defense Threat Reduction Agency, in partnership with Rensselaer Polytechnic Institute, the University of Illinois, IBM, and Caterva (a social marketing company that in 2013 was folded into a subsidiary of giant US government IT contractor, CSC). Past papers associated with the project show that the tool has been largely tested in foreign theatres like Haiti, Egypt, and Syria.

But the use of the Apollo tool to focus on protests in the US homeland has occurred under the Trump administration. The ‘election’ dataset compiled using Apollo for the 2018 US Army-funded study comprises 2.5 million tweets sent between October 26, 2016, and December 20, 2016, that used the words “Trump,” “Clinton,” and “election.”

Tweets were geolocated to focus on “locations where protests occurred following the election” based on user profiles. Locations were then triangulated against protest data from “online news outlets across the country.”

The millions of tweets were used to make sense of the “frequencies of the protests in 39 cities” using 18 different ways of measuring the “size, structure and geography” of a network, along with two ways of measuring how that network leads a social group to become “mobilized,” or take action.

In short, this means that “the social network can be a predictor of mobilization, which in turn is a predictor of the protest.” This pivotal finding means that extensive real-time monitoring of American citizens’ social media activity can be used to predict future protests.

Suspicious packages spotlight vast mail surveillance system, United States

The U.S. Postal Service regularly photographs the front and back of every piece of U.S. mail, or about 150 billion parcels, envelopes, and postcards every year. A longstanding practice known as the “mail cover” program enables law enforcement to obtain address information and images of the outsides of mail as part of an investigation without the need for a warrant through the Postal Inspection Service, the U.S. Postal Service’s policing arm.

As part of the mail cover program, mail is routinely photographed digitally during sorting and is even available for recipients to preview digitally in some areas. Apart from threats like bombs, the department says its main focus is on mail theft, fraud, and narcotics cases.

Because a mail cover involves reading only information on the outside of the envelope or package, courts have not ruled it a violation of the Fourth Amendment. But that hasn’t stilled concerns about privacy and abuse.

For decades, the relatively obscure program has come under criticism for its lack of protections, for allowing data to be shared in broader cases than postal regulations allow, and for operating largely outside of public view. Critics have also warned that extensive surveillance of someone’s mail, especially combined with other surveillance, could create privacy violations.

After an audit, the Postal Service inspector general determined in 2014 that the Inspection Service did not have “sufficient controls” in place to ensure that its employees followed the agency’s policies in handling national security mail covers.

“Insufficient controls over the mail covers program could hinder the Postal Inspection Service’s ability to conduct effective investigations, lead to public concerns over privacy of mail, and harm the Postal Service’s brand,” it warned in a report.

Mail covers drew fire in the 1970s as part of government surveillance of groups deemed subversive, after a 15-year-old girl was apparently put under surveillance for writing a letter to a radical group. She had reportedly actually intended to contact a similarly named organization as part of a homework assignment.

With 5G, you won’t just be watching video. It’ll be watching you, too

What happens when movies can direct themselves? Remember the last time you felt terrified during a horror movie? Take that moment, and all the suspense leading up to it, and imagine it individually calibrated for you. It’s a terror plot morphing in real time, adjusting the story to your level of attention to lull you into a comfort zone before unleashing a personally timed jumpscare.

Or maybe being scared witless isn’t your idea of fun. Think of a rom-com that keeps itself from going off the rails when it sees you rolling your eyes. Or maybe it tweaks the eye color of that character finally finding true love so it’s closer to your own, a personalized subtlety to make the love-struck protagonist more relatable.

You can thank (or curse) 5G for that.

When most people think of 5G, they’re envisioning an ultra-fast, high-bandwidth connection that lets you download seasons of your favorite shows in minutes. But 5G’s possibilities go way beyond that, potentially reinventing how we watch video, and opening up a mess of privacy uncertainties.

“Right now you make a video much the same way you did for TV,” Dan Garraway, co-founder of interactive video company Wirewax, said in an interview this month. “The dramatic thing is when you turn video into a two-way conversation. Your audience is touching and interacting inside the experience and making things happen as a result.” The personalized horror flick or tailored rom-com? They would hinge on interactive video layers that use emotional analysis, based on your phone’s front-facing camera, to adjust what you’re watching in real time. You may think it’s far-fetched, but one of the key traits of 5G is an ultra-responsive connection with virtually no lag, meaning the network and systems would be fast enough to react to your physical responses.

Before you cast a skeptical eye at 5G, consider how the last explosion of mobile connectivity, from 3G to 4G LTE, changed how we consumed video. Being able to watch — and in YouTube’s case, upload — video on a mobile device reimagined how we watch TV and the types of programming that are big business. A decade ago, when Netflix was about two years into its transition from DVD mailings to streaming, its annual revenue was $1.4 billion. This year it’s on track for more than 10 times that ($15.806 billion).

5G’s mobility can bring video experiences to new locations. Garraway gives an example straight out of Minority Report: entering a Gap retail store and being greeted by name. But taken further, the store could develop a three-dimensional video concierge for your phone — a pseudo-hologram that helps you find what you’re looking for. With 5G’s ability to make virtual and augmented reality more accessible, you could get a snapshot of what an outfit might look like on you without having to try it on.

Where things get crazy — and creepy — is imagining how 5G enables video to react to your involuntary cues and all the data you unconsciously provide. A show could mimic the weather or time of day to more closely match the atmosphere in real life.

For all the eye-popping possibilities, 5G unleashes a tangle of privacy questions. 5G could leverage every piece of visual information a phone can see on cameras front and back in real time. This level of visual imagery collection could pave the way for video interaction to happen completely automatically.

It’s also a potential privacy nightmare. But the lure of billions of dollars has already encouraged companies to make privacy compromises.

And that may make it feel like your personalized horror show is already here.

Now Apps Can Track You Even After You Uninstall Them

If it seems as though the app you deleted last week is suddenly popping up everywhere, it may not be mere coincidence. Companies that cater to app makers have found ways to game both iOS and Android, enabling them to figure out which users have uninstalled a given piece of software lately—and making it easy to pelt the departed with ads aimed at winning them back.

Adjust, AppsFlyer, MoEngage, Localytics, and CleverTap are among the companies that offer uninstall trackers, usually as part of a broader set of developer tools. Their customers include T-Mobile US, Spotify Technology, and Yelp. (And Bloomberg Businessweek parent Bloomberg LP, which uses Localytics.) Critics say they’re a fresh reason to reassess online privacy rights and limit what companies can do with user data.

Uninstall tracking exploits a core element of Apple Inc.’s and Google’s mobile operating systems: push notifications. Developers have always been able to use so-called silent push notifications to ping installed apps at regular intervals without alerting the user—to refresh an inbox or social media feed while the app is running in the background, for example. But if the app doesn’t ping the developer back, the app is logged as uninstalled, and the uninstall tracking tools add those changes to the file associated with the given mobile device’s unique advertising ID, details that make it easy to identify just who’s holding the phone and advertise the app to them wherever they go.
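The ping-and-silence logic described above can be sketched as follows; `send_silent_push` and the device record layout are hypothetical stand-ins for the platform push services the trackers actually use:

```python
def infer_uninstalls(devices, send_silent_push):
    """Flag devices whose app no longer acknowledges a silent push.

    devices: list of dicts, each with an "advertising_id" key.
    send_silent_push: callable taking an advertising ID and returning
    True if the installed app pinged back, False otherwise.
    """
    uninstalled = []
    for device in devices:
        if send_silent_push(device["advertising_id"]):
            device["status"] = "installed"
        else:
            # No ping back: the app is presumed removed, and the device's
            # advertising ID can be handed to ad networks for win-back ads.
            device["status"] = "uninstalled"
            uninstalled.append(device["advertising_id"])
    return uninstalled
```

The key point is that the inference rides on the advertising ID, which outlives the app itself, so the "uninstalled" flag remains linkable to the same person across other apps and ad networks.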

At its best, uninstall tracking can be used to fix bugs or otherwise refine apps without having to bother users with surveys or more intrusive tools. But the ability to abuse the system beyond its original intent exemplifies the bind that accompanies the modern internet, says Jeremy Gillula of the Electronic Frontier Foundation. To participate, users must typically agree to share their data freely, probably forever, not knowing exactly how it may be used down the road. “As an app developer, I would expect to be able to know how many people have uninstalled an app,” he says. “I would not say that, as an app developer, you have a right to know exactly who installed and uninstalled your app.”

A Future Where Everything Becomes a Computer Is As Creepy As You Feared

More than 40 years ago, Bill Gates and Paul Allen founded Microsoft with a vision for putting a personal computer on every desk. […] In recent years, the tech industry’s largest powers set their sights on a new target for digital conquest. They promised wild conveniences and unimaginable benefits to our health and happiness. There’s just one catch, which often goes unstated: If their novelties take off without any intervention or supervision from the government, we could be inviting a nightmarish set of security and privacy vulnerabilities into the world. And guess what. No one is really doing much to stop it. The industry’s new goal? Not a computer on every desk nor a connection between every person, but something grander: a computer inside everything, connecting everyone.

Cars, door locks, contact lenses, clothes, toasters, refrigerators, industrial robots, fish tanks, sex toys, light bulbs, toothbrushes, motorcycle helmets — these and other everyday objects are all on the menu for getting “smart.” Hundreds of small start-ups are taking part in this trend — known by the marketing catchphrase “the internet of things” — but like everything else in tech, the movement is led by giants, among them Amazon, Apple and Samsung. [American cryptographer and computer security professional Bruce Schneier] argues that the economic and technical incentives of the internet-of-things industry do not align with security and privacy for society generally. Putting a computer in everything turns the whole world into a computer security threat. […] Mr. Schneier says only government intervention can save us from such emerging calamities. “I can think of no industry in the past 100 years that has improved its safety and security without being compelled to do so by government.”

Facial recognition used to identify and catalogue animals

Salmon are just the latest entry in a growing cornucopia of animal faces loaded into databases. For some animals, the biometric data is being used to aid conservation efforts. For others, the resulting AI could help ward off poachers. Part creepy and part very cute, this monitoring can both help protect animal populations and ensure safe, traceable livestock for developing communities…

U.K. researchers are using online resources like Flickr and Instagram to help build and strengthen a database that will eventually help track global tiger populations in real time. Once collected, the photos are analyzed by everyday people in a free app called Wildsense… The mighty lion is being surveilled too. Conservationists and wildlife teachers are using facial recognition to keep tabs on a database of over 1,000 lions… Wildlife experts are tracking elephants to protect them from encroaching poachers. Using Google’s Cloud AutoML Vision machine learning software, the technology will uniquely identify elephants in the wild. According to the Evening Standard, the tech will even send out an alert if it detects poachers in the same frame.

The story of whale facial tracking is one of crowdsourcing success. After struggling to distinguish individual whales on her own, marine biologist Christin Khan uploaded the photos to data-competition site Kaggle and, within four months, data-science company Deepsense was able to identify individual whale faces with 87% accuracy. Since then, detection rates have steadily improved and are helping conservationists track and monitor the struggling aquatic giant.

U.S. researchers are trying to protect “the world’s most endangered animal” with LemurFaceID, which can differentiate between two lemur faces with 97% accuracy. But “In the livestock surveillance arms race China is definitely leading the charge,” the article notes, citing e-commerce giant JD.com and its use of facial recognition to monitor herds of pigs to detect their age, weight, and diet.

And one Chinese company even offers a blockchain-based chicken tracking system (codenamed “GoGo Chicken”) with an app that can link a grocery store chicken to “its birthplace, what food it ate and how many steps it walked during its life.”

Bloomberg’s Spy Chip Story Reveals the Murky World of National Security Reporting

Chinese spies reportedly infiltrated the supply chain and installed tiny chips the size of a pencil tip on the motherboards built by Supermicro, which are used in data center servers across the U.S. tech industry — from Apple to Amazon. That chip can compromise data on the server, allowing China to spy on some of the world’s most wealthy and powerful companies. Apple, Amazon and Supermicro — and the Chinese government — strenuously denied the allegations. Apple also released its own standalone statement later in the day, as did Supermicro.

Welcome to the murky world of national security reporting.

I’ve covered cybersecurity and national security for about five years, most recently at CBS, where I reported exclusively on several stories — including the U.S. government’s covert efforts to force tech companies to hand over their source code in an effort to find vulnerabilities and conduct surveillance. And last year I revealed that the National Security Agency had its fifth data breach in as many years, and classified documents showed that a government data collection program was far wider than first thought and was collecting data on U.S. citizens.

Even with this story, my gut is mixed.

In the aftermath of the disclosure of PRISM, the NSA’s data collection program that implicated several tech companies — including Apple, but not Amazon — the companies came out fighting, vehemently denying any involvement or connection. Was it a failure of reporting? Partially, yes. But the companies also had plausible deniability by cherry-picking what they rebuffed. Despite a claim by the government that PRISM had “direct access” to tech companies’ servers, the companies responded that this wasn’t true. They didn’t, however, deny indirect access — which the companies wouldn’t be allowed to say in any case.

Facebook Is Teeming With Fake Accounts Created By Undercover Cops

In the summer of 2015, as Memphis exploded with protests over the police killing of a 19-year-old man, activists began hearing on Facebook from someone called Bob Smith. The name was generic, and so was his profile picture: a Guy Fawkes mask, the symbol of anti-government dissent. Smith acted as if he supported the protesters, and, slowly, they let him into their online community. Over the next three years, dozens of them accepted his friend requests, allowing him to observe private discussions over marches, rallies and demonstrations.

But Smith was not real. He was the creation of a white detective in the Memphis Police Department’s Office of Homeland Security whose job was to keep tabs on local activists across the spectrum, from Black Lives Matter to Confederate sympathizers.

The detective, Tim Reynolds, outed himself in August under questioning by the American Civil Liberties Union of Tennessee, which sued the police department for allegedly violating a 1978 agreement that prohibited police from conducting surveillance of lawful protests. The revelation validated many activists’ distrust of local authorities. It also provided a rare look into the ways American law enforcement operates online, taking advantage of a loosely regulated social media landscape — and citizens’ casual relinquishing of their privacy — to expand monitoring of the public.

The proliferation of fake Facebook accounts and other means of social media monitoring — including the use of software to crunch data about people’s online activity — illustrates a policing “revolution” that has allowed authorities to not only track people but also map out their networks, said Rachel Levinson-Waldman, senior counsel at New York University School of Law’s Brennan Center for Justice.

She is among many scholars who worry that expanded social media surveillance could make people less likely to engage in online activities protected by the First Amendment, from sharing their opinions to organizing protests of the government. But there are few laws governing this kind of monitoring. Few courts have taken up the issue. And most police departments don’t have policies on how officers can use social media for investigations, according to Levinson-Waldman’s research.

“It’s pretty open territory,” she said.