Resources

US courts must stop shielding government surveillance programs from accountability

Imagine the government has searched your home without a warrant or probable cause, rifling through your files, your bedroom dresser, your diary. You sue, arguing that the public record shows it violated your Fourth Amendment rights. The government claims that it has a defense, but that its defense is secret. The court dismisses the case.

That’s precisely what the federal government has increasingly said it can do in cases related to national security – under the so-called “state secrets privilege”. It can violate constitutional rights, and then defeat any effort at accountability by claiming that its defense is secret – without even showing its evidence to a court behind closed doors.

Children May Be Losing the Equivalent of One Night’s Sleep a Week From Social Media Use, Study Suggests

Children under 12 may be losing the equivalent of one night’s sleep every week due to excessive social media use, a new study suggests. Insider reports:
Almost 70% of the 60 children under 12 surveyed by De Montfort University in Leicester, UK, said they used social media for four hours a day or more. Two thirds said they used social media apps in the two hours before going to bed. The study also found that 12.5% of the children surveyed were waking up in the night to check their notifications.

Psychology lecturer John Shaw, who headed the study, said children are supposed to sleep between nine and 11 hours a night per NHS guidelines, but those surveyed reported sleeping an average of 8.7 hours nightly. He said: “The fear of missing out, which is driven by social media, is directly affecting their sleep. They want to know what their friends are doing, and if you’re not online when something is happening, it means you’re not taking part in it. And it can be a feedback loop. If you are anxious, you are more likely to be on social media, and you are more anxious as a result of that. And you’re looking at something that’s stimulating and delaying sleep.”
“TikTok had the most engagement from the children, with 90% of those surveyed saying they used the app,” notes Insider. “Snapchat was used by 84%, while just over half those surveyed said they used Instagram.”

Sleepless Nights Make People More Selfish and Asocial, Study Finds

A study has found that losing just one hour of rest can sap people’s desire to help others, even relatives and close friends. The team noted that a bad night appeared to dampen activity in the part of the brain that encourages social behavior. “We discovered that sleep loss acts as a trigger of asocial behavior, reducing the innate desire of humans to help one another,” said Prof Matthew Walker, co-author of the study at the University of California, Berkeley. “In a way, the less sleep you get, the less social and more selfish you become.” Writing in the journal PLoS Biology, the team suggest that a chronic sleep deficit could harm social bonds and compromise the altruistic instincts that shape society. “Considering the essentiality of humans helping in maintaining cooperative, civilized societies, together with the robust erosion of sleep time over the last 50 years, the ramifications of these discoveries are highly relevant to how we shape the societies we wish to live in,” said Walker.

The team examined the willingness of 160 participants to help others with a “self-reported altruism questionnaire”, which they completed after a night’s sleep. Participants responded to different social scenarios on a scale from “I would stop to help” to “I would ignore them.” In one experiment involving 24 participants, the researchers compared answers from the same person after a restful night and after 24 hours without sleep. The results revealed a 78% decline in self-reported eagerness to help others when tired. The team then performed brain scans of those participants and found that a short night was associated with reduced activity in the social cognitive brain network, the circuitry involved in social behavior. Participants were as reluctant to assist friends and family as strangers, the researchers said. “A lack of sleep impaired the drive to help others regardless of whether they were asked to help strangers or close relatives. That is, sleep loss triggers asocial, anti-helping behavior of a broad and indiscriminate impact,” said Walker.

To determine whether altruism takes a hit in the real world, the team then tracked more than 3 million charitable donations in the US before and after clocks were shifted an hour forward to daylight saving time, a transition that shortens a night’s sleep. They found a 10% drop in donations after the transition. “Our study adds to a growing body of evidence demonstrating that inadequate sleep not only harms the mental and physical wellbeing of an individual but also compromises the bonds between individuals, and even the altruistic sentiment of an entire nation,” said Walker. Luckily, we can catch up on sleep. Walker said: “The positive note emerging from all our studies is that once sleep is adequate and sufficient, the desire to help others is restored. But it’s important to note that it is not only sleep duration that is relevant to helping. We found that the factor that was most relevant was actually sleep quality, above and beyond sleep quantity,” he added.

YouTuber Trains AI On 4Chan’s Most Hateful Board

YouTuber Yannic Kilcher trained an AI language model using three years of content from 4chan’s Politically Incorrect (/pol/) board, a place infamous for its racism and other forms of bigotry. After implementing the model in ten bots, Kilcher set the AI loose on the board — and it unsurprisingly created a wave of hate. In the space of 24 hours, the bots wrote 15,000 posts that frequently included or interacted with racist content. They represented more than 10 percent of posts on /pol/ that day, Kilcher claimed.

Nicknamed GPT-4chan (after OpenAI’s GPT-3), the model learned to not only pick up the words used in /pol/ posts, but an overall tone that Kilcher said blended “offensiveness, nihilism, trolling and deep distrust.” The video creator took care to dodge 4chan’s defenses against proxies and VPNs, and even used a VPN to make it look like the bot posts originated from the Seychelles. The AI made a few mistakes, such as blank posts, but was convincing enough that it took roughly two days for many users to realize something was amiss. Many forum members only noticed one of the bots, according to Kilcher, and the model created enough wariness that people accused each other of being bots days after Kilcher deactivated them.

“It’s a reminder that trained AI is only as good as its source material,” concludes the report.

Spyware Scandals Are Ripping Through Europe

The ripple effects of Greece’s spyware scandal are reaching the heart of the European Union. Over the past 13 months, it has been revealed that spyware targeted opposition leaders, journalists, lawyers and activists in France, Spain, Hungary, Poland and even staff within the European Commission, the EU’s cabinet-style government, between 2019 and 2021. The bloc has already set up an inquiry into its own use of spyware, but even as the 38-person committee works toward producing a report for early 2023, new scandals are quickly mounting. What sets the scandal in Greece apart is the company behind the spyware that was used. Until now, the surveillance software in every EU scandal could be traced back to one company: the notorious NSO Group. Yet the spyware stalking the phone of Greek journalist Thanasis Koukakis was made by Cytrox, a company founded in the small European nation of North Macedonia and acquired in 2017 by Tal Dilian, an entrepreneur who achieved notoriety for driving a high-tech surveillance van around the island of Cyprus and showing a Forbes journalist how it could hack into passing people’s phones.

In that interview, Dilian said he had acquired Cytrox and absorbed the company into his intelligence firm Intellexa, which is now thought to be based in Greece. The arrival of Cytrox into Europe’s ongoing scandal shows the problem is bigger than just the NSO Group. The bloc has a thriving spyware industry of its own. As the NSO Group struggles with intense scrutiny and a US blacklisting, its less well-known European rivals are jostling to take its clients, researchers say. Over the past two months, Cytrox is not the only local company to generate headlines for hacking devices within the bloc. In June, Google discovered the Italian spyware vendor RCS Lab was targeting smartphones in Italy and Kazakhstan. Alberto Nobili, RCS’ managing director, told WIRED that the company condemns the misuse of its products but declined to comment on whether the cases cited by Google were examples of misuse. “RCS personnel are not exposed, nor participate in any activities conducted by the relevant customers,” he says. More recently, in July, spyware made by Austria’s DSIRF was detected by Microsoft hacking into law firms, banks, and consultancies in Austria, the UK, and Panama.

‘Greenwashing’: Tree-Planting Schemes Are Just Creating Tree Cemeteries

Thousands of cylindrical plastic tree guards line the grassland here, so uniform that, from a distance, it looks like a war memorial. This open space at the edge of King’s Lynn, a quiet market town in the east of England, was supposed to be a new carbon sink for Norfolk, offering 6,000 trees to tackle the climate crisis. The problem is that almost all of the trees that the guards were supposed to protect have died.

Not only were the trees planted at the wrong time of year, they were planted on species-rich grassland that was already carbon negative, and which has now been mostly destroyed by the tree planting. Environmentalists also point out that the trees were planted so shallowly that most were unlikely to ever take root.

Facebook Misinformation Is Bad Enough, The Metaverse Will Be Worse

The Rand Corporation is an American nonprofit think tank. And veliath (Slashdot reader #5,435) spotted its recent warning about “a plausible scenario that could soon take place in the metaverse.”
A political candidate is giving a speech to millions of people. While each viewer thinks they are seeing the same version of the candidate, in virtual reality they are actually each seeing a slightly different version. For each and every viewer, the candidate’s face has been subtly modified to resemble the viewer…. The viewers are unaware of any manipulation of the image. Yet they are strongly influenced by it: Each member of the audience is more favorably disposed to the candidate than they would have been without any digital manipulation.

This is not speculation. It has long been known that mimicry can be exploited as a powerful tool for influence. A series of experiments by Stanford researchers showed that slightly changing the features of an unfamiliar political figure to resemble each voter made people rate that politician more favorably. The experiments took pictures of study participants and real candidates in a mock-up of an election campaign. The pictures of each candidate were modified to resemble each participant. The studies found that even when 40 percent of the participant’s features were blended into the candidate’s face, the participants were entirely unaware the image had been manipulated.
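The blending the Stanford experiments describe is, at its core, a weighted average of two aligned face images. The sketch below is purely illustrative: the function name and the toy 2x2 "images" are made up, and real morphing software aligns facial landmarks before mixing pixels.

```python
def blend_faces(candidate, participant, weight=0.4):
    """Blend `weight` of the participant's pixels into the candidate's image.

    A minimal sketch of the morphing idea: real pipelines align facial
    landmarks first; here we assume pre-aligned grayscale images
    (lists of rows) of identical dimensions.
    """
    return [
        [(1 - weight) * c + weight * p for c, p in zip(crow, prow)]
        for crow, prow in zip(candidate, participant)
    ]

# Toy 2x2 "images": candidate is all-black (0.0), participant all-white (1.0)
candidate = [[0.0, 0.0], [0.0, 0.0]]
participant = [[1.0, 1.0], [1.0, 1.0]]
print(blend_faces(candidate, participant))  # each pixel becomes 0.4
```

At the 40 percent weight the studies used, more than half of every pixel still comes from the candidate, which is consistent with participants failing to notice the manipulation.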

In the metaverse, it’s easy to imagine this type of mimicry at a massive scale.

At the heart of all deception is emotional manipulation. Virtual reality environments, such as Facebook’s (now Meta’s) metaverse, will enable psychological and emotional manipulation of its users at a level unimaginable in today’s media…. We are not even close to being able to defend users against the threats posed by this coming new medium…. In VR, body language and nonverbal signals such as eye gaze, gestures, or facial expressions can be used to communicate intentions and emotions. Unlike verbal language, we often produce and perceive body language subconsciously….

We must not wait until these technologies are fully realized to consider appropriate guardrails for them. We can reap the benefits of the metaverse while minimizing its potential for great harm.

They recommend developing technology that detects the application of this kind of VR manipulation.

“Society did not start paying serious attention to classical social media — meaning Facebook, Twitter, and the like — until things got completely out of hand. Let us not make the same mistake as social media blossoms into the metaverse.”

Ethanol Plants Are Allowed To Pollute More Than Oil Refineries

In 2007, the U.S. Congress mandated the blending of biofuels such as corn-based ethanol into gasoline. One of the top goals: reducing greenhouse gas emissions. But today, the nation’s ethanol plants produce more than double the climate-damaging pollution, per gallon of fuel production capacity, of the nation’s oil refineries, according to a Reuters analysis of federal data. The average ethanol plant emitted 1,187 metric tons of carbon per million gallons of fuel capacity in 2020, the latest year for which data is available. The average oil refinery, by contrast, produced 533 metric tons of carbon.
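A quick back-of-the-envelope check of the "more than double" claim from the two figures quoted above:

```python
# Figures from the Reuters analysis, both in metric tons of CO2
# per million gallons of fuel production capacity (2020)
ethanol_plant = 1187
oil_refinery = 533

ratio = ethanol_plant / oil_refinery
print(f"{ratio:.2f}x")  # about 2.23x, i.e. "more than double"
```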

The ethanol plants’ high emissions result in part from a history of industry-friendly federal regulation that has allowed almost all processors to sidestep the key environmental requirement of the 2007 law, the Renewable Fuel Standard (RFS), according to academics who have studied ethanol pollution and regulatory documents examined by Reuters. The rule requires individual ethanol processors to demonstrate that their fuels result in lower carbon emissions than gasoline. The Environmental Protection Agency (EPA) is charged with writing the regulations to meet the goals set by Congress. For processors, that translates to an EPA requirement that the plants use certain emissions-control processes the agency assumes will result in lower-than-gasoline emissions. But the agency has exempted more than 95% of U.S. ethanol plants from the requirement through a grandfathering provision that excused plants built or under construction before the legislation passed. Today, these plants produce more than 80% of the nation’s ethanol, according to the EPA.

Some of the exempted plants produced much less pollution, including some owned by the same companies producing the highest emissions. The EPA said about a third meet the law’s environmental standard even though they are not required to do so. But as a group, the plants freed from regulation produced 40% more pollution per gallon of fuel capacity, on average, than the plants required to comply, the Reuters analysis found.

Negative-prompt AI-Generated Images of Women Generate Gore and Horror

AI image generators like DALL-E and Midjourney have become an especially buzzy topic lately, and it’s easy to see why. Using machine learning models trained on billions of images, the systems tap into the allure of the black box, creating works that feel both alien and strangely familiar. Naturally, this makes fertile ground for all sorts of AI urban legends, since nobody can really explain how the complex neural networks are ultimately deciding on the images they create. The latest example comes from an AI artist named Supercomposite, who posted disturbing and grotesque generated images of a woman who seems to appear in response to certain queries.

The woman, whom the artist calls “Loab,” was first discovered as a result of a technique called “negative prompt weights,” in which a user tries to get the AI system to generate the opposite of whatever they type into the prompt. To put it simply, different terms can be “weighted” in the dataset to determine how likely they will be to appear in the results. But by assigning the prompt a negative weight, you essentially tell the AI system, “Generate what you think is the opposite of this prompt.” In this case, using a negative-weight prompt on the word “Brando” generated the image of a logo featuring a city skyline and the words “DIGITA PNTICS.” When Supercomposite used the negative weights technique on the words in the logo, Loab appeared. “Since Loab was discovered using negative prompt weights, her gestalt is made from a collection of traits that are equally far away from something,” Supercomposite wrote in a thread on Twitter. “But her combined traits are still a cohesive concept for the AI, and almost all descendent images contain a recognizable Loab.”
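The exact mechanics of the tool Supercomposite used aren't public, but the idea of a negative weight can be sketched abstractly: prompt guidance nudges the model's output along a prompt's direction, and a negative weight pushes it the opposite way. Everything below is illustrative, not the real system: the function name is invented and the two-dimensional vectors are toy stand-ins for a diffusion model's noise predictions.

```python
def weighted_guidance(uncond, cond_terms):
    """Combine an unconditional prediction with weighted conditional ones.

    cond_terms: list of (prediction, weight) pairs. A positive weight pulls
    the output toward that prompt's concept; a negative weight pushes it
    away -- the "negative prompt weight" trick described above.
    """
    out = list(uncond)
    for pred, w in cond_terms:
        for i, (u, p) in enumerate(zip(uncond, pred)):
            out[i] += w * (p - u)  # move along (or against) the prompt direction
    return out

uncond = [0.0, 0.0]
brando = [1.0, 0.0]  # toy direction for the prompt "Brando"
# weight -1.0: steer toward the "opposite" of the prompt
print(weighted_guidance(uncond, [(brando, -1.0)]))  # [-1.0, 0.0]
```

The point of the sketch is that "the opposite of a prompt" is just a direction in the model's representation space, which is why the results can land somewhere no human ever asked for.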

The images quickly went viral on social media, leading to all kinds of speculation on what could be causing the unsettling phenomenon. Most disturbingly, Supercomposite claims that generated images derived from the original image of Loab almost universally veer into the realm of horror, graphic violence, and gore. But no matter how many variations were made, the images all seem to feature the same terrifying woman. “Through some kind of emergent statistical accident, something about this woman is adjacent to extremely gory and macabre imagery in the distribution of the AI’s world knowledge,” Supercomposite wrote.

Facebook Button is Disappearing From Websites as Consumers Demand Better Privacy

Big brands including Best Buy, Ford Motor, Pottery Barn, Nike, Patagonia, Match and Amazon’s video-streaming service Twitch have removed the ability to sign on with Facebook. It’s a marked departure from just a few years ago, when the Facebook login was plastered all over the internet, often alongside buttons that let you sign in with Google, Twitter or LinkedIn. Jen Felch, Dell’s chief digital and chief information officer, said people stopped using social logins for reasons that include concerns over security, privacy and data-sharing.

GPS Jammers Are Being Used to Hijack Trucks and Down Drones

The world’s freight-carrying trucks and ships use GPS-based satellite tracking and navigation systems, reports ZDNet. But “Criminals are turning to cheap GPS jamming devices to ransack the cargo on roads and at sea, a problem that’s getting worse….”
Jammers work by emitting a signal at the same frequency as GPS, just a bit more powerful than the original, overpowering the authentic signal. The typical jammers used for cargo hijackings are able to jam frequencies from up to 5 miles away, rendering GPS tracking and security apparatuses, such as those used by trucking syndicates, totally useless. In Mexico, jammers are used in some 85% of cargo truck thefts. Statistics are harder to come by in the United States, but there can be little doubt the devices are prevalent and widely used. Russia is currently availing itself of the technology to jam commercial planes in Ukraine.
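The reason "just a bit more powerful" is enough: GPS signals travel roughly 20,000 km from orbit and arrive extraordinarily weak at the surface (around -130 dBm is a commonly cited figure). A rough free-space link-budget sketch, with an assumed 10 mW jammer 1 km away, shows why even a cheap device swamps the satellite signal. The transmit power and distance here are illustrative assumptions, not measurements of any real jammer.

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20log10(d) + 20log10(f) + 20log10(4*pi/c)."""
    c = 3e8  # speed of light, m/s
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

GPS_L1_HZ = 1575.42e6      # GPS L1 carrier frequency
GPS_SURFACE_DBM = -130     # typical received GPS signal power at the surface
jammer_tx_dbm = 10         # assumed 10 mW jammer

received = jammer_tx_dbm - fspl_db(1000, GPS_L1_HZ)  # jammer 1 km away
print(f"jammer at receiver: {received:.0f} dBm, "
      f"{received - GPS_SURFACE_DBM:.0f} dB above the GPS signal")
```

Under these assumptions the jammer arrives at roughly -86 dBm, some 40-plus dB (tens of thousands of times) stronger than the GPS signal it drowns out, which is why a receiver miles away can still lose its fix.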

As we’ve covered, the proliferating commercial drone sector is also prey to attack…. During a light show in Hong Kong in 2018, a jamming device caused 46 drones to fall out of the sky, raising public awareness of the issue.

Scope Creep: Woman Whose Rape Kit DNA Led To Her Arrest Sues San Francisco

A rape victim whose DNA from her sexual assault case was used by San Francisco police to arrest her in an unrelated property crime on Monday filed a lawsuit against the city. During a search of a San Francisco Police Department crime lab database, the woman’s DNA was tied to a burglary in late 2021. Her DNA had been collected and stored in the system as part of a 2016 domestic violence and sexual assault case, then-District Attorney Chesa Boudin said in February in a shocking revelation that raised privacy concerns. “This is government overreach of the highest order, using the most unique and personal thing we have — our genetic code — without our knowledge to try and connect us to crime,” the woman’s attorney, Adante Pointer, said in a statement.

The revelation prompted a national outcry from advocates, law enforcement, legal experts and lawmakers. Advocates said the practice could affect victims’ willingness to come forward to law enforcement authorities. Federal law already prohibits the inclusion of victims’ DNA in the national Combined DNA Index System. There is no corresponding law in California to prohibit local law enforcement databases from retaining victims’ profiles and searching them years later for entirely different purposes.

Boudin said the report was found among hundreds of pages of evidence against a woman who had been recently charged with a felony property crime. After learning the source of the DNA evidence, Boudin dropped the felony property crime charges against the woman. The police department’s crime lab stopped the practice shortly after receiving a complaint from the district attorney’s office and formally changed its operating procedure to prevent the misuse of DNA collected from sexual assault victims, Police Chief Bill Scott said. Scott said at a police commission meeting in March that he had discovered 17 crime victim profiles, 11 of them from rape kits, that were matched as potential suspects using a crime victims database during unrelated investigations. Scott said he believes the only person arrested was the woman who filed the lawsuit Monday.

World Heading Into ‘Uncharted Territory of Destruction,’ Says Climate Report

Despite intensifying warnings in recent years, governments and businesses have not been changing fast enough, according to the United in Science report published on Tuesday. The consequences are already being seen in increasingly extreme weather around the world, and we are in danger of provoking “tipping points” in the climate system that will mean more rapid and in some cases irreversible shifts.

Recent flooding in Pakistan, which the country’s climate minister claimed had covered a third of the country in water, is the latest example of extreme weather that is devastating swathes of the globe. The heatwave across Europe including the UK this summer, prolonged drought in China, a megadrought in the US and near-famine conditions in parts of Africa also reflect increasingly prevalent extremes of weather. The secretary general of the United Nations, António Guterres, said: “There is nothing natural about the new scale of these disasters. They are the price of humanity’s fossil fuel addiction. This year’s United in Science report shows climate impacts heading into uncharted territory of destruction.”

Plastic Might Be Making You Obese

The global obesity epidemic is getting worse, especially among children, with rates of obesity rising over the past decade and shifting to earlier ages. In the US, roughly 40% of today’s high school students were overweight by the time they started high school. Globally, the incidence of obesity has tripled since the 1970s, with fully one billion people expected to be obese by 2030. The consequences are grave, as obesity correlates closely with high blood pressure, diabetes, heart disease and other serious health problems. Despite the magnitude of the problem, there is still no consensus on the cause, although scientists do recognize many contributing factors, including genetics, stress, viruses and changes in sleeping habits. Of course, the popularity of heavily processed foods — high in sugar, salt and fat — has also played a role, especially in Western nations, where people on average consume more calories per day now than 50 years ago. Even so, recent reviews of the science conclude that much of the huge rise in obesity globally over the past four decades remains unexplained.

An emerging view among scientists is that one major overlooked component in obesity is almost certainly our environment — in particular, the pervasive presence within it of chemicals which, even at very low doses, act to disturb the normal functioning of human metabolism, upsetting the body’s ability to regulate its intake and expenditure of energy. Some of these chemicals, known as “obesogens,” directly boost the production of specific cell types and fatty tissues associated with obesity. Unfortunately, these chemicals are used in many of the most basic products of modern life including plastic packaging, clothes and furniture, cosmetics, food additives, herbicides and pesticides. Ten years ago the idea of chemically induced obesity was something of a fringe hypothesis, but not anymore.

This law makes it illegal for companies to collect third-party data to profile you. But they do anyway.

When you purchase a product or service from a company, fill out an online form, or sign up for a newsletter, you might provide only the necessary data such as your name, email, delivery address and/or payment information.

That company may then turn to other retailers or data brokers to purchase or exchange extra data about you. This could include your age, family, health, habits and more.

This allows them to build a more detailed individual profile on you, which helps them predict your behaviour and more precisely target you with ads.
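In code terms, this kind of enrichment is just a join on a shared identifier such as an email address. The records and broker attributes below are entirely hypothetical, sketched to show how a sparse first-party record becomes a detailed profile.

```python
# Hypothetical record a company collects directly from a customer
first_party = {"email": "jane@example.com", "name": "Jane Doe", "postcode": "2000"}

# Hypothetical extra attributes purchased from a data broker, keyed by email
broker_data = {
    "jane@example.com": {
        "age_bracket": "35-44",
        "household": "2 adults, 1 child",
        "interests": ["fitness", "home loans"],
    },
}

def enrich(record, broker):
    """Merge broker-sourced attributes into a first-party record.

    A minimal sketch of the "data enrichment" practice described above:
    look the customer up by a shared identifier and fold the extra
    attributes into their profile.
    """
    extra = broker.get(record["email"], {})
    return {**record, **extra}

profile = enrich(first_party, broker_data)
print(sorted(profile))  # the profile now carries broker-sourced fields too
```

The customer only ever handed over the first three fields; the rest arrives through the back door, which is exactly the practice the law discussed below targets.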

For almost ten years, there has been a law in Australia that makes this kind of data enrichment illegal if a company can “reasonably and practicably” request that information directly from the consumer. And at least one major data broker has asked the government to “remove” this law.

The burning question is: why is there not a single published case of this law being enforced against companies “enriching” customer data for profiling and targeting purposes?

Facebook Engineers: We Have No Idea Where We Keep All Your Personal Data

In March, two veteran Facebook engineers found themselves grilled about the company’s sprawling data collection operations in a hearing for the ongoing lawsuit over the mishandling of private user information stemming from the Cambridge Analytica scandal.

The hearing, a transcript of which was recently unsealed, was aimed at resolving one crucial issue: What information, precisely, does Facebook store about us, and where is it? The engineers’ response will come as little relief to those concerned with the company’s stewardship of billions of digitized lives: They don’t know.

The dispute over where Facebook stores data arose when, as part of the litigation, now in its fourth year, the court ordered Facebook to turn over information it had collected about the suit’s plaintiffs. The company complied but provided data consisting mostly of material that any user could obtain through the company’s publicly accessible “Download Your Information” tool.

Facebook contended that any data not included in this set was outside the scope of the lawsuit, ignoring the vast quantities of information the company generates through inferences, outside partnerships, and other nonpublic analysis of our habits — parts of the social media site’s inner workings that are obscure to consumers. Briefly, what we think of as “Facebook” is in fact a composite of specialized programs that work together when we upload videos, share photos, or get targeted with advertising. The social network wanted to keep data storage in those nonconsumer parts of Facebook out of court.

In 2020, the judge disagreed with the company’s contention, ruling that Facebook’s initial disclosure had indeed been too sparse and that the company must reveal data obtained through its oceanic ability to surveil people across the internet and make monetizable predictions about their next moves.

Facebook’s stonewalling has been revealing on its own, providing variations on the same theme: It has amassed so much data on so many billions of people and organized it so confusingly that full transparency is impossible on a technical level.

Overrun by Influencers, Historic Sites Are Banning TikTok Creators in Nepal

They come in hordes, strike funny poses, dance to loud music, trample over crops, and often stir up unmanageable crowds that cause traffic jams. TikTok creators in Nepal have earned a reputation for disrespecting religious and historic places in their quest to create viral videos, and are now facing a backlash. Over the last two years, several prominent tourist and religious sites in Nepal have erected “No TikTok” signs to keep creators from shooting at the premises.

These sites include the Buddhist pilgrimage site Lumbini, Kathmandu’s famous Boudhanath Stupa, Ram Janaki Temple in Janakpur, and Gadhimai temple in Bara, among others. According to authorities, officials keep a close eye on these places and rule-breakers are warned or asked to leave. “Making TikTok by playing loud music creates a nuisance for pilgrims from all over the world who come to the birthplace of Gautama Buddha,” Sanuraj Shakya, a spokesperson for the Lumbini Development Trust, which manages the shrines in Lumbini, told Rest of World. “We have banned TikTok-making in and around the sacred garden, where the main temples are located.”

Too Many Servers Could Mean No New Homes In Parts of the UK

Data centers have caused skyrocketing power demand in parts of London. Now, new housing construction could be banned for more than a decade in some neighborhoods of the UK’s biggest city because the electricity grid is reaching capacity, as first reported by the Financial Times. The reason: too many data centers are taking up too much electricity and hogging available fiber optic cables. The Financial Times obtained multiple letters sent from the city’s government, the Greater London Authority (GLA), to developers. “Major new applicants to the distribution network… including housing developments, commercial premises and industrial activities will have to wait several years to receive new electricity connections,” said one note, according to the news outlet.

The GLA also confirmed the grid issue to Gizmodo in an email, and sent along text from one of the letters, which noted that for some areas utilities are saying “electricity connections will not be available for their sites until 2027 to 2030.” The Financial Times reported that at least one letter indicated the necessary electric grid updates in London could take until 2035. […] “Data centres use large quantities of electricity, the equivalent of towns or small cities, to power servers and ensure resilience in service,” one of the GLA letters seen by the Financial Times reportedly said. […] Developers are “still getting their heads round this, but our basic understanding is that developments of 25 units or more will be affected. Our understanding is that you just can’t build them,” said David O’Leary, policy director at the Home Builders Federation, a trade body. Combined, those sections of London contain about 5,000 homes and make up about 11% of the city’s housing supply, according to the Financial Times.

Police Across US Bypass Warrants With Mass Location-Tracking Tool

Fog Reveal Tool Gives Law Enforcement Cheap Access to US Location-Tracking Data From Smartphones

The data broker Fog Data Science has been selling access to what it claims are billions of location data points from over 250 million smartphones to local, state, and federal law enforcement agencies around the US. The data comes from thousands of iOS and Android apps and is aggregated in the Fog Reveal tool. Crucially, access to the service is cheap, often costing local police departments less than $10,000 per year, and investigations by the Associated Press and Electronic Frontier Foundation found that law enforcement sometimes pulls location data without a warrant. The EFF conducted its investigation through more than 100 public records requests filed over several months. “Troublingly, those records show that Fog and some law enforcement did not believe Fog’s surveillance implicated people’s Fourth Amendment rights and required authorities to get a warrant,” the EFF wrote.

Inside the biggest human surveillance experiment on the planet

It was in this techno-authoritarian wave that a facial recognition mania costing tens of billions of dollars began. Government policies with sci-fi names like SkyNet and Sharp Eyes laid out ambitious plans to blanket the country with cameras linked to police stations that shared data across the country. The vision was clear: just like on the internet, anonymity could be erased in real life. With accurate facial recognition, police could identify, categorise and follow a single person among 1.4 billion Chinese citizens.