Police Using Google Images + Facial Recognition

“The New York Police Department used a photo of Woody Harrelson in its facial recognition program in an attempt to identify a beer thief who looked like the actor,” reports the Associated Press:

Georgetown University’s Center on Privacy and Technology highlighted the April 2017 episode in “Garbage In, Garbage Out,” a report on what it says are flawed practices in law enforcement’s use of facial recognition. The report says security footage of the thief was too pixelated and produced no matches while high-quality images of Harrelson, a three-time Oscar nominee, returned several possible matches and led to one arrest.

The NYPD also used a photo of a New York Knicks player to search its database for a man wanted for a Brooklyn assault, the report said.

“The stakes are too high in criminal investigations to rely on unreliable — or wrong — inputs,” Georgetown researcher Clare Garvie wrote…. The Georgetown report says facial recognition has helped the NYPD crack about 2,900 cases in more than five years of using the technology.

And in Florida, Vice reports, law enforcement agencies “run roughly 8,000 of these searches per month.”

The Feds Are Dropping Child Porn Cases Instead of Revealing Their Surveillance Systems

The Department of Justice has been dismissing child pornography cases rather than reveal information about the software programs used as the basis for the charges. An array of cases suggests serious problems with the tech tools used by federal authorities. But the private entities that developed these tools won’t submit them for independent inspection or hand over information about how they work, their error rates, or other critical details. As a result, potentially innocent people are being smeared as pedophiles and prosecuted as child porn collectors, while potentially guilty people are going free so these companies can protect “trade secrets.” The situation illustrates some of the many problems that can arise from public-private partnerships in catching criminals and the secretive digital surveillance software those partnerships entail (software that’s being employed for far more than catching child predators).

With the child pornography cases, “the defendants are hardly the most sympathetic,” notes Tim Cushing at Techdirt. Yet that’s all the more reason why the government’s antics here are disturbing. Either the feds initially brought bad cases against people whom they just didn’t think would fight back, or they’re willing to let bad behavior go rather than face some public scrutiny. An extensive investigation by ProPublica “found more than a dozen cases since 2011 that were dismissed either because of challenges to the software’s findings, or the refusal by the government or the maker to share the computer programs with defense attorneys, or both,” writes Jack Gillum. Many more cases raised issues with the software as a defense. “Defense attorneys have long complained that the government’s secrecy claims may hamstring suspects seeking to prove that the software wrongly identified them,” notes Gillum. “But the growing success of their counterattack is also raising concerns that, by questioning the software used by investigators, some who trade in child pornography can avoid punishment.”

Microsoft Turned Down Facial-Recognition Sales over “Human Rights Concerns”

Microsoft recently rejected a California law enforcement agency’s request to install facial recognition technology in officers’ cars and body cameras due to human rights concerns, company President Brad Smith said on Tuesday. Microsoft concluded the deployment would lead to innocent women and minorities being disproportionately held for questioning, because the artificial intelligence had been trained mostly on images of white men. Multiple research projects have found that AI misidentifies women and minorities more often.

Smith explained the decisions as part of a commitment to human rights that he said was increasingly critical as rapid technological advances empower governments to conduct blanket surveillance, deploy autonomous weapons and take other steps that might prove impossible to reverse. Smith also said at a Stanford University conference that Microsoft had declined a deal to install facial recognition on cameras blanketing the capital city of an unnamed country that the nonprofit Freedom House had deemed not free. Smith said it would have suppressed freedom of assembly there.

On the other hand, Microsoft did agree to provide the technology to an American prison, after the company concluded that the environment would be limited and that it would improve safety inside the unnamed institution.

FamilyTreeDNA Deputizes Itself, Starts Pitching DNA Matching Services To Law Enforcement

One DNA-matching company has decided it’s going to corner an under-served market: US law enforcement. FamilyTreeDNA — last seen here opening up its database to the FBI without informing its users first — is actively pitching its services to law enforcement.

FamilyTreeDNA sounds like it’s finally going to seek consent from its customers, but only after having abused their trust once and under the assumption they’re all going to play ball. While some DNA companies like 23andMe are insisting on at least a subpoena before handing over access to DNA database search results, other companies are staying quiet about law enforcement access or specifically targeting law enforcement agencies with ads promising to help them work through their cold case files.

Consent is great, but it’s never going to be complete consent, no matter how FamilyTreeDNA shapes the argument. As Elizabeth Joh points out at Slate, there’s a whole lot of people involved who will never be asked for their consent once a customer agrees to allow DNA-matching sites to hand over their samples to law enforcement.

[W]hen you volunteer your DNA sample, you’re volunteering your genetic family tree, without having asked your parents, siblings, cousins, and distant cousins if they agree. That upends the usual way we think about providing information to law enforcement. You can’t give the police lawful consent to search your third cousin’s house, even if your third cousin (who you may never have met) is suspected of having been involved in a serious crime. Why are we allowing a distant relative to grant police permission to your DNA?

There’s no informed consent happening here. Customers are being treated as data points law enforcement can peruse at its leisure. A customer who agrees to be a good citizen (by clicking OK on a submission box on a private company’s website) may later learn that their sample was used to lock up a close relative. Some people will be fine with this outcome. Others may regret being the critical piece of evidence used to incarcerate one of their relatives.

Whatever the case, very few companies are being upfront about the effects of opening up database access to law enforcement. FamilyTreeDNA is using a crime victim’s parent and the founder’s Team Blue sympathies to hustle its customers toward compliance. Users who don’t like this turn of events will likely find it far more difficult to remove their DNA from FamilyTreeDNA’s database than to simply hold their nose and become a willing part of this partnership.

Facebook Should Notify Users Who Interact With Fake Police ‘Sock Puppet’ Accounts

Despite Facebook’s repeated warnings that law enforcement is required to use “authentic identities” on the social media platform, cops continue to create fake and impersonator accounts to secretly spy on users. By pretending to be someone else, cops are able to sneak past the privacy walls users put up and bypass legal requirements that might require a warrant to obtain that same information.

EFF is now calling on Facebook to escalate the matter with law enforcement in the United States. In addition to suspending the fake accounts, Facebook should take further steps to address the proliferation of fake/impersonator accounts operated by law enforcement. As part of its regular transparency reports, Facebook should publish data on the number of fake/impersonator law enforcement accounts identified, which agencies they belonged to, and what action was taken. And when a fake/impersonator account is identified, Facebook should alert the users and groups that interacted with it, whether directly or indirectly.

The article also suggests updating Facebook’s Terms of Service to explicitly prohibit fake/impersonator profiles by law enforcement groups, and updating Facebook pages of law enforcement groups to inform visitors when those groups have a written policy allowing fake/impersonator law enforcement accounts. “These four changes are relatively light lifts that would enhance transparency and establish real consequences for agencies that deliberately violate the rules…”

“Facebook’s practice of taking down these individual accounts when they learn about them from the press (or from EFF) is insufficient to deter what we believe is a much larger iceberg beneath the surface.”

Police Are Using Google’s Location Data From ‘Hundreds of Millions’ of Phones

Police have used information from the search giant’s Sensorvault database to aid in criminal cases across the country, according to a report Saturday by The New York Times. The database has detailed location records from hundreds of millions of phones around the world, the report said. It’s meant to collect information on the users of Google’s products so the company can better target them with ads, and see how effective those ads are. But police have been tapping into the database to help find missing pieces in investigations.

Law enforcement can get “geofence” warrants seeking location data. Those kinds of requests have spiked in the last six months, and the company has received as many as 180 requests in one week, according to the report…. For geofence warrants, police carve out a specific area and time period, and Google can gather information from Sensorvault about the devices that were present during that window, according to the report. The information is anonymous, but police can analyze it and narrow it down to a few devices they think might be relevant to the investigation. Then Google reveals those users’ names and other data, according to the Times…
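The Times’ description amounts to a two-step query: first an anonymized spatial-and-temporal filter, then de-anonymization of a narrowed shortlist. A minimal sketch of the first step — record and field names here are invented for illustration, since Sensorvault’s actual schema is not public:

```python
from dataclasses import dataclass

@dataclass
class LocationRecord:
    device_id: str   # pseudonymous ID; names come only in the second step
    lat: float
    lon: float
    timestamp: int   # Unix epoch seconds

def geofence_query(records, lat_min, lat_max, lon_min, lon_max, t_start, t_end):
    """Return the pseudonymous device IDs seen inside the bounding box
    during the time window -- the anonymized shortlist police receive."""
    return sorted({
        r.device_id
        for r in records
        if lat_min <= r.lat <= lat_max
        and lon_min <= r.lon <= lon_max
        and t_start <= r.timestamp <= t_end
    })
```

Per the report, police then whittle that shortlist down to a few devices and only afterward ask Google to attach names and account data to the remaining IDs.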

[T]he AP reported last year that Google tracked people’s location even after they’d turned off location-sharing on their phones.

Google’s data dates back “nearly a decade,” the Times reports — though in a statement, Google’s director of law enforcement and information security insisted “We vigorously protect the privacy of our users while supporting the important work of law enforcement.” (The Times also interviewed a man who was arrested and jailed for a week last year based partly on Google’s data — before eventually being released after the police found a more likely suspect.)

Can Police control your self-driving car?

In 2009, GM equipped 17,000 of its vehicles with “remote ignition block,” a kill switch that can shut off the engine if the car is stolen. But that was just the beginning.

Imagine this: You’re leaving work, walking to your car, and you find an empty parking spot — someone stole your brand new Tesla (or whatever fancy autonomous car you’re driving). When you call the police, they ask your permission for a “takeover,” which you promptly give them. Next thing you know, your car is driving itself to the nearest police station. And here’s the kicker — if the thief is inside, he remains locked in until police can arrest him.

This futuristic, almost slapstick scenario is closer than we think, says Hans Schönfeld, chief innovation officer for the Dutch police. His team has already run several experiments to test the crime-halting possibilities of autonomous cars. “We wanted to know if we can make them stop or drive them to certain locations,” Schönfeld tells me. “And the result is: yes, we probably can.”

The Dutch police tested Tesla, Audi, Mercedes, and Toyota vehicles, he reports, adding “We do this in collaboration with these car companies because this information is valuable to them, too.

“If we can hack into their cars, others can as well.”

French Officer Caught Selling Access To State Surveillance Systems

“A French police officer was arrested and charged last week for selling confidential data on the dark web in exchange for Bitcoin,” reports ZDNet. French authorities caught him after they took down the “Black Hand” dark web marketplace. Sifting through the marketplace data, they found French police documents sold on the site. All the documents had unique identifiers, which investigators used to track down the officer, who was selling the data under the name Haurus.

Investigators also found that, besides selling access to official documents, Haurus ran a service to track the location of mobile devices based on a supplied phone number, advertising it as a way to track spouses or members of competing criminal gangs. Investigators believe he used French police resources designed for tracking criminals to power this service. He also advertised a service that told buyers whether they were being tracked by French police and what information officers had on them.

Amazon worker demands company stop selling facial recognition tech to law enforcement

An Amazon employee is seeking to put new pressure on the company to stop selling its facial recognition technology to law enforcement. An anonymous worker, whose employment at Amazon was verified by Medium, published an op-ed on that platform on Tuesday criticizing the company’s facial recognition work and urging the company to respond to an open letter delivered by a group of employees. The employee wrote that the government has used surveillance tools in a way that disproportionately hurts “communities of color, immigrants, and people exercising their First Amendment rights.”

“Ignoring these urgent concerns while deploying powerful technologies to government and law enforcement agencies is dangerous and irresponsible,” the person wrote. “That’s why we were disappointed when Teresa Carlson, vice president of the worldwide public sector of Amazon Web Services, recently said that Amazon ‘unwaveringly supports’ law enforcement, defense, and intelligence customers, even if we don’t ‘know everything they’re actually utilizing the tool for.'” The op-ed comes one day after Amazon CEO Jeff Bezos defended technology companies working with the federal government on matters of defense during Wired’s ongoing summit in San Francisco. “If big tech companies are going to turn their back on the U.S. Department of Defense, this country is going to be in trouble,” Bezos said on Monday.

Facebook Is Teeming With Fake Accounts Created By Undercover Cops

In the summer of 2015, as Memphis exploded with protests over the police killing of a 19-year-old man, activists began hearing on Facebook from someone called Bob Smith. The name was generic, and so was his profile picture: a Guy Fawkes mask, the symbol of anti-government dissent. Smith acted as if he supported the protesters, and, slowly, they let him into their online community. Over the next three years, dozens of them accepted his friend requests, allowing him to observe private discussions over marches, rallies and demonstrations.

But Smith was not real. He was the creation of a white detective in the Memphis Police Department’s Office of Homeland Security whose job was to keep tabs on local activists across the spectrum, from Black Lives Matter to Confederate sympathizers.

The detective, Tim Reynolds, outed himself in August under questioning by the American Civil Liberties Union of Tennessee, which sued the police department for allegedly violating a 1978 agreement that prohibited police from conducting surveillance of lawful protests. The revelation validated many activists’ distrust of local authorities. It also provided a rare look into the ways American law enforcement operates online, taking advantage of a loosely regulated social media landscape — and citizens’ casual relinquishing of their privacy — to expand monitoring of the public.

The proliferation of fake Facebook accounts and other means of social media monitoring ─ including the use of software to crunch data about people’s online activity ─ illustrates a policing “revolution” that has allowed authorities to not only track people but also map out their networks, said Rachel Levinson-Waldman, senior counsel at New York University School of Law’s Brennan Center for Justice.

She is among many scholars who worry that expanded social media surveillance could make people less likely to engage in online activities protected by the First Amendment, from sharing their opinions to organizing protests of the government. But there are few laws governing this kind of monitoring. Few courts have taken up the issue. And most police departments don’t have policies on how officers can use social media for investigations, according to Levinson-Waldman’s research.

“It’s pretty open territory,” she said.

Police Bodycams Can Be Hacked To Doctor Footage, Install Malware

Josh Mitchell’s Defcon presentation analyzes the security of five popular brands of police bodycams (Vievu, Patrol Eyes, Fire Cam, Digital Ally, and CeeSc) and reveals that they are universally terrible. All the devices use predictable network addresses that can be used to remotely sense and identify the cameras when they switch on. None of the devices use code-signing. Some of the devices can form ad-hoc Wi-Fi networks to bridge in other devices, but they don’t authenticate these sign-ons, so you can just connect with a laptop and start raiding the network for accessible filesystems and gank or alter videos, or just drop malware on them.
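Mitchell’s “predictable network addresses” finding means the cameras can be spotted passively: the first three bytes of a Wi-Fi MAC address (the OUI) identify the manufacturer, so a scanner only needs to match prefixes against a lookup table. A sketch of that matching logic — the OUI values and model names below are made up for illustration, not Mitchell’s actual findings:

```python
# Hypothetical OUI prefixes; real per-vendor values would come
# from the IEEE OUI registry, not from this example.
CAMERA_OUIS = {
    "00:25:df": "ExampleCam A",
    "ac:de:48": "ExampleCam B",
}

def identify_camera(mac: str):
    """Return the suspected camera model for a sniffed MAC address,
    or None if the OUI prefix is not in the table."""
    oui = mac.lower()[:8]   # first three octets, e.g. "00:25:df"
    return CAMERA_OUIS.get(oui)
```

This is why predictable addressing matters: anyone with a laptop can tell, from across the street, that a bodycam has just switched on.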

UK Police Plan To Deploy ‘Staggeringly Inaccurate’ Facial Recognition in London

Millions of people face the prospect of being scanned by police facial recognition technology that has sparked human rights concerns. The controversial software, which officers use to identify suspects, has been found to be “staggeringly inaccurate”, while campaigners have branded its use a violation of privacy. But Britain’s largest police force is set to expand a trial across six locations in London over the coming months.

Police leaders claimed that officers make the final decision to act on potential matches with police records, and that images which do not spark an alert are immediately deleted. But last month The Independent revealed the Metropolitan Police’s software was returning “false positives” — images of people who were not on a police database — in 98 percent of alerts… Detective Superintendent Bernie Galopin, the lead on facial recognition for London’s Metropolitan Police, said the operation was targeting wanted suspects to help reduce violent crime and make the area safer. “It allows us to deal with persons that are wanted by police where traditional methods may have failed,” he told The Independent, after statistics showed police were failing to solve 63 per cent of knife crimes committed against under-25s….
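A 98 percent false-positive rate can coexist with a matcher that is “accurate” in the lab because of base rates: when almost everyone scanned is innocent, even a low per-face error rate produces mostly false alerts. A back-of-the-envelope calculation — the crowd size and accuracy figures here are illustrative, not the Met’s:

```python
def alert_precision(crowd, watchlist_hits, tpr, fpr):
    """Fraction of alerts that are real matches, given the crowd size,
    the number of watchlisted people actually present, the matcher's
    true-positive rate, and its false-positive rate."""
    true_alerts = watchlist_hits * tpr
    false_alerts = (crowd - watchlist_hits) * fpr
    return true_alerts / (true_alerts + false_alerts)

# 10,000 faces scanned, 10 wanted people present; the matcher catches
# 90% of them but mis-fires on just 0.5% of everyone else
print(round(alert_precision(10_000, 10, 0.90, 0.005), 3))  # → 0.153
```

Even with a 0.5 percent false-positive rate per face, roughly 85 percent of the alerts in this toy scenario point at innocent people — which is why per-alert figures like the Met’s 98 percent are the number that matters for street deployments.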

Det Supt Galopin said the Met was assessing how effective facial recognition was at tackling different challenges in British policing, which is currently being stretched by budget cuts, falling officer numbers, rising demand and the terror threat.

A policy officer from the National Council for Civil Liberties called the technology “lawless,” adding “the use of this technology in a public place is not compatible with privacy, and has a chilling effect on society.”

But a Home Office minister said the technology was vital for protecting people from terrorism, though “we must ensure that privacy is respected. This strategy makes clear that we will grasp the opportunities that technology brings while remaining committed to strengthening safeguards.”

12 Days In Xinjiang — China’s Surveillance State

Urumqi, China – This city on China’s Central Asia frontier may be one of the most closely surveilled places on earth.

Security checkpoints with identification scanners guard the train station and roads in and out of town. Facial scanners track comings and goings at hotels, shopping malls and banks. Police use hand-held devices to search smartphones for encrypted chat apps, politically charged videos and other suspect content. To fill up with gas, drivers must first swipe their ID cards and stare into a camera.

China’s efforts to snuff out a violent separatist movement by some members of the predominantly Muslim Uighur ethnic group have turned the autonomous region of Xinjiang, of which Urumqi is the capital, into a laboratory for high-tech social controls that civil-liberties activists say the government wants to roll out across the country.

It is nearly impossible to move about the region without feeling the unrelenting gaze of the government. Citizens and visitors alike must run a daily gantlet of police checkpoints, surveillance cameras and machines scanning their ID cards, faces, eyeballs and sometimes entire bodies.

When fruit vendor Parhat Imin swiped his card at a telecommunications office this summer to pay an overdue phone bill, his photo popped up with an “X.” Since then, he says, every scan of his ID card sets off an alarm. He isn’t sure what it signifies, but figures he is on some kind of government watch list because he is a Uighur and has had intermittent run-ins with the police.

He says he is reluctant to travel for fear of being detained. “They blacklisted me,” he says. “I can’t go anywhere.”

All across China, authorities are rolling out new technology to keep watch over people and shape their behavior. Controls on expression have tightened under President Xi Jinping, and the state’s vast security web now includes high-tech equipment to monitor online activity and even snoop in smartphone messaging apps.

China’s government has been on high alert since a surge in deadly terrorist attacks around the country in 2014 that authorities blamed on Xinjiang-based militants inspired by extremist Islamic messages from abroad. Now officials are putting the world’s most state-of-the-art tools in the hands of a ramped-up security force to create a system of social control in Xinjiang—one that falls heaviest on Uighurs.

At a security exposition in October, an executive of Guangzhou-based CloudWalk Technology Co., which has sold facial-recognition algorithms to police and identity-verification systems to gas stations in Xinjiang, called the region the world’s most heavily guarded place. According to the executive, Jiang Jun, for every 100,000 people the police in Xinjiang want to monitor, they use the same amount of surveillance equipment that police in other parts of China would use to monitor millions.

Authorities in Xinjiang declined to respond to questions about surveillance. Top party officials from Xinjiang said at a Communist Party gathering in Beijing in October that “social stability and long-term security” were the local government’s bottom-line goals.

Chinese and foreign civil-liberty activists say the surveillance in this northwestern corner of China offers a preview of what is to come nationwide.

“They constantly take lessons from the high-pressure rule they apply in Xinjiang and implement them in the east,” says Zhu Shengwu, a Chinese human-rights lawyer who has worked on surveillance cases. “What happens in Xinjiang has bearing on the fate of all Chinese people.”

During an October road trip into Xinjiang along a modern highway, two Wall Street Journal reporters encountered a succession of checkpoints that turned the ride into a strange and tense journey.

At Xingxing Gorge, a windswept pass used centuries ago by merchants plying the Silk Road, police inspected incoming traffic and verified travelers’ identities. The Journal reporters were stopped, ordered out of their car and asked to explain the purpose of their visit. Drivers, mostly those who weren’t Han Chinese, were guided through electronic gateways that scanned their ID cards and faces.

Farther along, at the entrance to Hami, a city of a half-million, police had the Journal reporters wait in front of a bank of TV screens showing feeds from nearby surveillance cameras while recording their passport numbers.

Surveillance cameras loomed every few hundred feet along the road into town, blanketed street corners and kept watch on patrons of a small noodle shop near the main mosque. The proprietress, a member of the Muslim Hui minority, said the government ordered all restaurants in the area to install the devices earlier this year “to prevent terrorist attacks.”

Days later, as the Journal reporters were driving on a dirt road in Shanshan county after being ordered by officials to leave a nearby town, a police cruiser materialized seemingly from nowhere. It raced past, then skidded to a diagonal stop, kicking up a cloud of dust and blocking the reporters’ car. An SUV pulled up behind. A half-dozen police ordered the reporters out of the car and demanded their passports.

An officer explained that surveillance cameras had read the out-of-town license plates and sent out an alert. “We check every car that’s not from Xinjiang,” he said. The police then escorted the reporters to the highway.

At checkpoints further west, iris and body scanners are added to the security arsenal.

Darren Byler, an anthropology researcher at the University of Washington who spent two years in Xinjiang studying migration, says the closest contemporary parallel can be found in the West Bank and Gaza Strip, where the Israeli government has created a system of checkpoints and biometric surveillance to keep tabs on Palestinians.

In Erdaoqiao, the neighborhood where the fruit vendor Mr. Imin lives, small booths known as “convenience police stations,” marked by flashing lights atop a pole, appear every couple of hundred yards. The police stationed there offer water, cellphone charging and other services, while also taking in feeds from nearby surveillance cameras.

Young Uighur men are routinely pulled into the stations for phone checks, leading some to keep two devices—one for home use and another, with no sensitive content or apps, for going out, according to Uighur exiles.

Erdaoqiao, the heart of Uighur culture and commerce in Urumqi, is where ethnic riots started in 2009 that resulted in numerous deaths. The front entrance to Erdaoqiao Mosque is now closed, as are most entries to the International Grand Bazaar. Visitors funnel through a heavily guarded main gate. The faces and ID cards of Xinjiang residents are scanned. An array of cameras keeps watch.

After the riots, authorities showed up to shut down the shop Mr. Imin was running at the time, which sold clothing and religious items. When he protested, he says, they clubbed him on the back of the head, which has left him walking with a limp. They jailed him for six months for obstructing official business, he says. Other jail stints followed, including eight months for buying hashish.

The police in Urumqi didn’t respond to requests for comment.

Mr. Imin now sells fruit and freshly squeezed pomegranate juice from a cart. He worries that his flagged ID card will bring the police again. Recently remarried, he hasn’t dared visit his new wife’s family in southern Xinjiang.

Chinese rulers have struggled for two millennia to control Xinjiang, whose 23 million people are scattered over an expanse twice the size of Texas. Beijing sees it as a vital piece of President Xi’s trillion-dollar “Belt and Road” initiative to build infrastructure along the old Silk Road trade routes to Europe.

Last year, Mr. Xi installed a new Xinjiang party chief, Chen Quanguo, who previously handled ethnic strife in Tibet, another hot spot. Mr. Chen pioneered the convenience police stations in that region, partly in response to a string of self-immolations by monks protesting Chinese rule.

Under Mr. Chen, the police presence in Xinjiang has skyrocketed, based on data showing exponential increases in police-recruitment advertising. Local police departments last year began ordering cameras capable of creating three-dimensional face images as well as DNA sequencers and voice-pattern analysis systems, according to government procurement documents uncovered by Human Rights Watch and reviewed by the Journal.

During the first quarter of 2017, the government announced the equivalent of more than $1 billion in security-related investment projects in Xinjiang, up from $27 million in all of 2015, according to research in April by Chinese brokerage firm Industrial Securities.

Government procurement orders show millions spent on “unified combat platforms”—computer systems to analyze surveillance data from police and other government agencies.

Tahir Hamut, a Uighur poet and filmmaker, says Uighurs who had passports were called in to local police stations in May. He worried he would draw extra scrutiny for having been accused of carrying sensitive documents, including newspaper articles about Uighur separatist attacks, while trying to travel to Turkey to study in the mid-1990s. The aborted trip landed him in a labor camp for three years, he says.

He and his wife lined up at a police station with other Uighurs to have their fingerprints and blood samples taken. He says he was asked to read a newspaper for two minutes while police recorded his voice, and to turn his head slowly in front of a camera.

Later, his family’s passports were confiscated. After a friend was detained by police, he says, he assumed he also would be taken away. He says he paid officials a bribe of more than $9,000 to get the passports back, making up a story that his daughter had epilepsy requiring treatment in the U.S. Xinjiang’s Public Security Bureau, which is in charge of the region’s police forces, didn’t respond to a request for comment about the bribery.

“The day we left, I was filled with anxiety,” he says. “I worried what would happen if we were stopped going through security at the Urumqi airport, or going through border control in Beijing.”

He and his family made it to Virginia, where they have applied for political asylum.

Chinese authorities use forms to collect personal information from Uighurs. One form reviewed by the Journal asks about respondents’ prayer habits and whether they have contacts abroad. There are sections for officials to rate “persons of interest” on a six-point scale and check boxes indicating whether they are “safe,” “average” or “unsafe.”

China Communications Services Co. Ltd., a subsidiary of state telecom giant China Telecom, has signed contracts this year worth more than $38 million to provide mosque surveillance and install surveillance-data platforms in Xinjiang, according to government procurement documents. The company declined to discuss the contracts, saying they constituted sensitive business information.

Xiamen Meiya Pico Information Co. Ltd. worked with police in Urumqi to adapt a hand-held device it sells for investigating economic crimes so it can scan smartphones for terrorism-related content.

A description of the device that recently was removed from the company’s website said it can read the files on 90% of smartphones and check findings against a police antiterror database. “Mostly, you’re looking for audio and video,” said Zhang Xuefeng, Meiya Pico’s chief marketing officer, in an interview.
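Tools of this kind typically work by hashing each file on the phone and comparing the hashes against a database of known material, rather than inspecting content directly. A minimal sketch of that hash-set check — this is the common forensic pattern, not a description of how Meiya Pico’s device actually works, which hasn’t been published:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex digest of a file's contents."""
    return hashlib.sha256(data).hexdigest()

def scan_files(files, known_hashes):
    """files: mapping of filename -> raw bytes.
    known_hashes: set of hex digests of flagged material.
    Return the filenames whose contents match the database."""
    return [name for name, data in files.items()
            if sha256_of(data) in known_hashes]
```

The speed the officer describes follows from this design: hashing is fast, and set membership is a constant-time lookup, so a whole phone can be checked in the time it takes to copy its files.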

Near the Xinjiang University campus in Urumqi, police sat at a wooden table recently, ordering some people walking by to hand over their phones.

“You just plug it in and it shows you what’s on the phone,” said one officer, brandishing a device similar to the one on Meiya Pico’s website. He declined to say what content they were checking for.

One recent afternoon in Korla, one of Xinjiang’s largest cities, only a trickle of people passed through the security checkpoint at the local bazaar, where vendors stared at darkened hallways empty of shoppers.

Li Qiang, the Han Chinese owner of a wine shop, said the security checks, while necessary for safety, were getting in the way of commerce. “As soon as you go out, they check your ID,” he said.

Authorities have built a network of detention facilities, officially referred to as education centers, across Xinjiang. In April, the official Xinjiang Daily newspaper said more than 2,000 people had been sent to a “study and training center” in the southern city of Hotan.

One new compound sits a half-hour drive south of Kashgar, a Uighur-dominated city near the border with Kyrgyzstan. It is surrounded by imposing walls topped with razor wire, with watchtowers at two corners. A slogan painted on the wall reads: “All ethnic groups should be like the pods of a pomegranate, tightly wrapped together.”

Villagers describe it as a detention center. A man standing near the entrance one recent night said it was a school and advised reporters to leave.

Mr. Hamut, the poet, says a relative in Kashgar was taken to a detention center after she participated in an Islamic ceremony, and another went missing soon after the family tried to call him from the U.S.

The local government in Kashgar didn’t respond to a request for comment.

Surveillance in and around Kashgar, where Han Chinese make up less than 7% of the population, is even tighter than in Urumqi. Drivers entering the city are screened intensively. A machine scans each driver’s face. Police officers inspect the engine and the trunk. Passengers must get out and run their bags through X-ray machines.

In Aksu, a dusty city a five-hour drive east of Kashgar, knife salesman Jiang Qiankun says his shop had to pay thousands of dollars for a machine that turns a customer’s ID card number, photo, ethnicity and address into a QR code that it lasers into the blade of any knife it sells. “If someone has a knife, it has to have their ID card information,” he says.
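Packing a purchaser's ID-card fields into a string that a QR encoder can laser onto a blade is straightforward to sketch. The field names, the record layout, and the sample values below are invented for illustration; the actual format used by these machines is not public.

```python
import base64
import json

def knife_qr_payload(id_number: str, name: str, ethnicity: str, address: str) -> str:
    """Pack the purchaser's ID-card fields into a compact, QR-encodable string.

    Field names are hypothetical; the real format is undisclosed.
    """
    record = {"id": id_number, "name": name, "eth": ethnicity, "addr": address}
    raw = json.dumps(record, ensure_ascii=False, separators=(",", ":")).encode("utf-8")
    # Base64 keeps the payload ASCII-safe for a QR byte-mode encoder.
    return base64.b64encode(raw).decode("ascii")

def decode_payload(payload: str) -> dict:
    """Reverse the packing, as a checkpoint scanner would."""
    return json.loads(base64.b64decode(payload).decode("utf-8"))
```

The point of the design is traceability: any knife recovered later can be scanned and resolved back to a named, photographed individual without consulting a database at all.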

On the last day the Journal reporters were in Xinjiang, an unmarked car trailed them on a 5 a.m. drive to the Urumqi airport. During their China Southern Airlines flight to Beijing, a flight attendant appeared to train a police-style body camera attached to his belt on the reporters. Later, as passengers were disembarking, the attendant denied filming them, saying it was common for airline crew to wear the cameras as a security measure.

China Southern says the crew member was an air marshal, charged with safety on board.

The rise of big data policing

An excerpt from the book The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement (2017):

“Data-driven policing means aggressive police presence, surveillance, and perceived harassment in those communities. Each data point translates to real human experience, and many times those experiences remain fraught with all-too-human bias, fear, distrust, and racial tension. For those communities, especially poor communities of color, these data-collection efforts cast a dark shadow on the future.”

Surveillance tools for “War on Terror” used on indigenous activists

“A shadowy international mercenary and security firm known as TigerSwan targeted the movement opposed to the Dakota Access Pipeline with military-style counterterrorism measures, collaborating closely with police in at least five states, according to internal documents obtained by The Intercept. The documents provide the first detailed picture of how TigerSwan, which originated as a U.S. military and State Department contractor helping to execute the global war on terror, worked at the behest of its client Energy Transfer Partners, the company building the Dakota Access Pipeline, to respond to the indigenous-led movement that sought to stop the project.

TigerSwan spearheaded a multifaceted private security operation characterized by sweeping and invasive surveillance of protesters.

Activists on the ground were tracked by a Dakota Access helicopter that provided live video coverage to their observers in police agencies, according to an October 12 email thread that included officers from the FBI, DHS, BIA, state, and local police. In one email, National Security Intelligence Specialist Terry Van Horn of the U.S. attorney’s office acknowledged his direct access to the helicopter video feed, which was tracking protesters’ movements during a demonstration. “Watching a live feed from DAPL Helicopter, pending arrival at site(s),” he wrote. Cecily Fong, a spokesperson for law enforcement throughout the protests, acknowledged that an operations center in Bismarck had access to the feed, stating in an email to The Intercept that “the video was provided as a courtesy so we had eyes on the situation.”

Robot police “officer” goes on duty in Dubai

Dubai Police have revealed their first robot officer, giving it the task of patrolling the city’s malls and tourist attractions.

People will be able to use it to report crimes, pay fines and get information by tapping a touchscreen on its chest.

Data collected by the robot will also be shared with the transport and traffic authorities.

US: Innocent people placed on watch-list to meet quota

“You could be on a secret government database or watch list for simply taking a picture on an airplane. Some federal air marshals say they’re reporting your actions to meet a quota, even though some top officials deny it.

The air marshals, whose identities are being concealed, told 7NEWS that they’re required to submit at least one report a month. If they don’t, there’s no raise, no bonus, no awards and no special assignments.

“Innocent passengers are being entered into an international intelligence database as suspicious persons, acting in a suspicious manner on an aircraft … and they did nothing wrong,” said one federal air marshal.”

Police request Echo recordings for investigation

“You have the right to remain silent — but your smart devices might not.

Amazon’s Echo and Echo Dot are in millions of homes now, with holiday sales more than quadrupling from 2015. Always listening for its wake word, the breakthrough smart speakers boast seven microphones waiting to take and record your commands.

Now, Arkansas police are hoping an Echo found at a murder scene in Bentonville can aid their investigation.

First reported by The Information, investigators filed search warrants to Amazon, requesting any recordings between November 21 and November 22, 2015, from James A. Bates, who was charged with murder after a man was strangled in a hot tub.

While investigating, police noticed the Echo in the kitchen and pointed out that the music playing in the home could have been voice activated through the device. While the Echo records only after hearing the wake word, police are hoping that ambient noise or background chatter could have accidentally triggered the device, leading to some more clues.

Amazon stores all the voice recordings on its servers, in the hopes of using the data to improve its voice assistant services. While you can delete your personal voice data, there’s still no way to prevent any recordings from being saved on a server.

[…]

Even without Amazon’s help, police may be able to crack into the Echo, according to the warrant. Officers believe they can tap into the hardware on the smart speakers, which could “potentially include time stamps, audio files or other data.”

The investigation has focused on other smart devices as well. Officers seized Bates’ phone but were unable to break through his password, which only served to delay the investigation.

“Our agency now has the ability to utilize data extraction methods that negate the need for passcodes and efforts to search Victor and Bates’ devices will continue upon issuance of this warrant.”

Police also found a Nest thermostat, a Honeywell alarm system, wireless weather monitoring in the backyard and WeMo devices for lighting at the smart home crime scene.

Ultimately, it might have been information from a smart meter that proved to be the most useful. With every home in Bentonville hooked up to a smart meter that measures hourly electricity and water usage, police looked at the data and noticed Bates used an “excessive amount of water” during the alleged drowning.”
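What makes hourly smart-meter data useful to an investigator is that a one-off spike stands out sharply against a household's baseline. A minimal sketch of that idea, using a z-score outlier test (the function name, threshold, and sample figures are mine; a utility or forensic analyst would use a more elaborate model):

```python
from statistics import mean, stdev

def flag_excessive_usage(hourly_gallons: list[float], z_threshold: float = 3.0) -> list[int]:
    """Return indices of hours whose usage is an extreme outlier.

    Computes a z-score for each hourly reading against the whole series
    and flags readings more than z_threshold standard deviations above the mean.
    """
    mu = mean(hourly_gallons)
    sigma = stdev(hourly_gallons)
    if sigma == 0:
        return []  # perfectly flat usage: nothing to flag
    return [i for i, g in enumerate(hourly_gallons)
            if (g - mu) / sigma > z_threshold]
```

On a day of readings hovering around 10 gallons per hour, a single 140-gallon hour (say, a hot tub being filled) would be the only index flagged.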

Leaked files reveal scope of Cellebrite’s phone cracking technology

“Earlier this year, [ZDNet was] sent a series of large, encrypted files purportedly belonging to a U.S. police department as a result of a leak at a law firm, which was insecurely synchronizing its backup systems across the internet without a password. Among the files was a series of phone dumps created by the police department with specialist equipment made by Cellebrite, an Israeli firm that provides phone-cracking technology. We obtained a number of these so-called extraction reports. One of the more interesting reports by far was from an iPhone 5 running iOS 8. The phone’s owner didn’t use a passcode, meaning the phone was entirely unencrypted. The phone was plugged into a Cellebrite UFED device, which in this case was a dedicated computer in the police department. The police officer carried out a logical extraction, which downloads what’s in the phone’s memory at the time. (Motherboard has more on how Cellebrite’s extraction process works.) In some cases, it also contained data the user had recently deleted. To our knowledge, there are a few sample reports floating around on the web, but it’s rare to see a real-world example of how much data can be siphoned off from a fairly modern device. We’re publishing some snippets from the report, with sensitive or identifiable information redacted.”

Chemical traces on your phone reveal your lifestyle, say forensic scientists

“Scientists say they can deduce the lifestyle of an individual, down to the kind of grooming products they use, food they eat and medications they take, from chemicals found on the surface of their mobile phone. Experts say analysis of someone’s phone could be a boon both to healthcare professionals, and the police.

“You can narrow down male versus female; if you then figure out they use sunscreen then you pick out the [people] that tend to be outdoorsy — so all these little clues can sort of narrow down the search space of candidate people for an investigator,” said Pieter Dorrestein, co-author of the research from the University of California, San Diego.

Writing in the Proceedings of the National Academy of Sciences, researchers from the U.S. and Germany describe how they swabbed the mobile phone and right hand of 39 individuals and analyzed the samples using the highly sensitive technique of mass spectrometry.

The results revealed that each person had a distinct “signature” set of chemicals on their hands which distinguished them from each other. What’s more, these chemicals partially overlapped with those on their phones, allowing the devices to be distinguished from each other, and matched to their owners.
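The matching step described above can be pictured as a set-overlap problem: attribute a phone to whichever candidate's hand signature it shares the most chemicals with. The sketch below is a toy analogue only; the researchers worked with mass-spectrometry intensity profiles, not simple presence/absence sets, and all names and chemicals here are invented.

```python
def jaccard(a: set[str], b: set[str]) -> float:
    """Overlap between two chemical signatures (0 = disjoint, 1 = identical)."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def match_phone_to_owner(phone_chems: set[str], candidates: dict[str, set[str]]) -> str:
    """Attribute a phone to the candidate whose hand-swab signature overlaps it most."""
    return max(candidates, key=lambda person: jaccard(phone_chems, candidates[person]))
```

With a phone carrying traces of caffeine, a sunscreen ingredient, and an antidepressant, the candidate sharing three of those chemicals beats one sharing only caffeine, which is the "narrowing the search space" effect Dorrestein describes.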

Analysis of the chemical traces using a reference database allowed the team to match the chemicals to known substances or their relatives to reveal tell-tale clues from each individual’s life — from whether they use hair-loss treatments to whether they are taking antidepressants.