Resources

NHS Gives Amazon Free Use of Health Data Under Alexa Advice Deal

Amazon has been given free access to healthcare information collected by the NHS as part of a contract with the government. The material, which excludes patient data, could allow the multinational technology company to make, advertise and sell its own products.

In July the health secretary, Matt Hancock, said a partnership with the NHS that allowed Amazon Alexa devices to offer expert health advice to users would reduce pressure on “our hard-working GPs and pharmacists.” But responses to freedom of information requests, published by the Sunday Times, showed that the contract also allows the company access to information on symptoms, causes and definitions of conditions, and “all related copyrightable content and data and other materials.” Amazon, which is worth $863bn and is run by the world’s richest person, Jeff Bezos, can then create “new products, applications, cloud-based services and/or distributed software,” from which the NHS will receive no financial benefit, and can share the information with third parties. Labour’s shadow health secretary, Jonathan Ashworth, told the Sunday Times that the government was “highly irresponsible” and “in the pocket of big corporate interests.”

Turkey is Getting Military Drones Armed With Machine Guns

A drone with a machine gun attached can hit targets with high precision, according to its makers. Turkey is set to become the first country to field such a drone, with the first delivery expected this month. The 25-kilogram drone uses eight rotors to get airborne. Its machine gun carries 200 rounds of ammunition and can fire single shots or 15-round bursts. Many countries and groups already use small military drones that can drop grenades or fly into a target to detonate an explosive. The new drone, called Songar and made by Ankara-based electronics firm Asisguard, is the first drone equipped with a firearm to be ready for service. Turkey expects the drones to be delivered before the end of the year.

It is hard for a drone to shoot accurately, partly because of the difficulty of judging range and angle, and partly because the recoil from each shot significantly moves the drone, affecting the aim for the next round. Songar has two systems to overcome these challenges. One uses sensors, including cameras and a laser rangefinder, to calculate distance, angle and wind speed, and work out where to aim. The second is a set of robot arms that move the machine gun to compensate for the effects of recoil.
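As a rough illustration of the fire-control arithmetic such a system has to perform, here is a minimal sketch under simple flat-fire ballistics assumptions. The function name and all of the numbers are hypothetical; this is not Asisguard’s implementation, which is not public.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def aim_correction(range_m, muzzle_velocity_mps, crosswind_mps):
    """Return (pitch, yaw) corrections in degrees for a flat-fire shot.

    Flat-fire approximation: time of flight ~ range / muzzle velocity,
    bullet drop ~ 0.5 * g * t^2, lateral drift ~ crosswind speed * t.
    """
    t = range_m / muzzle_velocity_mps        # approximate time of flight, s
    drop = 0.5 * G * t ** 2                  # gravity drop over the flight, m
    drift = crosswind_mps * t                # wind drift over the flight, m
    pitch = math.degrees(math.atan2(drop, range_m))   # aim above the target
    yaw = math.degrees(math.atan2(-drift, range_m))   # aim into the wind
    return pitch, yaw

# Example: a 200 m shot at 800 m/s muzzle velocity in a 3 m/s crosswind.
pitch, yaw = aim_correction(200, 800, 3.0)
print(f"aim {pitch:.2f} degrees high, {yaw:+.2f} degrees into the wind")
```

The recoil problem is the harder half: even a perfect firing solution is spoiled if each shot displaces the airframe, which is why Songar pairs its sensor package with robot arms that physically re-point the gun between rounds.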

Emotion Recognition Tech Should Be Banned, Says an AI Research Institute

A leading research centre has called for new laws to restrict the use of emotion-detecting tech. The AI Now Institute says the field is “built on markedly shaky foundations.” Despite this, systems are on sale to help vet job seekers, test criminal suspects for signs of deception, and set insurance prices. The institute wants such software banned from use in important decisions that affect people’s lives and/or determine their access to opportunities. The US-based body has found support in the UK from the founder of a company developing its own emotional-response technologies, who nonetheless cautioned that any restrictions would need to be nuanced enough not to hamper all work being done in the area.

AI Now refers to the technology by its formal name, affect recognition, in its annual report. It says the sector is undergoing a period of significant growth and could already be worth as much as $20 billion. “It claims to read, if you will, our inner emotional states by interpreting the micro-expressions on our face, the tone of our voice or even the way that we walk,” explained co-founder Prof Kate Crawford. “It’s being used everywhere, from how do you hire the perfect employee through to assessing patient pain, through to tracking which students seem to be paying attention in class. At the same time as these technologies are being rolled out, large numbers of studies are showing that there is… no substantial evidence that people have this consistent relationship between the emotion that you are feeling and the way that your face looks.”

YouTube’s Algorithm Made Fake CNN Reports Go Viral

“YouTube channels posing as American news outlets racked up millions of views on false and inflammatory videos over several months this year,” reports CNN.

“All with the help of YouTube’s recommendation engine.”

Many of the accounts, which mostly used footage from CNN, but also employed some video from Fox News, exploited a YouTube feature that automatically creates channels on certain topics. Those topic channels are then automatically populated by videos related to the topic — including, in this case, blatant misinformation.

YouTube has now shut down many of the accounts.

YouTube’s own algorithms also recommended videos from the channels to American users who watched videos about U.S. politics. That the channels could achieve such virality — one channel was viewed more than two million times over one weekend in October — raises questions about YouTube’s preparedness for tackling misinformation on its platform just weeks before the Iowa caucuses and points to the continuing challenge platforms face as people try to game their systems….

Responding to the findings on Thursday, a CNN spokesperson said YouTube needs to take responsibility.

“When accounts were deleted or banned, they were able to spin up new accounts within hours,” added Plasticity, a natural language processing and AI startup which analyzed the data and identified at least 25 different accounts which YouTube then shut down.

“The tactics they used to game the YouTube algorithm were executed perfectly. They knew what they were doing.”

Chinese Newspaper Touts Videogame Where Players ‘Hunt Down Traitors’ in Hong Kong

An online game calling on players to hunt down traitors who seek to separate Hong Kong from China and fuel street violence has reportedly begun to attract players across Chinese mainland social media platforms. The game, “Fight the Traitors Together,” is set against the backdrop of the social unrest that has persisted in Hong Kong. The script asks the player to find eight secessionists hidden in the crowd participating in Hong Kong protests.

Players can knock them down with slaps or rotten eggs until they are captured. Online gamers claim the game allows them to vent their anger at the separatist behavior of secessionists during the recent Hong Kong riots. The eight traitors in the game, caricatures of real people, include Jimmy Lai Chee-ying, Martin Lee Chu-ming and Joshua Wong Chi-fung, prominent opposition figures who have played a major role in inciting unrest in Hong Kong. There are also traitor figures from ancient China…

In the game, amid a crowd of black-clad rioters wearing yellow hats and face masks, Anson Chan Fang On-sang, another leading opposition figure, carries a bag with a U.S. flag, clutches a stack of U.S. dollars and holds a loudspeaker to incite violence in the streets.

Cops Around the World Are Using An Outlandish Mind-Reading Tool

ProPublica reports that dozens of state and local agencies have purchased “SCAN” training from a company called LSI for reviewing a suspect’s written statements — even though there’s no scientific evidence that it works.

Local, state and federal agencies from the Louisville Metro Police Department to the Michigan State Police to the U.S. State Department have paid for SCAN training. The LSI website lists 417 agencies nationwide, from small-town police departments to the military, that have been trained in SCAN — and that list isn’t comprehensive, because additional ones show up in procurement databases and in public records obtained by ProPublica. Other training recipients include law enforcement agencies in Australia, Belgium, Canada, Israel, Mexico, the Netherlands, Singapore, South Africa and the United Kingdom, among others…

For Avinoam Sapir, the creator of SCAN, sifting truth from deception is as simple as one, two, three.

1. Give the subject a pen and paper.
2. Ask the subject to write down his/her version of what happened.
3. Analyze the statement and solve the case.

Those steps appear on the website for Sapir’s company, based in Phoenix. “SCAN Unlocks the Mystery!” the homepage says, alongside a logo of a question mark stamped on someone’s brain. The site includes dozens of testimonials with no names attached. “Since January when I first attended your course, everybody I meet just walks up to me and confesses!” one says. Another testimonial says “The Army finally got its money’s worth…” SCAN saves time, the site says. It saves money. Police can fax a questionnaire to a hundred people at once, the site says. Those hundred people can fax it back “and then, in less than an hour, the investigator will be able to review the questionnaires and solve the case.”

In 2009 the U.S. government created a special interagency task force, drawing on the FBI, CIA and U.S. Department of Defense, to review scientific studies and independently investigate which interrogation techniques worked. “When all 12 SCAN criteria were used in a laboratory study, SCAN did not distinguish truth-tellers from liars above the level of chance,” the review said. It also challenged two of the method’s 12 criteria: “Both gaps in memory and spontaneous corrections have been shown to be indicators of truth, contrary to what is claimed by SCAN.”
In a footnote, the review identified three specific agencies that use SCAN: the FBI, CIA and U.S. Army military intelligence, which falls under the Department of Defense…

In 2016, the same year the federal task force released its review of interrogation techniques, four scholars published a study on SCAN in the journal Frontiers in Psychology. The authors — three from the Netherlands, one from England — noted that there had been only four prior studies in peer-reviewed journals on SCAN’s effectiveness. Each of those studies (in 1996, 2012, 2014 and 2015) concluded that SCAN failed to help discriminate between truthful and fabricated statements. The 2016 study found the same. Raters trained in SCAN evaluated 234 statements — 117 true, 117 false. Their results in trying to separate fact from fiction were about the same as chance….
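To put “about the same as chance” in perspective: with 234 true-or-false judgments, random guessing lands close to 50% far more often than intuition suggests. The interval below is a standard binomial approximation, computed here for illustration rather than taken from the study.

```python
import math

# What accuracy range is indistinguishable from guessing on 234 statements?
n = 234                                 # statements rated (117 true, 117 false)
p = 0.5                                 # expected accuracy for a random guesser
se = math.sqrt(p * (1 - p) / n)         # standard error of the proportion
lo, hi = p - 1.96 * se, p + 1.96 * se   # ~95% interval under pure guessing
print(f"95% of random guessers score between {lo:.1%} and {hi:.1%}")
# Roughly 43.6% to 56.4%: SCAN-trained raters scoring inside that band show
# no measurable skill at separating truthful from fabricated statements.
```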

Steven Drizin, a Northwestern University law professor who specializes in wrongful convictions, said SCAN and assorted other lie-detection tools suffer from “over-claim syndrome” — big claims made without scientific grounding. Asked why police would trust such tools, Drizin said: “A lot has to do with hubris — a belief on the part of police officers that they can tell when someone is lying to them with a high degree of accuracy. These tools play into that belief and confirm that belief.”

SCAN’s creator “declined to be interviewed for this story,” ProPublica notes, but the outlet spoke to some users of the technique. Travis Marsh, the head of an Indiana sheriff’s department, has been using the tool for nearly two decades, while acknowledging that he can’t explain how it works. “It really is, for lack of a better term, a faith-based system, because you can’t see behind the curtain.”

ProPublica also reports: “Years ago his wife left a note saying she and the kids were off doing one thing, whereas Marsh, analyzing her writing, could tell they had actually gone shopping. His wife has not left him another note in at least 15 years…”

Scientists Propose Destroying Mountains To Build a New Type of Battery For Long-Term Energy Storage

“One of the big challenges of making 100 percent renewable energy [sic] a reality is long-term storage,” says Julian Hunt, an engineering scientist at the International Institute for Applied Systems Analysis in Austria. Hunt and his collaborators have devised a novel system to complement lithium-ion battery use for energy storage over the long run: Mountain Gravity Energy Storage, or MGES for short. Similar to hydroelectric power, MGES involves storing material at elevation to produce gravitational energy. The energy is recovered when the stored material falls and turns turbines to generate electricity. The group describes its system in a paper published November 6 in Energy.

“Instead of building a dam, we propose building a big sand or gravel reservoir,” explains Hunt. The key to MGES lies in finding two mountaintop sites that have a suitable difference in elevation — 1,000 meters is ideal. “The greater the height difference, the cheaper the technology,” he says. The sites will look similar, with each comprised of a mine-like station to store the sand or gravel, and a filling station directly below it. Valves release the material into waiting vessels, which are then transported via cranes and motor-run cables to the upper site. There, the sand or gravel is stored — for weeks, months, or even years — until it’s ready to be used. When the material is moved back down the mountain, that stored gravitational energy is released and converted into electrical energy.
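The underlying physics is plain gravitational potential energy, E = mgh, which makes the scale easy to sanity-check. The sketch below uses illustrative figures (including an assumed 80% round-trip efficiency), not numbers from Hunt’s paper.

```python
# Sanity check for gravity storage: recoverable energy E = m * g * h.
# All figures are illustrative; the 80% efficiency is an assumption,
# not a number from Hunt's paper.

G = 9.81                 # gravitational acceleration, m/s^2
JOULES_PER_MWH = 3.6e9   # 1 MWh expressed in joules

def stored_mwh(mass_tonnes, height_m, efficiency=0.8):
    """Recoverable energy for sand lifted through height_m, minus losses."""
    joules = mass_tonnes * 1000 * G * height_m
    return efficiency * joules / JOULES_PER_MWH

# At the ideal 1,000 m drop, one MWh takes on the order of 460 tonnes of sand:
print(f"{stored_mwh(460, 1000):.2f} MWh")  # ~1.00 MWh
```

The arithmetic also shows why “the greater the height difference, the cheaper the technology”: halving the drop doubles the mass that must be hauled, stored and released for the same energy.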

Not only is the system more environmentally friendly [sic] than pumped-storage hydropower and dams, but it’s more flexible to meet varying energy demands.

“Hunt estimates that the annual cost of storing energy via this system will vary between $50 to $100 per megawatt hour (MWh),” the report adds. “And he says that the energy expended to transport materials to the upper site will be offset by the amount of gravitational energy the system produces.”

Deepfake Porn Is Total Control Over Women’s Bodies

A lineup of female celebrities stands in front of you. Their faces move, smile, and blink as you move around them. They’re fully nude, hairless, waiting for you to decide what you’ll do to them as you peruse a menu of sex positions. This isn’t just another deepfake porn video, or the kind of interactive, 3D-generated porn Motherboard reported on last month, but a hybrid of both, which gives people even more control over women’s virtual bodies. This new type of nonconsensual porn uses custom 3D models that can be articulated and animated, which are then made to look exactly like specific celebrities with deepfaked faces. Until recently, deepfake porn consisted of taking the face of a person — usually a celebrity, almost always a woman — and swapping it onto the face of an adult performer in an existing porn video. With this method, a user can make a 3D avatar with a generic face, capture footage of it performing any kind of sexual act, then run that video through an algorithm that swaps the generic face with a real person’s.

Keep Your IoT Devices on a Separate Network, FBI Says

The FBI says owners of IoT (Internet of Things) devices should isolate this equipment on a separate WiFi network, different from the one they’re using for their primary devices, such as laptops, desktops, or smartphones. “Your fridge and your laptop should not be on the same network,” the FBI’s Portland office said in a weekly tech advice column. “Keep your most private, sensitive data on a separate system from your other IoT devices,” it added. The same advice — to keep devices on a separate WiFi network or LAN — has been shared in the past by multiple IT and security experts. The reasoning behind it is simple. By keeping all the IoT equipment on a separate network, any compromise of a “smart” device will not grant an attacker a direct route to a user’s primary devices — where most of their data is stored. Jumping across the two networks would require considerable effort from the attacker. However, placing primary devices and IoT devices on separate networks might not sound that easy for non-technical users. The simplest way is to use two routers.
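To make the advice concrete, here is a minimal sketch of the two-subnet plan the FBI’s guidance implies, with the router’s isolation rule modeled as a small predicate. The addresses are hypothetical, and a real deployment would enforce this in router or VLAN configuration rather than in Python.

```python
from ipaddress import ip_address, ip_network

# Hypothetical two-network layout: primary devices on one subnet,
# IoT gadgets quarantined on another.
PRIMARY_NET = ip_network("192.168.1.0/24")  # laptops, desktops, phones
IOT_NET = ip_network("192.168.2.0/24")      # smart TVs, cameras, the fridge

def forwarding_allowed(src, dst):
    """Model the router rule: drop any traffic crossing between the LANs.

    Both networks can still reach the internet; only IoT <-> primary
    forwarding is blocked, so a hacked gadget can't reach your laptop.
    """
    src, dst = ip_address(src), ip_address(dst)
    iot_to_primary = src in IOT_NET and dst in PRIMARY_NET
    primary_to_iot = src in PRIMARY_NET and dst in IOT_NET
    return not (iot_to_primary or primary_to_iot)

print(forwarding_allowed("192.168.2.50", "192.168.1.10"))  # False: blocked
print(forwarding_allowed("192.168.1.10", "8.8.8.8"))       # True: internet OK
```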

A Billion Surveillance Cameras Forecast To Be Watching Within Two Years

As governments and companies invest more in security networks, hundreds of millions more surveillance cameras will be watching the world in 2021, mostly in China, according to a new report. The report, from industry researcher IHS Markit, to be released Thursday, said the number of cameras used for surveillance would climb above 1 billion by the end of 2021. That would represent an almost 30% increase from the 770 million cameras today. China would continue to account for a little over half the total. Fast-growing, populous nations such as India, Brazil and Indonesia would also help drive growth in the sector, the report said. IHS analyst Oliver Philippou said government programs to implement widespread video surveillance to monitor the public would be the biggest catalyst for the growth in China. City surveillance also was driving demand elsewhere.

The US Is Suspected of Killing a Terrorist In Syria Using Missile With Knife Warhead

A suspected terrorist in Syria was reportedly killed with a rare U.S. missile packed with swords, according to multiple reports. The weapon that shredded the car did not explode. While the driver’s side was torn apart, the vehicle was actually mostly intact. The deadly precision weapon was, according to a report from the Wall Street Journal in May, designed by the U.S. to reduce civilian casualties. The Journal noted that the R9X has been used covertly, albeit rarely, against targets in Syria, Yemen and elsewhere since 2017.

US Police Already Using ‘Spot’ Robot From Boston Dynamics In the Real World

Massachusetts State Police (MSP) has been quietly testing ways to use the four-legged Boston Dynamics robot known as Spot, according to new documents obtained by the American Civil Liberties Union of Massachusetts. And while Spot isn’t equipped with a weapon just yet, the documents provide a terrifying peek at our RoboCop future.

The Spot robot, which was officially made available for lease to businesses last month, has been in use by MSP since at least April 2019 and has engaged in at least two police “incidents,” though it’s not clear what those incidents may have been. It’s also not clear whether the robots were being operated by a human controller or how much autonomous action the robots are allowed. MSP did not respond to Gizmodo’s emails on Monday morning.

The newly obtained documents, first reported by Ally Jarmanning at WBUR in Boston, include emails and contracts that shed some light on how police departments of the future may use robots to engage suspects without putting human police in harm’s way. In one document written by Lt. Robert G. Schumaker, robots are described as an “invaluable component of tactical operations” that are vital to support the state’s “Homeland Security Strategy.” […] The question that remains is whether the American public will simply accept robocops as our reality now. Unfortunately, it seems like we may not have any choice in the matter — especially when the only way that we can learn about this new robot-police partnership is through records requests by the ACLU. And even then, we’re still largely in the dark about how these things will be used.

Police Can Keep Ring Camera Video Forever, and Share With Whomever They’d Like

Police officers who download videos captured by homeowners’ Ring doorbell cameras can keep them forever and share them with whomever they’d like without providing evidence of a crime, the Amazon-owned firm told a lawmaker this month… Police in those communities can use Ring software to request up to 12 hours of video from anyone within half a square mile of a suspected crime scene, covering a 45-day time span, wrote Brian Huseman, Amazon’s vice president of public policy. Police are required to include a case number for the crime they are investigating, but not any other details or evidence related to the crime or their request.
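For a sense of the geographic reach of a single request: treating the half-square-mile area as a circle centered on the scene (an assumption; the article doesn’t specify the shape), the radius works out to roughly 0.4 miles, so every participating camera within about 640 meters is fair game.

```python
import math

# Radius of a half-square-mile request zone, assuming a circular area
# centered on the suspected crime scene (shape not specified by Ring).
area_sq_miles = 0.5
radius_miles = math.sqrt(area_sq_miles / math.pi)  # from area = pi * r^2
print(f"radius: {radius_miles:.2f} miles (~{radius_miles * 1609:.0f} m)")
# ~0.40 miles: any doorbell that close can be asked for 12 hours of video,
# drawn from any point in the preceding 45 days.
```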

Sen. Edward Markey, D-Mass., said in a statement that Ring’s policies showed that the company had failed to enact basic safeguards to protect Americans’ privacy. “Connected doorbells are well on their way to becoming a mainstay of American households, and the lack of privacy and civil rights protections for innocent residents is nothing short of chilling,” he said. “If you’re an adult walking your dog or a child playing on the sidewalk, you shouldn’t have to worry that Ring’s products are amassing footage of you and that law enforcement may hold that footage indefinitely or share that footage with any third parties.”

While Ring tells users not to film public roads or sidewalks, Ring isn’t enforcing that, according to the article. Amazon argues that it’s ultimately the user’s responsibility.

And will their cameras start using facial recognition algorithms? Amazon answers that the feature is “contemplated but unreleased,” though the company adds that “We do frequently innovate based on customer demand” and points out that other competing security cameras are already offering facial recognition.

Now Even the FBI is Warning About Your Smart TV’s Security

Smart TVs are like regular television sets but with an internet connection. With the advent and growth of Netflix, Hulu and other streaming services, most saw internet-connected televisions as a cord-cutter’s dream. But like anything that connects to the internet, that connection opens smart TVs up to security vulnerabilities and hackers. Not only that, many smart TVs come with a camera and a microphone. But as is the case with most other internet-connected devices, manufacturers often don’t make security a priority. That’s the key takeaway from the FBI’s Portland field office, which just ahead of some of the biggest shopping days of the year posted a warning on its website about the risks that smart TVs pose. “Beyond the risk that your TV manufacturer and app developers may be listening and watching you, that television can also be a gateway for hackers to come into your home. A bad cyber actor may not be able to access your locked-down computer directly, but it is possible that your unsecured TV can give him or her an easy way in the backdoor through your router,” wrote the FBI. The FBI warned that hackers can take control of your unsecured smart TV and, in the worst cases, take control of the camera and microphone to watch and listen in.

Doctors Are Turning To YouTube To Learn How To Do Surgical Procedures

Some doctors say that medical students and residents are turning to YouTube to fill in gaps in their training. The video-sharing platform hosts tens of thousands of surgery-related videos, and the number keeps climbing every year.

People have livestreamed giving birth and broadcast their face-lifts. One video, which shows the removal of a dense, white cataract, has gone somewhat viral and now has more than 1.7 million views. Others seem to have found crossover appeal with nonmedical viewers, such as a video from the U.K.-based group Audiology Associates showing a weirdly satisfying removal of a giant glob of earwax. Doctors are uploading these videos to market themselves or to help others in the field, and their number is growing by leaps and bounds. Researchers in January found more than 20,000 videos related to prostate surgery alone, compared with just 500 videos in 2009.

The videos are a particular boon for doctors in training. When the University of Iowa surveyed its surgeons, including its fourth-year medical students and residents, it found that YouTube was by far the most-used video source for surgical preparation. But residents and medical students are not the only ones tuning in: experienced doctors, like Stanford Hospital vascular surgeon Dr. Oliver Aalami, watch too. Aalami said he turned to YouTube recently ahead of a particularly difficult exposure. There’s one problem with this practice that will be familiar to anybody who’s searched YouTube for tips on more mundane tasks like household repairs: how can doctors tell which videos are valid and which contain bogus information?

“[O]ne recent study found more than 68,000 videos associated with a common procedure known as a distal radius fracture immobilization,” the report adds. “The researchers evaluated the content for the technical and educational skill demonstrated, and created a score. Only 16 of the videos even met basic criteria, including whether they were performed by a health-care professional or institution. Among those, the scores were mixed. In several cases, the credentials of the person performing the procedure could not be identified at all.”

Other studies have found that YouTube’s algorithm ranks videos highly even when the technique they demonstrate isn’t optimal.

Most Americans Think They’re Being Constantly Tracked, Study Finds

More than 60% of Americans think it’s impossible to go through daily life without being tracked by companies or the government, according to a new Pew Research study. It’s not just that Americans (correctly) think companies are collecting their data. They don’t like it. About 69% of Americans are skeptical that companies will use their private information in a way they’re comfortable with, while 79% don’t believe that companies will come clean if they misuse the information. When it comes to who they trust, there are differences by race. About 73% of black Americans, for instance, are at least a little worried about what law enforcement knows about them, compared with 56% of white Americans. But among all respondents, more than 80% were concerned about what social-media sites and advertisers might know. Despite these concerns, more than 80% of Americans feel they have no control over how their information is collected.

Facebook, Google Donate Heavily To Privacy Advocacy Groups

Few companies have more riding on proposed privacy legislation than Alphabet’s Google and Facebook. To try to steer the bill their way, the giant advertising technology companies spend millions of dollars each year on lobbying, a fact confirmed by government filings. Not so well documented is their spending to support highly influential think tanks and public interest groups that are helping shape the privacy debate, ostensibly as independent observers. Bloomberg Law examined seven prominent nonprofit think tanks that work on privacy issues, which received a total of $1.5 million over an 18-month period ending Dec. 31, 2018. The groups included such organizations as the Center for Democracy and Technology, the Future of Privacy Forum and the Brookings Institution. The actual total is undoubtedly much higher — exact totals for contributions were difficult to pin down. The tech giants have “funded scores of nonprofits, including consumer and privacy groups, and academics,” said Jeffrey Chester, executive director at the Center for Digital Democracy, a public interest group that does not accept donations from Google or Facebook. Further, he says, their influence is strong. The companies have “opposed federal privacy laws and worked to weaken existing safeguards,” Chester said. Accepting donations from these “privacy-killing companies enables them to influence decisions by nonprofits, even subtly,” he said.

College Students Say Ditching Their Smartphones For a Week Changed Their Lives

It was part of a college course intended to break the powerful addiction to smartphones: an Adelphi University course called “Life Unplugged,” where students did the unthinkable one week ago and handed over their smartphones. “I’m freaking out, I could probably cry right now,” one student said. It was a bold experiment in recognizing today’s compulsive relationships with ever-present devices. Seven days later: “Who’s excited they’re getting their phones back today?” Professor Donna Freitas asked.

Gone were the nerves and the shakes. “Everything is perfect right now. I’m having a lot better relationships… it’s a stress free environment no pressure about social media,” Jacob Dannenberg said.

“I think it’s really refreshing and relaxing… I was able to fall asleep a lot easier,” said student Adrianna Cigliano.

They managed to find their way, even without GPS for a week. “I just had to take the same route everywhere,” one student joked. They were also more productive. “Doing homework was 100 percent easier. I got it done faster, I was in the zone,” Cigliano said.

Prof. Freitas says it’s important for everyone to assess their addiction. “Are the conveniences worth it? Because the drawbacks are pretty significant,” Freitas said. “The fact that no one can focus, that my students can’t sleep… They feel bad about themselves because of social media, the list goes on and on.”

Nearly Half of Parents Worry Their Child Is Addicted To Mobile Devices, Study Finds

According to a new survey from Common Sense Media and SurveyMonkey, 47% of parents worry their child is addicted to their mobile device. By comparison, only 32% of parents say they’re addicted themselves. USA Today reports: Half of parents also say they are at least somewhat concerned about how mobile devices will affect their kids’ mental health. Nearly one in five say they’re “extremely” or “very” concerned. According to the survey, 89% of parents believe it’s up to them to curb their children’s smartphone usage. The survey conducted between Jan. 25 and Jan. 29 included a sample of 4,201 adults, including 1,024 parents with children under age 18. Data was weighted to reflect the demographic composition of the U.S. for adults over 18, based on Census data. Many devices and services feature parental controls, but some parents may not be aware they exist. The Common Sense-SurveyMonkey survey found 22% of parents did not know YouTube — which has faced scrutiny over how easy it is for kids to find inappropriate videos — offered parental controls. Also, 37% have not used the controls before. Among parents surveyed who say their kids watch YouTube videos, 62% said their kids have seen inappropriate videos on the site. Most, or 81%, said it’s the parents’ job to prevent kids from seeing these videos.

NYPD Kept an Illegal Database of Juvenile Fingerprints For Years

For years, the New York Police Department illegally maintained a database containing the fingerprints of thousands of children charged as juvenile delinquents, in direct violation of state law mandating that police destroy these records after turning them over to the state’s Division of Criminal Justice Services. When lawyers representing some of those youths discovered the violation, the police department dragged its feet, at first denying but eventually admitting that it was retaining prints it was supposed to have destroyed. Since 2015, attorneys with the Legal Aid Society, which represents the majority of youths charged in New York City family courts, had been locked in a battle with the police department over retention of the fingerprint records of children under the age of 16. The NYPD did not answer questions from The Intercept about its handling of the records, but according to Legal Aid, the police department confirmed to the organization last week that the database had been destroyed. To date, the department has made no public admission of wrongdoing, nor has it notified the thousands of people it impacted, although it has changed its fingerprint retention practices following Legal Aid’s probing. “The NYPD can confirm that the department destroys juvenile delinquent fingerprints after the prints have been transmitted to DCJS,” a police spokesperson wrote in a statement to The Intercept.

Still, the way the department handled the process (resisting transparency and stalling even after being threatened with legal action) raises concerns about how police handle a growing number of databases of personal information, including DNA and data obtained through facial recognition technology. As The Intercept has reported extensively, the NYPD also maintains a secretive and controversial “gang database,” which labels thousands of unsuspecting New Yorkers, almost all black or Latino youth, as “gang members” based on a set of broad and arbitrary criteria. The fact that police were able to violate the law around juvenile fingerprints for years without consequence underscores the need for greater transparency and accountability, which critics say can only come from independent oversight of the department.

It’s unclear how long the NYPD was illegally retaining these fingerprints, but the report says the state has been using the Automated Fingerprint Identification System since 1989, “and laws protecting juvenile delinquent records have been in place since at least 1977.” Legal Aid lawyers estimate that tens of thousands of juveniles could have had their fingerprints illegally retained by police.