Resources

Chinese Newspaper Touts Videogame Where Players ‘Hunt Down Traitors’ in Hong Kong

An online game calling on players to hunt down traitors who seek to separate Hong Kong from China and fuel street violence has reportedly begun to attract players across Chinese mainland social media platforms. The game, “fight the traitors together,” is set against the backdrop of the social unrest that has persisted in Hong Kong. The script asks the player to find eight secessionists hidden in the crowd participating in Hong Kong protests.

Players can knock them down with slaps or rotten eggs until they are captured. Online gamers claim the game lets them vent their anger at the separatist behavior of secessionists during the recent Hong Kong riots. The eight traitors in the game, caricatured based on real people, include Jimmy Lai Chee-ying, Martin Lee Chu-ming and Joshua Wong Chi-fung, prominent opposition figures who the paper says have played a major role in inciting unrest in Hong Kong. The game also includes traitor figures from ancient China…

In the game, amid a crowd of black-clad rioters wearing yellow hats and face masks, Anson Chan Fang On-sang, another leading opposition figure, carries a bag with a U.S. flag, clutches a stack of U.S. dollars and holds a loudspeaker to incite violence in the streets.

Cops Around the World Are Using An Outlandish Mind-Reading Tool

ProPublica reports that dozens of state and local agencies have purchased “SCAN” training from a company called LSI for reviewing a suspect’s written statements — even though there’s no scientific evidence that it works.

Local, state and federal agencies from the Louisville Metro Police Department to the Michigan State Police to the U.S. State Department have paid for SCAN training. The LSI website lists 417 agencies nationwide, from small-town police departments to the military, that have been trained in SCAN — and that list isn’t comprehensive, because additional ones show up in procurement databases and in public records obtained by ProPublica. Other training recipients include law enforcement agencies in Australia, Belgium, Canada, Israel, Mexico, the Netherlands, Singapore, South Africa and the United Kingdom, among others…

For Avinoam Sapir, the creator of SCAN, sifting truth from deception is as simple as one, two, three.

1. Give the subject a pen and paper.
2. Ask the subject to write down his/her version of what happened.
3. Analyze the statement and solve the case.

Those steps appear on the website for Sapir’s company, based in Phoenix. “SCAN Unlocks the Mystery!” the homepage says, alongside a logo of a question mark stamped on someone’s brain. The site includes dozens of testimonials with no names attached. “Since January when I first attended your course, everybody I meet just walks up to me and confesses!” one says. Another testimonial says “The Army finally got its money’s worth…” SCAN saves time, the site says. It saves money. Police can fax a questionnaire to a hundred people at once, the site says. Those hundred people can fax it back “and then, in less than an hour, the investigator will be able to review the questionnaires and solve the case.”

In 2009, the U.S. government created a special interagency task force, drawing on the FBI, CIA and the U.S. Department of Defense, to review scientific studies and independently investigate which interrogation techniques worked. “When all 12 SCAN criteria were used in a laboratory study, SCAN did not distinguish truth-tellers from liars above the level of chance,” the review said. It also challenged two of the method’s 12 criteria: “Both gaps in memory and spontaneous corrections have been shown to be indicators of truth, contrary to what is claimed by SCAN.”
In a footnote, the review identified three specific agencies that use SCAN: the FBI, CIA and U.S. Army military intelligence, which falls under the Department of Defense…

In 2016, the same year the federal task force released its review of interrogation techniques, four scholars published a study on SCAN in the journal Frontiers in Psychology. The authors — three from the Netherlands, one from England — noted that there had been only four prior studies in peer-reviewed journals on SCAN’s effectiveness. Each of those studies (in 1996, 2012, 2014 and 2015) concluded that SCAN failed to help discriminate between truthful and fabricated statements. The 2016 study found the same. Raters trained in SCAN evaluated 234 statements — 117 true, 117 false. Their results in trying to separate fact from fiction were about the same as chance…

Steven Drizin, a Northwestern University law professor who specializes in wrongful convictions, said SCAN and assorted other lie-detection tools suffer from “over-claim syndrome” — big claims made without scientific grounding. Asked why police would trust such tools, Drizin said: “A lot has to do with hubris — a belief on the part of police officers that they can tell when someone is lying to them with a high degree of accuracy. These tools play into that belief and confirm that belief.”

SCAN’s creator declined to be interviewed for the story, but ProPublica spoke to some users of the technique. Travis Marsh, the head of an Indiana sheriff’s department, has used the tool for nearly two decades, while acknowledging that he can’t explain how it works: “It really is, for lack of a better term, a faith-based system, because you can’t see behind the curtain.”

ProPublica also reports that “Years ago his wife left a note saying she and the kids were off doing one thing, whereas Marsh, analyzing her writing, could tell they had actually gone shopping. His wife has not left him another note in at least 15 years…”

Scientists Propose Destroying Mountains To Build a New Type of Battery For Long-Term Energy Storage

“One of the big challenges of making 100 percent renewable energy [sic] a reality is long-term storage,” says Julian Hunt, an engineering scientist at the International Institute for Applied Systems Analysis in Austria. Hunt and his collaborators have devised a novel system to complement lithium-ion battery use for energy storage over the long run: Mountain Gravity Energy Storage, or MGES for short. Similar to hydroelectric power, MGES involves storing material at elevation to produce gravitational energy. The energy is recovered when the stored material falls and turns turbines to generate electricity. The group describes its system in a paper published November 6 in Energy.

“Instead of building a dam, we propose building a big sand or gravel reservoir,” explains Hunt. The key to MGES lies in finding two mountaintop sites that have a suitable difference in elevation — 1,000 meters is ideal. “The greater the height difference, the cheaper the technology,” he says. The sites will look similar, with each comprised of a mine-like station to store the sand or gravel, and a filling station directly below it. Valves release the material into waiting vessels, which are then transported via cranes and motor-run cables to the upper site. There, the sand or gravel is stored — for weeks, months, or even years — until it’s ready to be used. When the material is moved back down the mountain, that stored gravitational energy is released and converted into electrical energy.

Not only is the system more environmentally friendly [sic] than pumped-storage hydropower and dams, but it is also more flexible in meeting varying energy demands.

“Hunt estimates that the annual cost of storing energy via this system will vary between $50 and $100 per megawatt-hour (MWh),” the report adds. “And he says that the energy expended to transport materials to the upper site will be offset by the amount of gravitational energy the system produces.”
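The gravitational-potential arithmetic behind these estimates is easy to sketch. The following Python snippet is a rough back-of-the-envelope illustration (the function name and figures are ours, not from Hunt's paper), computing how much sand must be lifted through the “ideal” 1,000-meter elevation difference to store one megawatt-hour:

```python
# Back-of-the-envelope: energy stored by lifting mass m through height h
# is E = m * g * h (joules). 1 MWh = 3.6e9 J.
G = 9.81          # standard gravity, m/s^2
HEIGHT_M = 1000   # the "ideal" elevation difference cited above

def stored_energy_mwh(mass_kg: float, height_m: float = HEIGHT_M) -> float:
    """MWh of gravitational energy stored by raising mass_kg through height_m."""
    return mass_kg * G * height_m / 3.6e9

# Mass needed to store 1 MWh with a 1,000 m drop: roughly 367 metric tons.
mass_per_mwh_kg = 3.6e9 / (G * HEIGHT_M)
print(f"{mass_per_mwh_kg / 1000:.0f} t of sand per MWh")
print(f"{stored_energy_mwh(1_000_000):.2f} MWh per 1,000 t")
```

At roughly 367 tonnes of material per megawatt-hour, the appeal of tall mountains is clear: halving the height difference doubles the mass that must be hauled for the same storage, which is why Hunt says a greater height difference makes the technology cheaper.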

Deepfake Porn Is Total Control Over Women’s Bodies

A lineup of female celebrities stands in front of you. Their faces move, smile, and blink as you move around them. They’re fully nude, hairless, waiting for you to decide what you’ll do to them as you peruse a menu of sex positions. This isn’t just another deepfake porn video, or the kind of interactive, 3D-generated porn Motherboard reported on last month, but a hybrid of both, which gives people even more control over women’s virtual bodies. This new type of nonconsensual porn uses custom 3D models that can be articulated and animated, which are then made to look exactly like specific celebrities with deepfaked faces. Until recently, deepfake porn consisted of taking the face of a person — usually a celebrity, almost always a woman — and swapping it onto the face of an adult performer in an existing porn video. With this method, a user can make a 3D avatar with a generic face, capture footage of it performing any kind of sexual act, then run that video through an algorithm that swaps the generic face with a real person’s.

Keep Your IoT Devices on a Separate Network, FBI Says

The FBI says owners of IoT (Internet of Things) devices should isolate this equipment on a separate WiFi network, different from the one they’re using for their primary devices, such as laptops, desktops, or smartphones. “Your fridge and your laptop should not be on the same network,” the FBI’s Portland office said in a weekly tech advice column. “Keep your most private, sensitive data on a separate system from your other IoT devices,” it added. The same advice — to keep devices on a separate WiFi network or LAN — has been shared in the past by multiple IT and security experts. The reasoning behind it is simple. By keeping all the IoT equipment on a separate network, any compromise of a “smart” device will not grant an attacker a direct route to a user’s primary devices — where most of their data is stored. Jumping across the two networks would require considerable effort from the attacker. However, placing primary devices and IoT devices on separate networks might not sound that easy for non-technical users. The simplest way is to use two routers.
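The FBI's reasoning can be illustrated in a few lines of Python with the standard ipaddress module (the private subnet addresses below are hypothetical examples, not a recommended configuration):

```python
import ipaddress

# Hypothetical two-router setup: primary devices on one private subnet,
# IoT devices on another. A device talks directly only to hosts on its
# own subnet; crossing over means going through the router between them.
primary_net = ipaddress.ip_network("192.168.1.0/24")  # laptops, phones
iot_net = ipaddress.ip_network("192.168.2.0/24")      # TVs, fridges, cameras

laptop = ipaddress.ip_address("192.168.1.10")
smart_tv = ipaddress.ip_address("192.168.2.50")

print(laptop in primary_net)          # True
print(smart_tv in primary_net)        # False: TV has no peer on this subnet
print(iot_net.overlaps(primary_net))  # False: the two networks are disjoint
```

Because the two /24 networks don't overlap, a compromised device on the IoT subnet has no direct path to hosts on the primary subnet; reaching them means traversing (and defeating) the router in between, which is exactly the extra hurdle the FBI's advice relies on.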

A Billion Surveillance Cameras Forecast To Be Watching Within Two Years

As governments and companies invest more in security networks, hundreds of millions more surveillance cameras will be watching the world in 2021, mostly in China, according to a new report. The report, from industry researcher IHS Markit, to be released Thursday, said the number of cameras used for surveillance would climb above 1 billion by the end of 2021. That would represent an almost 30% increase from the 770 million cameras today. China would continue to account for a little over half the total. Fast-growing, populous nations such as India, Brazil and Indonesia would also help drive growth in the sector, the report said. IHS analyst Oliver Philippou said government programs to implement widespread video surveillance to monitor the public would be the biggest catalyst for the growth in China. City surveillance also was driving demand elsewhere.

The US Is Suspected of Killing a Terrorist In Syria Using Missile With Knife Warhead

A suspected terrorist in Syria was reportedly killed with a rare U.S. missile packed with swords, according to multiple reports. The weapon that shredded the car did not explode. While the driver’s side was torn apart, the vehicle was actually mostly intact. The deadly precision weapon was, according to a report from the Wall Street Journal in May, designed by the U.S. to reduce civilian casualties. The Journal noted that the R9X has been used covertly, albeit rarely, against targets in Syria, Yemen and elsewhere since 2017.

US Police Already Using ‘Spot’ Robot From Boston Dynamics In the Real World

Massachusetts State Police (MSP) has been quietly testing ways to use the four-legged Boston Dynamics robot known as Spot, according to new documents obtained by the American Civil Liberties Union of Massachusetts. And while Spot isn’t equipped with a weapon just yet, the documents provide a terrifying peek at our RoboCop future.

The Spot robot, which was officially made available for lease to businesses last month, has been in use by MSP since at least April 2019 and has engaged in at least two police “incidents,” though it’s not clear what those incidents may have been. It’s also not clear whether the robots were being operated by a human controller or how much autonomous action the robots are allowed. MSP did not respond to Gizmodo’s emails on Monday morning.

The newly obtained documents, first reported by Ally Jarmanning at WBUR in Boston, include emails and contracts that shed some light on how police departments of the future may use robots to engage suspects without putting human police in harm’s way. In one document written by Lt. Robert G. Schumaker robots are described as an “invaluable component of tactical operations” that are vital to support the state’s “Homeland Security Strategy.” […] The question that remains is whether the American public will simply accept robocops as our reality now. Unfortunately, it seems like we may not have any choice in the matter — especially when the only way that we can learn about this new robot-police partnership is through records requests by the ACLU. And even then, we’re still largely in the dark about how these things will be used.

Police Can Keep Ring Camera Video Forever, and Share With Whomever They’d Like

Police officers who download videos captured by homeowners’ Ring doorbell cameras can keep them forever and share them with whomever they’d like without providing evidence of a crime, the Amazon-owned firm told a lawmaker this month… Police in those communities can use Ring software to request up to 12 hours of video from anyone within half a square mile of a suspected crime scene, covering a 45-day time span, wrote Brian Huseman, Amazon’s vice president of public policy. Police are required to include a case number for the crime they are investigating, but not any other details or evidence related to the crime or their request.

Sen. Edward Markey, D-Mass., said in a statement that Ring’s policies showed that the company had failed to enact basic safeguards to protect Americans’ privacy. “Connected doorbells are well on their way to becoming a mainstay of American households, and the lack of privacy and civil rights protections for innocent residents is nothing short of chilling,” he said. “If you’re an adult walking your dog or a child playing on the sidewalk, you shouldn’t have to worry that Ring’s products are amassing footage of you and that law enforcement may hold that footage indefinitely or share that footage with any third parties.”

While Ring tells users not to film public roads or sidewalks, it isn’t enforcing that rule, according to the article. Amazon argues that compliance is ultimately the user’s responsibility.

And will Ring’s cameras start using facial recognition algorithms? Amazon answers that the feature is “contemplated but unreleased,” though it adds that “We do frequently innovate based on customer demand,” and points out that other competing security cameras already offer facial recognition.

Now Even the FBI is Warning About Your Smart TV’s Security

Smart TVs are like regular television sets but with an internet connection. With the advent and growth of Netflix, Hulu and other streaming services, most saw internet-connected televisions as a cord-cutter’s dream. But like anything that connects to the internet, the connection opens smart TVs up to security vulnerabilities and hackers. Not only that, many smart TVs come with a camera and a microphone. And as is the case with most other internet-connected devices, manufacturers often don’t treat security as a priority. That’s the key takeaway from the FBI’s Portland field office, which just ahead of some of the biggest shopping days of the year posted a warning on its website about the risks that smart TVs pose. “Beyond the risk that your TV manufacturer and app developers may be listening and watching you, that television can also be a gateway for hackers to come into your home. A bad cyber actor may not be able to access your locked-down computer directly, but it is possible that your unsecured TV can give him or her an easy way in the backdoor through your router,” wrote the FBI. The FBI warned that hackers can take control of your unsecured smart TV and, in the worst cases, take control of the camera and microphone to watch and listen in.

Doctors Are Turning To YouTube To Learn How To Do Surgical Procedures

Some doctors say that medical students and residents are turning to YouTube to fill in gaps in their training. The video-sharing platform hosts tens of thousands of surgery-related videos, and the number keeps climbing every year.

People have livestreamed giving birth and broadcast their face-lifts. One video, which shows the removal of a dense, white cataract, has gone somewhat viral and now has more than 1.7 million views. Others seem to have found crossover appeal with nonmedical viewers, such as a video from the U.K.-based group Audiology Associates showing a weirdly satisfying removal of a giant glob of earwax. Doctors are uploading these videos to market themselves or to help others in the field, and the amount is growing by leaps and bounds. Researchers in January found more than 20,000 videos related to prostate surgery alone, compared with just 500 videos in 2009.

The videos are a particular boon for doctors in training. When the University of Iowa surveyed its surgeons, including its fourth-year medical students and residents, it found that YouTube was by far the most-used video source for surgical preparation. But residents and medical students are not the only ones tuning in. Experienced doctors watch too: Stanford Hospital vascular surgeon Dr. Oliver Aalami said he turned to YouTube recently ahead of a particularly difficult exposure. There’s one problem with this practice that will be familiar to anybody who’s searched YouTube for tips on more mundane tasks like household repairs: How can doctors tell which videos are valid and which contain bogus information?

“[O]ne recent study found more than 68,000 videos associated with a common procedure known as a distal radius fracture immobilization,” the report adds. “The researchers evaluated the content for the technical skill demonstrated and its educational value, and created a score. Only 16 of the videos even met basic criteria, including whether they were performed by a health-care professional or institution. Among those, the scores were mixed. In several cases, the credentials of the person performing the procedure could not be identified at all.”

Other studies are finding that YouTube’s algorithm is highly ranking videos where the technique isn’t optimal.

Most Americans Think They’re Being Constantly Tracked, Study Finds

More than 60% of Americans think it’s impossible to go through daily life without being tracked by companies or the government, according to a new Pew Research study. It’s not just that Americans (correctly) think companies are collecting their data. They don’t like it. About 69% of Americans are skeptical that companies will use their private information in a way they’re comfortable with, while 79% don’t believe that companies will come clean if they misuse the information. When it comes to who they trust, there are differences by race. About 73% of black Americans, for instance, are at least a little worried about what law enforcement knows about them, compared with 56% of white Americans. But among all respondents, more than 80% were concerned about what social-media sites and advertisers might know. Despite these concerns, more than 80% of Americans feel they have no control over how their information is collected.

Facebook, Google Donate Heavily To Privacy Advocacy Groups

Few companies have more riding on proposed privacy legislation than Alphabet’s Google and Facebook. To try to steer the bill their way, the giant advertising technology companies spend millions of dollars each year to lobby, a fact confirmed by government filings. Not so well documented is their spending to support highly influential think tanks and public interest groups that are helping shape the privacy debate, ostensibly as independent observers. Bloomberg Law examined seven prominent nonprofit think tanks that work on privacy issues; they received a total of $1.5 million over an 18-month period ending Dec. 31, 2018. The groups included such organizations as the Center for Democracy and Technology, the Future of Privacy Forum and the Brookings Institution. The actual total is undoubtedly much higher — exact totals for contributions were difficult to pin down. The tech giants have “funded scores of nonprofits, including consumer and privacy groups, and academics,” said Jeffrey Chester, executive director at the Center for Digital Democracy, a public interest group that does not accept donations from Google or Facebook. Further, he says, their influence is strong. The companies have “opposed federal privacy laws and worked to weaken existing safeguards,” Chester said. Accepting donations from these “privacy-killing companies” enables them to influence decisions by nonprofits, even subtly, he said.

College Students Say Ditching Their Smartphones For a Week Changed Their Lives

It was part of a college course intended to break the powerful addiction to smartphones: an Adelphi University course called “Life Unplugged,” where students did the unthinkable one week ago — they handed over their smartphones. “I’m freaking out, I could probably cry right now,” one student said. It was a bold experiment meant to confront today’s compulsive relationships with ever-present devices. Seven days later: “Who’s excited they’re getting their phones back today?” Professor Donna Freitas asked.

Gone were the nerves and the shakes. “Everything is perfect right now. I’m having a lot better relationships… it’s a stress-free environment, no pressure about social media,” Jacob Dannenberg said.

“I think it’s really refreshing and relaxing… I was able to fall asleep a lot easier,” said student Adrianna Cigliano.

They managed to find their way, even without GPS for a week. “I just had to take the same route everywhere,” one student joked. They were also more productive. “Doing homework was 100 percent easier. I got it done faster, I was in the zone,” Cigliano said.

Prof. Freitas says it’s important for everyone to assess their addiction. “Are the conveniences worth it? Because the drawbacks are pretty significant,” Freitas said. “The fact that no one can focus, that my students can’t sleep… they feel bad about themselves because of social media, the list goes on and on.”

Nearly Half of Parents Worry Their Child Is Addicted To Mobile Devices, Study Finds

According to a new survey from Common Sense Media and SurveyMonkey, 47% of parents worry their child is addicted to their mobile device. By comparison, only 32% of parents say they’re addicted themselves. USA Today reports: Half of parents also say they are at least somewhat concerned about how mobile devices will affect their kids’ mental health. Nearly one in five say they’re “extremely” or “very” concerned. According to the survey, 89% of parents believe it’s up to them to curb their children’s smartphone usage. The survey conducted between Jan. 25 and Jan. 29 included a sample of 4,201 adults, including 1,024 parents with children under age 18. Data was weighted to reflect the demographic composition of the U.S. for adults over 18, based on Census data. Many devices and services feature parental controls, but some parents may not be aware they exist. The Common Sense-SurveyMonkey survey found 22% of parents did not know YouTube — which has faced scrutiny over how easy it is for kids to find inappropriate videos — offered parental controls. Also, 37% have not used the controls before. Among parents surveyed who say their kids watch YouTube videos, 62% said their kids have seen inappropriate videos on the site. Most, or 81%, said it’s the parents’ job to prevent kids from seeing these videos.

NYPD Kept an Illegal Database of Juvenile Fingerprints For Years

For years, the New York Police Department illegally maintained a database containing the fingerprints of thousands of children charged as juvenile delinquents–in direct violation of state law mandating that police destroy these records after turning them over to the state’s Division of Criminal Justice Services. When lawyers representing some of those youths discovered the violation, the police department dragged its feet, at first denying but eventually admitting that it was retaining prints it was supposed to have destroyed. Since 2015, attorneys with the Legal Aid Society, which represents the majority of youths charged in New York City family courts, had been locked in a battle with the police department over retention of the fingerprint records of children under the age of 16. The NYPD did not answer questions from The Intercept about its handling of the records, but according to Legal Aid, the police department confirmed to the organization last week that the database had been destroyed. To date, the department has made no public admission of wrongdoing, nor has it notified the thousands of people it impacted, although it has changed its fingerprint retention practices following Legal Aid’s probing. “The NYPD can confirm that the department destroys juvenile delinquent fingerprints after the prints have been transmitted to DCJS,” a police spokesperson wrote in a statement to The Intercept.

Still, the way the department handled the process–resisting transparency and stalling even after being threatened with legal action–raises concerns about how police handle a growing number of databases of personal information, including DNA and data obtained through facial recognition technology. As The Intercept has reported extensively, the NYPD also maintains a secretive and controversial “gang database,” which labels thousands of unsuspecting New Yorkers–almost all black or Latino youth–as “gang members” based on a set of broad and arbitrary criteria. The fact that police were able to violate the law around juvenile fingerprints for years without consequence underscores the need for greater transparency and accountability, which critics say can only come from independent oversight of the department.

It’s unclear how long the NYPD was illegally retaining these fingerprints, but the report says the state has been using the Automated Fingerprint Identification System since 1989, “and laws protecting juvenile delinquent records have been in place since at least 1977.” Legal Aid lawyers estimate that tens of thousands of juveniles could have had their fingerprints illegally retained by police.

Facebook Says Government Demands For User Data Are at a Record High

Government demands for user data increased by 16% to 128,617 demands during the first half of this year compared to the second half of last year. That’s the highest number of government demands it’s received in any reporting period since it published its first transparency report in 2013. The U.S. government led the way with the most requests — 50,741 demands for user data, resulting in some account or user data being handed to authorities in 88% of cases. Facebook said two-thirds of all of the U.S. government’s requests came with a gag order, preventing the company from telling the user about the request for their data. But Facebook said it was able to release details of 11 so-called national security letters (NSLs) for the first time after their gag provisions were lifted during the period. National security letters can compel companies to turn over non-content data at the request of the FBI. These letters are not approved by a judge, and often come with a gag order preventing their disclosure. But since the Freedom Act passed in 2015, companies have been allowed to request the lifting of those gag orders.

Next in Google’s Quest for Consumer Dominance–Banking

The project, code-named Cache, is expected to launch next year with accounts run by Citigroup and a credit union at Stanford University, a tiny lender in Google’s backyard. Big tech companies see financial services as a way to get closer to users and glean valuable data. Apple introduced a credit card this summer. Amazon.com has talked to banks about offering checking accounts. Facebook is working on a digital currency it hopes will upend global payments. Their ambitions could challenge incumbent financial-services firms, which fear losing their primacy and customers. They are also likely to stoke a reaction in Washington, where regulators are already investigating whether large technology companies have too much clout.

The tie-ups between banking and technology have sometimes been fraught. Apple irked its credit-card partner, Goldman Sachs Group, by running ads that said the card was “designed by Apple, not a bank.” Major financial companies dropped out of Facebook’s crypto project after a regulatory backlash. Google’s approach seems designed to make allies, rather than enemies, in both camps. The financial institutions’ brands, not Google’s, will be front-and-center on the accounts, an executive told The Wall Street Journal. And Google will leave the financial plumbing and compliance to the banks — activities it couldn’t do without a license anyway.

Facebook Bug Has Camera Activated While People Are Using the App

When you’re scrolling through Facebook’s app, the social network could be watching you back, in more ways than just your data, concerned users have found. Multiple people have found and reported that their iPhone cameras were turned on in the background while they were looking at their feed. The issue came to light through several posts on Twitter showing that users’ cameras were activated behind Facebook’s app as they watched videos or looked at photos on the social network. After expanding a video to full screen, returning it to normal would trigger a bug in which Facebook’s mobile layout was slightly shifted to the right. In the open space on the left, you could see the phone’s camera activated in the background. This was documented in multiple cases, with the earliest incident on November 2.

Google’s Secret ‘Project Nightingale’ Gathers Personal Health Data on Millions of Americans

Google is teaming with one of the country’s largest health-care systems on a secret project to collect and crunch the detailed personal health information of millions of Americans across 21 states, WSJ reported Monday, citing people familiar with the matter and internal documents.

The initiative, code-named “Project Nightingale,” appears to be the largest in a series of efforts by Silicon Valley giants to gain access to personal health data and establish a toehold in the massive health-care industry. Amazon.com, Apple and Microsoft are also aggressively pushing into health care, though they haven’t yet struck deals of this scope. Google launched the effort last year with St. Louis-based Ascension, the country’s second-largest health system. The data involved in Project Nightingale includes lab results, doctor diagnoses and hospitalization records, among other categories, and amounts to a full health history, complete with patient names and dates of birth.

Neither patients nor doctors have been notified. At least 150 Google employees already have access to much of the data on tens of millions of patients, according to a person familiar with the matter and the documents.

Google in this case is using the data in part to design new software, underpinned by advanced artificial intelligence and machine learning.

Google appears to be sharing information within Project Nightingale more broadly than in its other forays into health-care data. In September, Google announced a 10-year deal with the Mayo Clinic to store the hospital system’s genetic, medical and financial records.

Google co-founder Larry Page, in a 2014 interview, suggested that patients worried about the privacy of their medical records were too cautious. Mr. Page said: “We’re not really thinking about the tremendous good that can come from people sharing information with the right people in the right ways.”