Archives 2019

Now Even the FBI is Warning About Your Smart TV’s Security

Smart TVs are regular television sets with an internet connection. With the advent and growth of Netflix, Hulu and other streaming services, most saw internet-connected televisions as a cord-cutter’s dream. But like anything that connects to the internet, smart TVs are exposed to security vulnerabilities and hackers. Not only that, many smart TVs come with a camera and a microphone. And as is the case with most other internet-connected devices, manufacturers often don’t make security a priority. That’s the key takeaway from the FBI’s Portland field office, which, just ahead of some of the biggest shopping days of the year, posted a warning on its website about the risks that smart TVs pose. “Beyond the risk that your TV manufacturer and app developers may be listening and watching you, that television can also be a gateway for hackers to come into your home. A bad cyber actor may not be able to access your locked-down computer directly, but it is possible that your unsecured TV can give him or her an easy way in the backdoor through your router,” wrote the FBI. The FBI warned that hackers can take control of an unsecured smart TV and, in the worst cases, take over its camera and microphone to watch and listen in.

Doctors Are Turning To YouTube To Learn How To Do Surgical Procedures

Some doctors say that medical students and residents are turning to YouTube to fill in gaps in their training. The video-sharing platform hosts tens of thousands of surgery-related videos, and the number keeps climbing every year.

People have livestreamed giving birth and broadcast their face-lifts. One video, which shows the removal of a dense, white cataract, has gone somewhat viral and now has more than 1.7 million views. Others seem to have found crossover appeal with nonmedical viewers, such as a video from the U.K.-based group Audiology Associates showing a weirdly satisfying removal of a giant glob of earwax. Doctors are uploading these videos to market themselves or to help others in the field, and their number is growing by leaps and bounds. Researchers in January found more than 20,000 videos related to prostate surgery alone, compared with just 500 videos in 2009.

The videos are a particular boon for doctors in training. When the University of Iowa surveyed its surgeons, including its fourth-year medical students and residents, it found that YouTube was by far the most-used video source for surgical preparation. But residents and medical students are not the only ones tuning in. Experienced doctors watch too: Stanford Hospital vascular surgeon Dr. Oliver Aalami said he turned to YouTube recently ahead of a particularly difficult exposure. There’s one problem with this practice that will be familiar to anybody who’s searched YouTube for tips on more mundane tasks like household repairs: How can doctors tell which videos are valid and which contain bogus information?

“[O]ne recent study found more than 68,000 videos associated with a common procedure known as a distal radius fracture immobilization,” the report adds. “The researchers evaluated the content for their technical skill demonstrated and educational skill, and created a score. Only 16 of the videos even met basic criteria, including whether they were performed by a health-care professional or institution. Among those, the scores were mixed. In several cases, the credentials of the person performing the procedure could not be identified at all.”

Other studies are finding that YouTube’s algorithm gives high rankings to videos in which the technique shown isn’t optimal.

Most Americans Think They’re Being Constantly Tracked, Study Finds

More than 60% of Americans think it’s impossible to go through daily life without being tracked by companies or the government, according to a new Pew Research study. It’s not just that Americans (correctly) think companies are collecting their data. They don’t like it. About 69% of Americans are skeptical that companies will use their private information in a way they’re comfortable with, while 79% don’t believe that companies will come clean if they misuse the information. When it comes to who they trust, there are differences by race. About 73% of black Americans, for instance, are at least a little worried about what law enforcement knows about them, compared with 56% of white Americans. But among all respondents, more than 80% were concerned about what social-media sites and advertisers might know. Despite these concerns, more than 80% of Americans feel they have no control over how their information is collected.

Facebook, Google Donate Heavily To Privacy Advocacy Groups

Few companies have more riding on proposed privacy legislation than Alphabet’s Google and Facebook. To try to steer the bill their way, the giant advertising technology companies spend millions of dollars each year on lobbying, a fact confirmed by government filings. Not so well-documented is their spending to support highly influential think tanks and public interest groups that are helping shape the privacy debate, ostensibly as independent observers. Bloomberg Law examined seven prominent nonprofit think tanks working on privacy issues that received a total of $1.5 million over an 18-month period ending Dec. 31, 2018. The groups included such organizations as the Center for Democracy and Technology, the Future of Privacy Forum and the Brookings Institution. The actual total is undoubtedly much higher — exact totals for contributions were difficult to pin down. The tech giants have “funded scores of nonprofits, including consumer and privacy groups, and academics,” said Jeffrey Chester, executive director at the Center for Digital Democracy, a public interest group that does not accept donations from Google or Facebook. Further, he says, their influence is strong. The companies have “opposed federal privacy laws and worked to weaken existing safeguards,” Chester said. Accepting donations from these “privacy-killing companies enable them to influence decisions by nonprofits, even subtly,” he said.

College Students Say Ditching Their Smartphones For a Week Changed Their Lives

It was part of a college course intended to break the powerful addiction of smartphones… an Adelphi University course called “Life Unplugged” where students did the unthinkable one week ago — handed over their smartphones. “I’m freaking out, I could probably cry right now,” one student said. It was a bold experiment to examine today’s compulsive relationships with ever-present devices. Seven days later: “Who’s excited they’re getting their phones back today?” Professor Donna Freitas asked.

Gone were the nerves and the shakes. “Everything is perfect right now. I’m having a lot better relationships… it’s a stress free environment no pressure about social media,” Jacob Dannenberg said.

“I think it’s really refreshing and relaxing… I was able to fall asleep a lot easier,” said student Adrianna Cigliano.

They managed to find their way, even without GPS for a week. “I just had to take the same route everywhere,” one student joked. They were also more productive. “Doing homework was 100 percent easier. I got it done faster, I was in the zone,” Cigliano said.

Prof. Freitas says it’s important for everyone to assess their addiction. “Are the conveniences worth it? Because the drawbacks are pretty significant,” Freitas said. “The fact that no one can focus, that my students can’t sleep… They feel bad about themselves because of social media, the list goes on and on.”

Nearly Half of Parents Worry Their Child Is Addicted To Mobile Devices, Study Finds

According to a new survey from Common Sense Media and SurveyMonkey, 47% of parents worry their child is addicted to their mobile device. By comparison, only 32% of parents say they’re addicted themselves. USA Today reports: Half of parents also say they are at least somewhat concerned about how mobile devices will affect their kids’ mental health. Nearly one in five say they’re “extremely” or “very” concerned. According to the survey, 89% of parents believe it’s up to them to curb their children’s smartphone usage. The survey conducted between Jan. 25 and Jan. 29 included a sample of 4,201 adults, including 1,024 parents with children under age 18. Data was weighted to reflect the demographic composition of the U.S. for adults over 18, based on Census data. Many devices and services feature parental controls, but some parents may not be aware they exist. The Common Sense-SurveyMonkey survey found 22% of parents did not know YouTube — which has faced scrutiny over how easy it is for kids to find inappropriate videos — offered parental controls. Also, 37% have not used the controls before. Among parents surveyed who say their kids watch YouTube videos, 62% said their kids have seen inappropriate videos on the site. Most, or 81%, said it’s the parents’ job to prevent kids from seeing these videos.

NYPD Kept an Illegal Database of Juvenile Fingerprints For Years

For years, the New York Police Department illegally maintained a database containing the fingerprints of thousands of children charged as juvenile delinquents, in direct violation of state law mandating that police destroy these records after turning them over to the state’s Division of Criminal Justice Services. When lawyers representing some of those youths discovered the violation, the police department dragged its feet, at first denying but eventually admitting that it was retaining prints it was supposed to have destroyed. Since 2015, attorneys with the Legal Aid Society, which represents the majority of youths charged in New York City family courts, had been locked in a battle with the police department over retention of the fingerprint records of children under the age of 16. The NYPD did not answer questions from The Intercept about its handling of the records, but according to Legal Aid, the police department confirmed to the organization last week that the database had been destroyed. To date, the department has made no public admission of wrongdoing, nor has it notified the thousands of people it impacted, although it has changed its fingerprint retention practices following Legal Aid’s probing. “The NYPD can confirm that the department destroys juvenile delinquent fingerprints after the prints have been transmitted to DCJS,” a police spokesperson wrote in a statement to The Intercept.

Still, the way the department handled the process, resisting transparency and stalling even after being threatened with legal action, raises concerns about how police handle a growing number of databases of personal information, including DNA and data obtained through facial recognition technology. As The Intercept has reported extensively, the NYPD also maintains a secretive and controversial “gang database,” which labels thousands of unsuspecting New Yorkers, almost all black or Latino youth, as “gang members” based on a set of broad and arbitrary criteria. The fact that police were able to violate the law around juvenile fingerprints for years without consequence underscores the need for greater transparency and accountability, which critics say can only come from independent oversight of the department.

It’s unclear how long the NYPD was illegally retaining these fingerprints, but the report says the state has been using the Automated Fingerprint Identification System since 1989, “and laws protecting juvenile delinquent records have been in place since at least 1977.” Legal Aid lawyers estimate that tens of thousands of juveniles could have had their fingerprints illegally retained by police.

Facebook Says Government Demands For User Data Are at a Record High

Government demands for user data increased by 16% to 128,617 during the first half of this year, compared with the second half of last year. That’s the highest number of government demands Facebook has received in any reporting period since it published its first transparency report in 2013. The U.S. government led the way with the most requests: 50,741 demands for user data, with some account or user data given to authorities in 88% of cases. Facebook said two-thirds of all of the U.S. government’s requests came with a gag order, preventing the company from telling the user about the request for their data. But Facebook said it was able to release details of 11 so-called national security letters (NSLs) for the first time after their gag provisions were lifted during the period. National security letters can compel companies to turn over non-content data at the request of the FBI. These letters are not approved by a judge and often come with a gag order preventing their disclosure. But since the Freedom Act passed in 2015, companies have been allowed to request the lifting of those gag orders.
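As a quick sanity check on the figures above, the 16% growth rate implies the size of the previous reporting period, and the 88% compliance rate gives the approximate number of U.S. requests that yielded data (back-of-the-envelope arithmetic only; the report itself does not state these derived numbers):

```python
# Back-of-the-envelope arithmetic on Facebook's transparency figures.
current_period = 128_617      # demands in the first half of this year
growth = 0.16                 # 16% increase over the prior period

# Implied number of demands in the second half of last year.
prior_period = current_period / (1 + growth)
print(round(prior_period))    # ~110,877

# U.S. requests that resulted in some data being handed over.
us_requests = 50_741
print(round(us_requests * 0.88))  # ~44,652
```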

Next in Google’s Quest for Consumer Dominance: Banking

The project, code-named Cache, is expected to launch next year with accounts run by Citigroup and a credit union at Stanford University, a tiny lender in Google’s backyard. Big tech companies see financial services as a way to get closer to users and glean valuable data. Apple introduced a credit card this summer. Amazon.com has talked to banks about offering checking accounts. Facebook is working on a digital currency it hopes will upend global payments. Their ambitions could challenge incumbent financial-services firms, which fear losing their primacy and customers. They are also likely to stoke a reaction in Washington, where regulators are already investigating whether large technology companies have too much clout.

The tie-ups between banking and technology have sometimes been fraught. Apple irked its credit-card partner, Goldman Sachs Group, by running ads that said the card was “designed by Apple, not a bank.” Major financial companies dropped out of Facebook’s crypto project after a regulatory backlash. Google’s approach seems designed to make allies, rather than enemies, in both camps. The financial institutions’ brands, not Google’s, will be front-and-center on the accounts, an executive told The Wall Street Journal. And Google will leave the financial plumbing and compliance to the banks — activities it couldn’t do without a license anyway.

Facebook Bug Has Camera Activated While People Are Using the App

When you’re scrolling through Facebook’s app, the social network may be watching you back in more ways than just your data, concerned users have found. Multiple people have reported that their iPhone cameras were turned on in the background while they were looking at their feed. The issue came to light with several posts on Twitter showing cameras activated behind Facebook’s app as users watched videos or looked at photos on the social network. After a video was expanded to full screen and returned to normal, a bug shifted Facebook’s mobile layout slightly to the right; in the open space on the left, the phone’s camera could be seen active in the background. This was documented in multiple cases, with the earliest incident on November 2.

Google’s Secret ‘Project Nightingale’ Gathers Personal Health Data on Millions of Americans

Google is teaming with one of the country’s largest health-care systems on a secret project to collect and crunch the detailed personal health information of millions of Americans across 21 states, WSJ reported Monday, citing people familiar with the matter and internal documents.

The initiative, code-named “Project Nightingale,” appears to be the largest in a series of efforts by Silicon Valley giants to gain access to personal health data and establish a toehold in the massive health-care industry. Amazon.com, Apple and Microsoft are also aggressively pushing into health care, though they haven’t yet struck deals of this scope. Google launched the effort last year with St. Louis-based Ascension, the country’s second-largest health system. The data involved in Project Nightingale includes lab results, doctor diagnoses and hospitalization records, among other categories, amounting to a complete health history, with patient names and dates of birth.

Neither patients nor doctors have been notified. At least 150 Google employees already have access to much of the data on tens of millions of patients, according to a person familiar with the matter and the documents.

Google in this case is using the data in part to design new software, underpinned by advanced artificial intelligence and machine learning.

Google appears to be sharing information within Project Nightingale more broadly than in its other forays into health-care data. In September, Google announced a 10-year deal with the Mayo Clinic to store the hospital system’s genetic, medical and financial records.

Google co-founder Larry Page, in a 2014 interview, suggested that patients worried about the privacy of their medical records were too cautious. Mr. Page said: “We’re not really thinking about the tremendous good that can come from people sharing information with the right people in the right ways.”

Study of Over 11,000 Online Stores Finds ‘Dark Patterns’ on 1,254 sites

A large-scale academic study that analyzed more than 53,000 product pages on more than 11,000 online stores found widespread use of user interface “dark patterns” — practices meant to mislead customers into making purchases based on false or misleading information.

The study — presented last week at the ACM CSCW 2019 conference — found 1,818 instances of dark patterns present on 1,254 of the ~11K shopping websites (~11.1%) researchers scanned. “Shopping websites that were more popular, according to Alexa rankings, were more likely to feature dark patterns,” researchers said. But while the vast majority of UI dark patterns were meant to trick users into subscribing to newsletters or allowing broad data collection, some dark patterns were downright foul, trying to mislead users into making additional purchases, either by sneaking products into shopping carts or tricking users into believing products were about to sell out. Of these, the research team found 234 instances, deployed across 183 websites.
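The headline figures above are internally consistent, which a couple of lines of arithmetic can confirm (an illustrative cross-check, not part of the study itself):

```python
# Cross-check the study's headline numbers.
instances, affected_sites = 1_818, 1_254

# Average number of dark patterns per affected site.
print(round(instances / affected_sites, 2))   # ~1.45

# The reported ~11.1% rate implies roughly how many sites were scanned.
print(round(affected_sites / 0.111))          # ~11,297, consistent with "~11K"
```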

Digital Authoritarianism Is On the Rise Around the World, Report Warns

Internet freedom declined for a ninth consecutive year as governments around the world used social media to monitor citizens and manipulate elections, according to a new study that warned of creeping “digital authoritarianism.” Thirty-three of the 65 countries surveyed were found to have experienced worsening internet freedom since June 2018, compared with 16 that were found to have improving conditions. The study, conducted by Freedom House, a nonprofit human rights advocacy group, said domestic disinformation had grown as a threat to democracy, with populist leaders and their online supporters using the internet to distort political discussions. The organization found domestic interference in 26 of the 30 countries that held elections over the past year.

The report said internet freedom in the U.S. had declined, in large part because law enforcement and immigration agencies used social media to monitor people, though the country was still deemed “free.” China was dubbed the “worst abuser of internet freedom” for a fourth consecutive year as the government tightened information controls because of the 30th anniversary of the Tiananmen Square massacre and protests in Hong Kong. Noting that the biggest platforms were American, Freedom House called on the U.S. to lead in the effort to fix social media transparency and accountability. “This is the only way to stop the internet from becoming a Trojan horse for tyranny and oppression,” wrote Adrian Shahbaz, one of the authors of the report.

Leaked Documents Show Facebook Leveraged User Data To Help Friends and Fight Rivals

A cache of leaked Facebook documents shows how the company’s CEO, Mark Zuckerberg, oversaw plans to consolidate the social network’s power and control competitors by treating its users’ data as a bargaining chip. This trove comprises approximately 7,000 pages in total, of which about 4,000 are internal Facebook communications such as emails, web chats, notes, presentations and spreadsheets, primarily from 2011 to 2015. About 1,200 pages are marked as “highly confidential.” Taken together, they show how Zuckerberg, along with his board and management team, found ways to tap Facebook users’ data — including information about friends, relationships and photos — as leverage over the companies it partnered with. In some cases, Facebook would reward partners by giving them preferential access to certain types of user data while denying the same access to rival companies.

For example, Facebook gave Amazon special access to user data because it was spending money on Facebook advertising. In another case the messaging app MessageMe was cut off from access to data because it had grown too popular and could compete with Facebook. All the while, Facebook planned to publicly frame these moves as a way to protect user privacy, the documents show. State and federal authorities are now closely scrutinizing Facebook’s business practices. In October, New York Attorney General Letitia James announced that 47 attorneys general from states and U.S. territories plan to take part in a New York-led antitrust probe into Facebook. Over the summer, the House Judiciary Committee held hearings over antitrust concerns in Silicon Valley while the Federal Trade Commission also continues to examine the firm’s practices.

United States Department of Homeland Security Will Soon Have Biometric Data On Nearly 260 Million People

The U.S. Department of Homeland Security (DHS) expects to have face, fingerprint, and iris scans of at least 259 million people in its biometrics database by 2022, according to a recent presentation from the agency’s Office of Procurement Operations reviewed by Quartz. That’s about 40 million more than the agency’s 2017 projections, which estimated 220 million unique identities by 2022, according to previous figures cited by the Electronic Frontier Foundation (EFF), a San Francisco-based privacy rights nonprofit.
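The gap between the two projections is simple to verify (illustrative arithmetic only):

```python
# Difference between DHS's latest and 2017 projections for 2022.
latest_projection = 259_000_000   # current estimate of identities by 2022
projection_2017 = 220_000_000     # the agency's 2017 estimate
print(latest_projection - projection_2017)  # 39,000,000, i.e. "about 40 million more"
```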

A slide deck, shared with attendees at an Oct. 30 DHS industry day, includes a breakdown of what its systems currently contain, as well as an estimate of what the next few years will bring. The agency is transitioning from a legacy system called IDENT to a cloud-based system (hosted by Amazon Web Services) known as Homeland Advanced Recognition Technology, or HART. The biometrics collection maintained by DHS is the world’s second-largest, behind only India’s countrywide biometric ID network in size. The traveler data kept by DHS is shared with other U.S. agencies, state and local law enforcement, as well as foreign governments.

Mozilla Hits Google, Facebook For ‘Microtargeting’ Political Ads

Microtargeting, a method that uses consumer data and demographics to narrowly segment audiences, is used by political campaigns to tailor ads to different voting groups. The practice’s critics include Federal Election Commission Chairwoman Ellen Weintraub, who wrote in a Washington Post op-ed that microtargeting makes it “easy to single out susceptible groups and direct political misinformation to them with little accountability, because the public at large never sees the ad.” Mozilla’s call follows reports that Facebook has considered restricting politicians’ access to microtargeting.

I worked on political ads at Facebook. They profit by manipulating us.

Yael Eisenstat, a visiting fellow at Cornell Tech in the Digital Life Initiative and a former elections integrity head at Facebook, CIA officer, and White House adviser, writes for the Washington Post:

As the company continues to struggle with how to handle political content and as another presidential election approaches, it’s clear that tinkering around the margins of advertising policies won’t fix the most serious issues. The real problem is that Facebook profits partly by amplifying lies and selling dangerous targeting tools that allow political operatives to engage in a new level of information warfare. Its business model exploits our data to let advertisers custom-target people, show us each a different version of the truth and manipulate us with hyper-customized ads — ads that, as of two weeks ago, can contain blatantly false and debunked information if they’re run by a political campaign. As long as Facebook prioritizes profit over healthy discourse, they can’t avoid damaging democracies.

Early in my time there, I dug into the question of misinformation in political advertising. Posting in a “tribe” (Facebook’s internal collaboration platform), I asked our teams working on political advertising whether we should incorporate the same tools for political ads that other integrity teams at Facebook were developing to address misinformation in pages and organic posts. It was unclear to me why the company was applying different, siloed policies and tools across the platform. Most users do not differentiate organic content from ads — as I clearly saw on a trip to India, where we were testing our ads-integrity products — so why were we expecting users to understand that we applied different standards to different forms of content that all just appear in their news feeds?

The fact that we were taking money for political ads and allowing campaigns and other political organizations to target users based on the vast amounts of data we had gathered meant political ads should have an even higher bar for integrity than what people were posting in organic content. We verified advertisers to run political ads, giving them a check mark and a “paid for by” label, and I questioned if that gave the false impression that we were vouching for the validity of the content, boosting its perceived credibility even though we weren’t checking any facts or ensuring that ads weren’t spreading false information. Most of my colleagues agreed. People wanted to get this right. But above me, there was no appetite for my pushing, and I was accused of “creating confusion.”

With a Laser, Researchers Say They Can Hack Alexa, Google Home or Siri

Researchers in Japan and at the University of Michigan said Monday that they have found a way to take over Google Home, Amazon’s Alexa or Apple’s Siri devices from hundreds of feet away by shining laser pointers, and even flashlights, at the devices’ microphones.

In one case, they said, they opened a garage door by shining a laser beam at a voice assistant that was connected to it. They also climbed 140 feet to the top of a bell tower at the University of Michigan and successfully controlled a Google Home device on the fourth floor of an office building 230 feet away. And by focusing their lasers using a telephoto lens, they said, they were able to hijack a voice assistant more than 350 feet away. Opening the garage door was easy, the researchers said. With the light commands, the researchers could have hijacked any digital smart systems attached to the voice-controlled assistants.

They said they could have easily switched light switches on and off, made online purchases or opened a front door protected by a smart lock. They even could have remotely unlocked or started a car that was connected to the device. The researchers, who studied the light flaw for seven months, said they had discovered that the microphones in the devices would respond to light as if it were sound. Inside each microphone is a small plate called a diaphragm that moves when sound hits it. That movement can be replicated by focusing a laser or a flashlight at the diaphragm, which converts it into electric signals, they said. The rest of the system then responds the way it would to sound.
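The mechanism described above, in which modulated light stands in for sound pressure, can be sketched in a few lines. This is purely an illustrative model of amplitude-modulating a command waveform onto a laser's intensity, not the researchers' actual attack code, and all parameters are made up:

```python
import numpy as np

# Illustrative model of the "light commands" idea: the attacker
# amplitude-modulates a laser's intensity with a voice-command waveform.
# The microphone's diaphragm responds to the varying light as if it were
# sound, so the device "hears" the command. All values are hypothetical.

sample_rate = 16_000                         # 16 kHz, typical for voice capture
t = np.arange(0, 0.01, 1 / sample_rate)      # 10 ms of signal
command = 0.5 * np.sin(2 * np.pi * 440 * t)  # stand-in for a spoken command

# Laser intensity can't go negative, so the waveform rides on a DC bias.
dc_bias = 1.0
laser_intensity = dc_bias + command          # what the diaphragm "sees"

# The microphone's output tracks intensity variations around the bias;
# removing the DC component recovers the injected "audio" exactly.
recovered = laser_intensity - dc_bias
assert np.allclose(recovered, command)
```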

While the researchers said they had notified several companies of the light vulnerability, most microphones would need to be redesigned to remedy the problem. Simply covering the microphone with a piece of tape wouldn’t solve it.


The Drone Wars Are Already Here

The skies of Syria, Yemen, and Libya swarm with armed and dangerous unmanned aerial vehicles. And the technology is spreading farther and farther afield. Three decades ago, drones were available to only the most technologically developed state military organizations. Today they’re everywhere, being used by weaker states and small military forces, as well as many non-state actors, including Islamic State and al-Qaeda. “We’re seeing a cycle of technological innovation regarding the use of drones and associated systems, and that cycle of techno-tactical adaptation and counter-adaptation will only hasten going forward,” says Raphael Marcus, a research fellow in the department of war studies at King’s College London.

The diffusion of such technology is leveling the playing field, says Marcus, author of Israel’s Long War With Hezbollah: Military Innovation and Adaptation Under Fire. He says that because armies no longer have the monopoly on the use of drones, surveillance technology, precision capabilities, and long-range missiles, other actors in the region are able to impose their will on the international stage. “The parameters have changed,” he says. That’s already leading to greater instability. For example, Hezbollah’s thwarted drone strike in August and increasingly sophisticated and more frequent drone attacks by Hamas raise the risk of another war with Israel; meanwhile, Yemen’s Houthi rebels made an impact on the global price of oil with a strike on Saudi Arabia, using 25 drones and missiles.

The Institute for Strategic Research in Paris has already recommended that NATO and European militaries ready themselves for drone threats in future conflicts. The institute also urged countries to cooperate on a joint research and development strategy to defend against the threat in a report issued in September.

Does that mean that future wars will be automated? “Drones will definitely be taking more important roles in the next few years, but they aren’t about to replace soldiers,” says Ben Nassi, a researcher at Ben-Gurion University in Israel. For that to happen, he says, drones will need longer battery life and the development of a centralized computer command-and-control server that will allow a single person to control a swarm of drones, similar to how individual players manage their militaries in a computer game.

Spain and GitHub Are Blocking an App That Helped Protesters Organize

For the last month, hundreds of thousands of people have joined demonstrations in Spain to voice their objection to the jailing of Catalan separatist leaders and to support Catalonian independence. As with almost all modern activist and public protest movements, activists are using social media and apps to communicate with each other and organize public actions. But this week, in a move that puts the Spanish government on par with censorship-heavy places like China and Russia, the country requested that GitHub block access to one of those apps by revoking local access to its GitHub repository. GitHub, which is owned by Microsoft, complied with the order.

According to Spanish news outlet El Confidencial, last week the government ordered takedowns of websites and an app made by Tsunami Democratic, an activist group organizing protests in the region. To try to keep the app download available, Tsunami Democratic moved the .apk file to GitHub. But the government shut that down, too, blocking the site in Spain. Motherboard tested the download using a VPN, and the GitHub repo was indeed blocked from Madrid.