Resources

Proposed Toronto development from Google’s Sidewalk Labs sparks concerns over data

Heated streets will melt ice and snow on contact. Sensors will monitor traffic and protect pedestrians. Driverless shuttles will carry people to their doors.

A unit of Google’s parent company Alphabet is proposing to turn a rundown part of Toronto’s waterfront into what may be the most wired community in history — to “fundamentally redefine what urban life can be.”

Dan Doctoroff, the CEO of Sidewalk Labs, envisions features like pavement that lights up to warn pedestrians of approaching streetcars. Flexible heated enclosures — described as “raincoats” for buildings — would be deployed based on weather data during Toronto’s bitter winters. Robotic waste-sorting systems would detect when a garbage bin is full and remove it before raccoons descend.

“Those are great uses of data that can improve the quality of life of people,” he said. “That’s what we want to do.”

But some Canadians are rethinking the privacy implications of giving one of the most data-hungry companies on the planet the means to wire up everything from street lights to pavement.

The concerns have intensified following a series of privacy scandals at Facebook and Google. A recent Associated Press investigation found that many Google services on iPhones and Android devices store location-tracking data even if you use privacy settings that are supposed to turn them off.

Adam Vaughan, the federal lawmaker whose district includes the development, said debate about big data and urban infrastructure is coming to cities across the world and he would rather have Toronto at the forefront of discussion.

“Google is ahead of governments globally and locally. That’s a cause for concern but it’s also an opportunity,” Vaughan said.


The Internet is not ethereal: it uses a lot of energy, resources and materials

Every website and product connected to the internet depends on a vast network of wireless routers, fiber-optic cables running underground and underwater, and data centers housing the servers that bring the internet to life. Data centers in the U.S. alone consume 70 billion kilowatt-hours of electricity per year, according to a 2016 estimate from the Department of Energy — about 1.8 percent of the country’s total electricity consumption.
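
As a sanity check on those figures, here is a quick back-of-envelope calculation. The 70 billion kWh figure is from the article; the total for U.S. electricity consumption (roughly 3.9 trillion kWh in 2016) is our assumption, not the article’s:

```ts
// Back-of-envelope check of the DOE estimate. The U.S. total below
// (~3.9 trillion kWh in 2016) is an assumed figure, not from the article.
const dataCenterKwhPerYear = 70e9;  // 70 billion kWh (DOE, 2016)
const usTotalKwhPerYear = 3.9e12;   // assumed total U.S. electricity use

const share = dataCenterKwhPerYear / usTotalKwhPerYear;
console.log(`Data centers: ~${(share * 100).toFixed(1)}% of U.S. electricity`);
// -> Data centers: ~1.8% of U.S. electricity
```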

The internet is not ethereal, and a new project from the blog Low-Tech Magazine aims to make that fact more tangible. Low-Tech Magazine — a blog operated by Kris De Decker that has run on WordPress since 2007 — launched a low-tech, solar-powered version of the site designed from the ground up to use as little energy as possible. In a Skype call with Motherboard, De Decker said he doesn’t think people are indifferent to how much energy their use of the internet consumes; they just don’t understand the extent of the problem. “There’s this idea that the internet is immaterial, it’s somewhere floating in clouds,” he said. “Of course, it’s a very material thing that uses resources, materials, energy — and quite a lot actually.”


Mobile Websites Can Tap Into Your Phone’s Sensors Without Asking

When apps want to access data from your smartphone’s motion or light sensors, they often make that capability clear. That keeps a fitness app, say, from counting your steps without your knowledge. But a team of researchers has discovered that the rules don’t apply to websites loaded in mobile browsers, which can often access an array of device sensors without any notifications or permissions whatsoever.

That mobile browsers offer developers access to sensors isn’t necessarily problematic on its own. It’s what helps those services automatically adjust their layout, for example, when you switch your phone’s orientation. And the World Wide Web Consortium standards body has codified how web applications can access sensor data. But the researchers—Anupam Das of North Carolina State University, Gunes Acar of Princeton University, Nikita Borisov of the University of Illinois at Urbana-Champaign, and Amogh Pradeep of Northeastern University—found that the standards allow for unfettered access to certain sensors. And sites are using it.

The researchers found that of the top 100,000 sites—as ranked by Amazon-owned analytics company Alexa—3,695 incorporate scripts that tap into one or more of these accessible mobile sensors. That includes plenty of big names: Wayfair, Priceline.com, and Kayak.

“If you use Google Maps in a mobile browser you’ll get a little popup that says, ‘This website wants to see your location,’ and you can authorize that,” says Borisov. “But with motion, lighting, and proximity sensors there isn’t any mechanism to notify the user and ask for permission, so they’re being accessed and that is invisible to the user. For this collection of sensors there isn’t a permissions infrastructure.”
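
To make the gap concrete, here is a minimal sketch of what a page script could do in the mobile browsers of the time. The motion and orientation listeners fire without any prompt, while the geolocation call shown for contrast does trigger one (a sketch of standard web APIs, not the researchers’ measurement code):

```ts
// Motion and orientation events: delivered to any page script,
// with no permission prompt in the browsers studied.
window.addEventListener("devicemotion", (e: DeviceMotionEvent) => {
  const a = e.accelerationIncludingGravity;
  if (a) console.log(`accel x=${a.x} y=${a.y} z=${a.z}`);
});

window.addEventListener("deviceorientation", (e: DeviceOrientationEvent) => {
  // alpha/beta/gamma describe how the phone is rotated and tilted.
  console.log(`orientation a=${e.alpha} b=${e.beta} g=${e.gamma}`);
});

// Geolocation, by contrast, surfaces a user-visible permission request.
navigator.geolocation.getCurrentPosition((pos) => {
  console.log(pos.coords.latitude, pos.coords.longitude);
});
```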

That unapproved access to motion, orientation, proximity, or light sensor data alone probably wouldn’t compromise a user’s identity or device. And a web page can only access sensors as long as a user is actively browsing the page, not in the background. But the researchers note that on a malicious website, the information could fuel various types of attacks, like using ambient light data to make inferences about a user’s browsing, or using motion sensor data as a sort of keylogger to deduce things like PINs.

In past work, researchers have also shown that they can use the unique calibration features of motion sensors on individual devices to identify and track them across websites. And while the World Wide Web Consortium standards classify data from these sensors as “not sensitive enough to warrant specific sensor permission grants,” the group does acknowledge that there are some potential privacy concerns. “Implementations may consider permissions or visual indicators to signify the use of sensors by the page,” the standard suggests.

The prevalence of ad networks also makes it difficult to get a handle on the issue. The researchers even found three scripts attempting to access user sensors in ad modules on WIRED.com, though at least one had been removed when the researchers rechecked the site for this story. Other media sites, including CNN, the Los Angeles Times, and CNET have ad networks using similar scripts as well.


Using Wi-Fi To Count People Through Walls

Whether you’re trying to figure out how many students are attending your lectures or how many evil aliens have taken your Space Force brethren hostage, Wi-Fi can now be used to count them all. The system, created by researchers at UC Santa Barbara, uses a single Wi-Fi router outside of the room to measure attenuation and signal drops. From the release: “The transmitter sends a wireless signal whose received signal strength (RSSI) is measured by the receiver. Using only such received signal power measurements, the receiver estimates how many people are inside the room — an estimate that closely matches the actual number. It is noteworthy that the researchers do not do any prior measurements or calibration in the area of interest; their approach has only a very short calibration phase that need not be done in the same area.” This means that you could simply walk up to a wall and press a button to count, with a high degree of accuracy, how many people are walking around. The system can measure up to 20 people in its current form.
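
The release gives only the outline of the method, so the following sketch is a toy illustration of the underlying idea, not the UCSB estimator: people crossing the transmitter-receiver link attenuate the signal, so deeper and more frequent RSSI dips suggest more people. Every constant here is invented:

```ts
// Toy occupancy estimate from RSSI samples (dBm). Not the researchers'
// algorithm; all thresholds and calibration constants are invented.
function estimateOccupancy(rssiDbm: number[], emptyRoomDbm: number): number {
  const dipThresholdDb = 3;         // assumed: how far below baseline counts as a dip
  const peoplePerUnitActivity = 25; // assumed slope from a short calibration

  const dips = rssiDbm.filter((r) => emptyRoomDbm - r > dipThresholdDb).length;
  const dipFraction = dips / rssiDbm.length;

  // The release reports counts of up to 20 people, so cap the estimate there.
  return Math.min(20, Math.round(dipFraction * peoplePerUnitActivity));
}

// Example: a mostly steady signal with a few deep dips.
const samples = [-40, -41, -40, -47, -40, -46, -40, -40, -45, -40];
console.log(estimateOccupancy(samples, -40)); // -> 8 (rough)
```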


Facebook Is Giving Advertisers Access To Your Shadow Contact Information

Kashmir Hill, reporting for Gizmodo:

Last week, I ran an ad on Facebook targeted at a computer science professor named Alan Mislove. Mislove studies how privacy works on social networks and had a theory that Facebook is letting advertisers reach users with contact information collected in surprising ways. I was helping him test the theory by targeting him in a way Facebook had previously told me wouldn’t work. I directed the ad to display to a Facebook account connected to the landline number for Alan Mislove’s office, a number Mislove has never provided to Facebook. He saw the ad within hours.

One of the many ways that ads get in front of your eyeballs on Facebook and Instagram is that the social networking giant lets an advertiser upload a list of phone numbers or email addresses it has on file; it will then put an ad in front of accounts associated with that contact information. A clothing retailer can put an ad for a dress in the Instagram feeds of women who have purchased from them before, a politician can place Facebook ads in front of anyone on his mailing list, or a casino can offer deals to the email addresses of people suspected of having a gambling addiction. Facebook calls this a “custom audience.” You might assume that you could go to your Facebook profile and look at your “contact and basic info” page to see what email addresses and phone numbers are associated with your account, and thus what advertisers can use to target you. But as is so often the case with this highly efficient data-miner posing as a way to keep in contact with your friends, it’s going about it in a less transparent and more invasive way.

… Giridhari Venkatadri, Piotr Sapiezynski, and Alan Mislove of Northeastern University, along with Elena Lucherini of Princeton University, did a series of tests that involved handing contact information over to Facebook for a group of test accounts in different ways and then seeing whether that information could be used by an advertiser. They came up with a novel way to detect whether that information became available to advertisers by looking at the stats provided by Facebook about the size of an audience after contact information is uploaded. They go into this at greater length and technical detail in their paper [PDF]. They found that when a user gives Facebook a phone number for two-factor authentication or in order to receive alerts about new log-ins to a user’s account, that phone number became targetable by an advertiser within a couple of weeks.
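
The core of that detection trick can be sketched in a few lines. The `getAudienceSizeEstimate` function below is a placeholder standing in for the audience-size statistic Facebook’s ad tools report after a contact list is uploaded; it is not a real API, and the sketch only captures the logic the researchers describe:

```ts
// Sketch of the side-channel logic: if adding one contact record grows the
// reported audience size, Facebook has matched that record to an account.
// `getAudienceSizeEstimate` is a hypothetical placeholder, not a real API.
declare function getAudienceSizeEstimate(contacts: string[]): Promise<number>;

async function isTargetable(
  paddingContacts: string[], // known-matchable records, to clear size minimums
  candidate: string          // e.g. a phone number given to Facebook only for 2FA
): Promise<boolean> {
  const without = await getAudienceSizeEstimate(paddingContacts);
  const withCandidate = await getAudienceSizeEstimate([
    ...paddingContacts,
    candidate,
  ]);
  return withCandidate > without;
}
```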

Officially, Facebook denies the existence of shadow profiles. In a hearing with the House Energy & Commerce Committee earlier this year, when New Mexico Representative Ben Lujan asked Facebook CEO Mark Zuckerberg if he was aware of the so-called practice of building “shadow profiles”, Zuckerberg denied knowledge of it.


We hold people with power to account. Why not algorithms?

Robert Jones was driving home through the pretty town of Todmorden, in West Yorkshire, when he noticed the fuel light flashing on the dashboard of his car. He had just a few miles to find a petrol station, which was cutting things rather fine, but thankfully his GPS seemed to have found a short cut – sending him on a narrow winding path up the side of the valley.

Robert followed the machine’s instructions, but as he drove, the road got steeper and narrower. After a couple of miles, it turned into a dirt track, but Robert wasn’t fazed. After all, he thought, he had “no reason not to trust the satnav”.

Just a short while later, anyone who happened to be looking up from the valley below would have seen the nose of Robert’s BMW appearing over the brink of the cliff above, saved from the 100ft drop only by the flimsy wooden fence at the edge he had just crashed into. “It kept insisting the path was a road,” he told the Halifax Courier after the incident. “So I just trusted it. You don’t expect to be taken nearly over a cliff.”

I can imagine Robert was left red-faced by his blunder, but in a way, I think he’s in good company. When it comes to placing blind faith in a piece of software, his mistake was one we’re almost all capable of making. In our urge to automate, in our eagerness to adopt the latest innovations, we appear to have developed a habit of unthinkingly handing over power to machines.

All around us, algorithms provide a kind of convenient source of authority: an easy way to delegate responsibility, a short cut we take without thinking. Who is really going to click through to the second page of Google results every time and think critically about the information that has been served up? Or go to every airline to check if a comparison site is listing the cheapest deals? Or get out a ruler and a road map to confirm that their GPS is offering the shortest route?

But already in our hospitals, our schools, our shops, our courtrooms and our police stations, artificial intelligence is silently working behind the scenes, feeding on our data and making decisions on our behalf. Sure, this technology has the capacity for enormous social good – it can help us diagnose breast cancer, catch serial killers, avoid plane crashes and, as the health secretary, Matt Hancock, has proposed, potentially save lives using NHS data and genomics. Unless we know when to trust our own instincts over the output of a piece of software, however, it also brings the potential for disruption, injustice and unfairness.

If we permit flawed machines to make life-changing decisions on our behalf – by allowing them to pinpoint a murder suspect, to diagnose a condition or take over the wheel of a car – we have to think carefully about what happens when things go wrong.

Back in 2012, a group of 16 Idaho residents with disabilities received some unexpected bad news. The Department of Health and Welfare had just invested in a “budget tool” – a swish piece of software, built by a private company, that automatically calculated their entitlement to state support. It had declared that their care budgets should be slashed by several thousand dollars each, a decision that would put them at serious risk of being institutionalised.

The problem was that the budget tool’s logic didn’t seem to make much sense. While this particular group of people had deep cuts to their allowance, others in a similar position actually had their benefits increased by the machine. As far as anyone could tell from the outside, the computer was essentially plucking numbers out of thin air.

From the inside, this wasn’t far from the truth. It would take four years and a class-action lawsuit to get to the bottom of what was going on, but when the budget tool was eventually handed over for scrutiny, a hint of what we all have to fear from the unrestricted power of machine decision-making was revealed.

The algorithm was junk. The data was riddled with errors. The calculations were so bad that the court would eventually rule its determinations unconstitutional. It had, effectively, been awarding benefits at random. And yet, when dressed up as a slick and glossy new computer program, the algorithm brought with it an air of authority that was difficult to argue against.

In the days before proper regulation of medicines, you used to be able to bottle up any coloured liquid and make a small fortune selling it as a cure-all. Today, in the still largely unfettered world of AI and algorithms, we’re seeing people make bold, unsubstantiated and sometimes irresponsible claims about their inventions.

There’s only one way to prevent this. I think it’s time we started treating machines as we would any other source of power. I would like to propose a system of regulation for algorithms, and perhaps a good place to start would be with Tony Benn’s five simple questions, designed for powerful people, but equally applicable to modern AI:

“What power have you got?
“Where did you get it from?
“In whose interests do you use it?
“To whom are you accountable?
“How do we get rid of you?”

Because, ultimately, we can’t just think of algorithms in isolation. We have to think of the failings of the people who design them – and the danger to those they are supposedly designed to serve.


A Look at the Dark Side of the Lives of Some Prominent YouTubers, Who Are Increasingly Saying They’re Stressed, Depressed, Lonely, and Exhausted

Many YouTubers are finding themselves stressed, lonely and exhausted. For years, YouTubers have believed that they are loved most by their audience when they project a chirpy, grateful image. But what happens when the mask slips? This year there has been a wave of videos by prominent YouTubers talking about their burnout, chronic fatigue and depression. “This is all I ever wanted,” said Elle Mills, a 20-year-old Filipino-Canadian YouTuber in a (monetised) video entitled Burnt Out At 19, posted in May. “And why the fuck am I so un-fucking-happy? It doesn’t make any sense. You know what I mean? Because, like, this is literally my fucking dream. And I’m fucking so un-fucking-happy.”

… The anxieties are tied up with the relentless nature of their work. Tyler Blevins, AKA Ninja, makes an estimated $500,000 every month via live broadcasts of him playing the video game Fortnite on Twitch, a service for livestreaming video games that is owned by Amazon. Most of Blevins’ revenue comes from Twitch subscribers or viewers who provide one-off donations (often in the hope that he will thank them by name “on air”). Blevins recently took to Twitter to complain that he didn’t feel he could stop streaming. “Wanna know the struggles of streaming over other jobs?” he wrote, perhaps ill-advisedly for someone with such a stratospheric income. “I left for less than 48 hours and lost 40,000 subscribers on Twitch. I’ll be back today… grinding again.” There was little sympathy on Twitter for the millionaire. But the pressure he described is felt at every level of success, from the titans of the content landscape all the way down to the people with channels with just a few thousand subscribers, all of whom feel they must be constantly creating, always available and responding to their fans.

… At the end of the month he was pale, gaunt and tired in a way that, he recalls, seemed “impervious to rest”. His work, he noticed, had become increasingly rushed and harsh in tone. Yet the angry, provocative quality of his videos seemed only to be making them more popular. “Divisive content is the king of online media today, and YouTube heavily boosts anything that riles people up,” he says. “It’s one of the most toxic things: the point at which you’re breaking down is the point at which the algorithm loves you the most.”

“Constant releases build audience loyalty,” says Austin Hourigan, who runs ShoddyCast, a YouTube channel with 1.2 million subscribers. “The more loyalty you build, the more likely your viewers are to come back, which gives you the closest thing to a financial safety net in what is otherwise a capricious space.” When a YouTuber passes the 1 million subscribers mark, they are presented with a gold plaque to mark the event. Many of these plaques can be seen on shelves and walls in the background of presenters’ rooms. In this way, the size of viewership and quantity of uploads become the main markers of value.


Teens Would Rather Text Their Friends Than Talk To Them In Person, Poll Shows

A new poll of 1,141 teenagers shows that teens would rather text their friends than talk to them in person. The findings come from Common Sense Media’s 2018 Social Media, Social Life survey. Fortune reports:
Only 15% of teens said Facebook was their main social media site, down from 68% in 2012. Snapchat is now the main site for 41% of teenagers, followed by Instagram at 22%. In addition, this year’s survey saw texting (35%) surpass talking in person (32%) as teens’ favorite way to communicate with friends. In 2012, 49% preferred to communicate in person, versus 33% who preferred texting.

[M]ore teens said that social media had a positive effect on their levels of loneliness, depression, and anxiety than those who said it had a negative one, but it seems to have the opposite effect on teens who score low on the authors’ social-emotional well-being scale. Of those, 70% said they sometimes feel left out when using social media, 43% feel bad if no one likes or comments on their posts, and 35% said they had been cyberbullied. They were also more likely to say that social media was “extremely” or “very” important to them, compared to their peers who score high on the scale.


GCHQ mass surveillance violated human rights, court rules

GCHQ’s methods in carrying out bulk interception of online communications violated privacy and failed to provide sufficient surveillance safeguards, the European court of human rights (ECHR) has ruled in a test case judgment.

But the court found that GCHQ’s regime for sharing sensitive digital intelligence with foreign governments was not illegal.

It is the first major challenge to the legality of UK intelligence agencies intercepting private communications in bulk, following Edward Snowden’s whistleblowing revelations. The long-awaited ruling is one of the most comprehensive assessments by the ECHR of the legality of the interception operations run by UK intelligence agencies.

The case was brought by a coalition of 14 human rights groups, privacy organisations and journalists, including Amnesty International, Liberty, Privacy International and Big Brother Watch. In a statement published on Amnesty’s website, Lucy Claridge, Amnesty International’s Strategic Litigation Director, said today’s ruling “represents a significant step forward in the protection of privacy and freedom of expression worldwide. It sends a strong message to the UK Government that its use of extensive surveillance powers is abusive and runs against the very principles that it claims to be defending.” She added: “This is particularly important because of the threat that Government surveillance poses to those who work in human rights and investigative journalism, people who often risk their own lives to speak out. Three years ago, this same case forced the UK Government to admit GCHQ had been spying on Amnesty — a clear sign that our work and the people we work alongside had been put at risk.”

The judges considered three aspects of digital surveillance: bulk interception of communications, intelligence sharing and obtaining of communications data from communications service providers. By a majority of five to two votes, the Strasbourg judges found that GCHQ’s bulk interception regime violated article 8 of the European convention on human rights, which guarantees privacy, because there were said to be insufficient safeguards, and rules governing the selection of “related communications data” were deemed to be inadequate.


Bitcoin Mining Now Accounts For Almost One Percent of the World’s Energy Consumption

It is well established that Bitcoin mining — aka donating one’s computing power to keep a cryptocurrency network up and running in exchange for a chance to win some free crypto — uses a lot of electricity. Companies involved in large-scale mining operations know that this is a problem, and they’ve tried to employ various solutions for making the process more energy efficient.

But, according to testimony provided by Princeton computer scientist Arvind Narayanan to the Senate Committee on Energy and Natural Resources, no matter what you do to make cryptocurrency mining hardware greener, it’s a drop in the bucket compared to the overall network’s flabbergasting energy consumption. Instead, Narayanan told the committee, the only thing that really determines how much energy Bitcoin uses is its price. “If the price of a cryptocurrency goes up, more energy will be used in mining it; if it goes down, less energy will be used,” he told the committee. “Little else matters. In particular, the increasing energy efficiency of mining hardware has essentially no impact on energy consumption.”

In his testimony, Narayanan estimates that Bitcoin mining now draws about five gigawatts of power, or roughly 120 gigawatt-hours per day (in May, estimates of Bitcoin’s power consumption were about half of that). He adds that a computer racing with all its might to earn a free Bitcoin runs hot as hell, which means you’re probably using even more electricity to keep it cool so it doesn’t die and/or burn down your entire mining center. That pushes the overall cost of mining higher still.
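
Narayanan’s price argument follows from simple miner economics: rational miners keep adding capacity until their electricity spend approaches their block-reward revenue, so the price of Bitcoin sets an energy budget that efficiency gains cannot shrink. A rough sketch, with the price and electricity rate as assumed mid-2018 ballpark figures (his five-gigawatt estimate sits below this bound because hardware and other costs also eat into revenue):

```ts
// Upper bound on mining power draw from miner economics. The block subsidy
// and block rate are protocol facts; the price and electricity rate are
// assumptions for illustration.
const btcPerBlock = 12.5;   // block subsidy in 2018
const blocksPerDay = 144;   // one block roughly every 10 minutes
const usdPerBtc = 6500;     // assumed price
const usdPerKwh = 0.05;     // assumed industrial electricity rate

const revenueUsdPerDay = btcPerBlock * blocksPerDay * usdPerBtc;
const breakEvenKwhPerDay = revenueUsdPerDay / usdPerKwh;
const averageGw = breakEvenKwhPerDay / 24 / 1e6; // kWh/day -> GW

console.log(`Break-even draw: ~${averageGw.toFixed(1)} GW`); // -> ~9.8 GW
// Doubling hardware efficiency doubles hashes per joule, but the total
// energy budget above depends only on price, reward, and electricity cost.
```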


India’s Biometric Database Is Creating A Perfect Surveillance State — And U.S. Tech Companies Are On Board

Big U.S. technology companies are involved in the construction of one of the most intrusive citizen surveillance programs in history. For the past nine years, India has been building the world’s biggest biometric database by collecting the fingerprints, iris scans and photos of nearly 1.3 billion people. For U.S. tech companies like Microsoft, Amazon and Facebook, the project, called Aadhaar (which means “proof” or “basis” in Hindi), could be a gold mine. The CEO of Microsoft has repeatedly praised the project, and local media have carried frequent reports on consultations between the Indian government and senior executives from companies like Apple and Google (in addition to South Korea-based Samsung) on how to make tech products Aadhaar-enabled. But when reporters from HuffPost and HuffPost India asked these companies in recent weeks to confirm they were integrating Aadhaar into their products, only one company — Google — gave a definitive response.

That’s because Aadhaar has become deeply controversial, and the subject of a major Supreme Court of India case that will decide the future of the program as early as this month. Launched nine years ago as a simple and revolutionary way to streamline access to welfare programs for India’s poor, the database has become Indians’ gateway to nearly any type of service — from food stamps to a passport or a cell phone connection. Practical errors in the system have caused millions of poor Indians to lose out on aid. And the exponential growth of the project has sparked concerns among security researchers and academics that India is taking the first steps toward a surveillance society to rival China’s.


Phone Numbers Were Never Meant as ID

One key lesson from the recent T-Mobile breach and several others: our phone numbers, which serve as a means to identify and verify ourselves, are increasingly being targeted, and companies are neither showing an appetite to work on an alternative identity-management system nor introducing more safeguards for how phone numbers are handled and exchanged. From a report:
Identity management experts have warned for years about over-reliance on phone numbers. But the United States doesn’t offer any type of universal ID, which means private institutions and even the federal government itself have had to improvise. As cell phones proliferated, and phone numbers became more reliably attached to individuals long term, it was an obvious choice to start collecting those numbers even more consistently as a type of ID. But over time, SMS messages, biometric scanners, encrypted apps, and other special functions of smartphones have evolved into forms of authentication as well.

“The bottom line is society needs identifiers,” says Jeremy Grant, coordinator of the Better Identity Coalition, an industry collaboration that includes Visa, Bank of America, Aetna, and Symantec. “We just have to make sure that knowledge of an identifier can’t be used to somehow take over the authenticator. And a phone number is only an identifier; in most cases, it’s public.” Think of your usernames and passwords. The former are generally public knowledge; it’s how people know who you are. But you keep the latter guarded, because it’s how you prove who you are.

The use of phone numbers as both lock and key has led to the rise, in recent years, of so-called SIM-swapping attacks, in which an attacker steals your phone number. When you add two-factor authentication to an account and receive your codes through SMS texts, they go to the attacker instead, along with any calls and texts intended for you. Sometimes attackers even use inside sources at carriers who will transfer numbers for them.
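
Part of the fix is to stop letting the phone number double as an authenticator. App-based one-time codes, for example, derive from a shared secret and the clock rather than from anything the carrier controls, so a SIM swap does not capture them. A minimal TOTP sketch (RFC 6238) using Node’s built-in crypto, with a placeholder secret:

```ts
// Minimal TOTP (RFC 6238): the code depends on a shared secret and the
// current time, not on a phone number, so SIM swapping can't intercept it.
import { createHmac } from "node:crypto";

function hotp(secret: Buffer, counter: bigint, digits = 6): string {
  const msg = Buffer.alloc(8);
  msg.writeBigUInt64BE(counter);             // 8-byte big-endian counter
  const mac = createHmac("sha1", secret).update(msg).digest();
  const offset = mac[mac.length - 1] & 0x0f; // dynamic truncation
  const bin =
    ((mac[offset] & 0x7f) << 24) |
    (mac[offset + 1] << 16) |
    (mac[offset + 2] << 8) |
    mac[offset + 3];
  return (bin % 10 ** digits).toString().padStart(digits, "0");
}

function totp(secret: Buffer, stepSeconds = 30): string {
  const counter = BigInt(Math.floor(Date.now() / 1000 / stepSeconds));
  return hotp(secret, counter);
}

// Placeholder secret for illustration; real apps exchange one at enrollment.
console.log(totp(Buffer.from("12345678901234567890")));
```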


Police Bodycams Can Be Hacked To Doctor Footage, Install Malware

Josh Mitchell’s Defcon presentation analyzes the security of five popular brands of police bodycams (Vievu, Patrol Eyes, Fire Cam, Digital Ally, and CeeSc) and reveals that they are universally terrible. All the devices use predictable network addresses that can be used to remotely sense and identify the cameras when they switch on. None of the devices use code-signing. Some of the devices can form ad-hoc Wi-Fi networks to bridge in other devices, but they don’t authenticate these sign-ons, so you can just connect with a laptop, raid the network for accessible filesystems, gank or alter videos, or drop malware on the devices.
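
The “predictable network addresses” problem is easy to picture: if each vendor’s cameras broadcast Wi-Fi identifiers that follow a fixed pattern, anyone scanning nearby networks can spot and track officers’ cameras. A toy sketch of that matching step, with invented patterns (not the actual identifiers from Mitchell’s talk):

```ts
// Toy sketch: flag scanned Wi-Fi SSIDs that match fixed, vendor-style
// naming patterns. The patterns below are invented placeholders.
const cameraSsidPatterns: RegExp[] = [
  /^BODYCAM-\d{4}$/,     // hypothetical vendor A scheme
  /^PECAM_[0-9A-F]{6}$/, // hypothetical vendor B scheme
];

function findLikelyBodycams(scannedSsids: string[]): string[] {
  return scannedSsids.filter((ssid) =>
    cameraSsidPatterns.some((pattern) => pattern.test(ssid))
  );
}

console.log(findLikelyBodycams(["HomeWiFi", "BODYCAM-0042", "PECAM_3F9A01"]));
// -> ["BODYCAM-0042", "PECAM_3F9A01"]
```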


Child Drownings In Germany Linked To Parents’ Obsession With Mobile Phones

The German Lifeguard Association (DLRG) has made a direct connection between children getting into difficulty in the water and parents being too busy on their mobile phones to notice. More than 300 people have drowned in Germany so far this year.

“Too few parents and grandparents are heeding the advice: when your children and grandchildren are in the water, put your smartphone away,” Achim Wiese, the DLRG’s spokesman, said. “We’re experiencing on a daily basis that people treat swimming pools like a kindergarten and simply don’t pay attention,” added Peter Harzheim of the German federation of swimming pool supervisors. “In the past, parents and grandparents spent more time with their children in the swimming pool. But increasing numbers of parents are fixated by their smartphones and are not looking left or right, let alone paying attention to their children,” he told German media. “It’s sad that parents behave so neglectfully these days.”

The organization also put some blame on the school system for not requiring swimming lessons from an early age. “Budget cuts have also led to swimming pools shortening their opening times,” adds The Guardian.


Children ‘at risk of robot influence’

Forget peer pressure: future generations are more likely to be influenced by robots, a study suggests.

The research, conducted at the University of Plymouth, found that while adults were not swayed by robots, children were.

The fact that children tended to trust robots without question raised ethical issues as the machines became more pervasive, said researchers.

They called for the robotics community to build in safeguards for children.

Those taking part in the study completed a simple test, known as the Asch paradigm, which involved finding two lines that matched in length.

Known as the conformity experiment, the test has historically found that people tend to agree with their peers even if individually they have given a different answer.

In this case, the peers were robots. When children aged seven to nine were alone in the room, they scored an average of 87% on the test. But when the robots joined them, their scores dropped to 75% on average. Of the wrong answers, 74% matched those of the robots.

The researchers concluded: “If robots can convince children (but not adults) that false information is true, the implication for the planned commercial exploitation of robots for childminding and teaching is problematic.”


Banks and Retailers Are Tracking How You Type, Swipe and Tap

When you’re browsing a website and the mouse cursor disappears, it might be a computer glitch — or it might be a deliberate test to find out who you are.

The way you press, scroll and type on a phone screen or keyboard can be as unique as your fingerprints or facial features. To fight fraud, a growing number of banks and merchants are tracking visitors’ physical movements as they use websites and apps.

The data collection is invisible to those being watched. Using sensors in your phone or code on websites, companies can gather thousands of data points, known as “behavioral biometrics.”

A phone’s touchscreen sensors can track where and how you swipe your device to help determine who you are.

The angle at which you hold your device is one of the many biometric markers that can be measured.

Behavioral monitoring software churns through thousands of elements to calculate a probability-based guess about whether a person is who they claim. Two major advances have fed its growing use: the availability of cheap computing power and the sophisticated array of sensors now built into most smartphones.

The system’s unobtrusiveness is part of its appeal, Mr. Hanley said. Traditional physical biometrics, like fingerprints or irises, require special scanning hardware for authentication. But behavioral traits can be captured in the background, without customers doing anything to sign up.

BioCatch occasionally tries to elicit a reaction. It can speed up the selection wheel you use to enter data like dates and times on your phone, or make your mouse cursor disappear for a fraction of a second.

“Everyone reacts a little differently to that,” said Frances Zelazny, BioCatch’s chief strategy and marketing officer. “Some people move the mouse side to side; some people move it up and down. Some bang on the keyboard.”

Because your reaction is so individual, it’s hard for a fraudulent user to fake. And because customers never know the monitoring technology is there, it doesn’t impose the kind of visible, and irritating, roadblocks that typically accompany security tests. You don’t need to press your thumb on your phone’s fingerprint reader or type in an authentication code.

Biometric software can also determine the pressure you tend to apply to your phone when you tap and type.

“We don’t have to sit people down in a room and get them to type under perfect laboratory conditions,” said Neil Costigan, the chief executive of BehavioSec, a Palo Alto, Calif., company that makes software used by many Nordic banks. “You just watch them, silently, while they go about their normal account activities.”
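
The kind of silent watching Costigan describes needs nothing more exotic than standard browser events. Here is a sketch of collecting two classic keystroke-dynamics features, dwell time (key down to key up) and flight time (previous key up to next key down), using ordinary DOM APIs rather than any vendor’s actual product code:

```ts
// Background capture of keystroke-dynamics features with standard DOM events.
// Dwell = how long a key is held; flight = gap between releasing one key
// and pressing the next. Timing profiles like this are highly individual.
const downAt = new Map<string, number>();
let lastKeyUp = 0;
const features: { key: string; dwellMs: number; flightMs: number }[] = [];

document.addEventListener("keydown", (e: KeyboardEvent) => {
  if (!downAt.has(e.code)) downAt.set(e.code, performance.now());
});

document.addEventListener("keyup", (e: KeyboardEvent) => {
  const pressedAt = downAt.get(e.code);
  if (pressedAt === undefined) return;
  downAt.delete(e.code);
  const now = performance.now();
  features.push({
    key: e.code,
    dwellMs: now - pressedAt,
    flightMs: lastKeyUp ? pressedAt - lastKeyUp : 0,
  });
  lastKeyUp = now;
  // A real system would stream these features to a scoring service and
  // compare them against the account holder's historical profile.
});
```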


Google tracks you even if you tell it not to

Google wants to know where you go so badly that it records your movements even when you explicitly tell it not to. An Associated Press investigation found that many Google services on Android devices and iPhones store your location data even if you’ve used privacy settings that say they will prevent it from doing so.

An app like Google Maps will remind you to allow access to location if you use it for navigating. If you agree to let it record your location over time, Google Maps will display that history for you in a “timeline” that maps out your daily movements. Storing your minute-by-minute travels carries privacy risks, and police have used such data to determine the location of suspects; police in Raleigh, North Carolina, for example, served a warrant on Google last year to find devices near a murder scene. So the company will let you “pause” a setting called Location History. Google says that will prevent the company from remembering where you’ve been. Google’s support page on the subject states: “You can turn off Location History at any time. With Location History off, the places you go are no longer stored.” That isn’t true. Even with Location History paused, some Google apps automatically store time-stamped location data without asking.

For example, Google stores a snapshot of where you are when you merely open its Maps app. Automatic daily weather updates on Android phones pinpoint roughly where you are. And some searches that have nothing to do with location, like “chocolate chip cookies,” or “kids science kits,” pinpoint your precise latitude and longitude — accurate to the square foot — and save it to your Google account. The privacy issue affects some two billion users of devices that run Google’s Android operating software and hundreds of millions of worldwide iPhone users who rely on Google for maps or search.


Planet at Risk of Heading Towards Apocalyptic, Irreversible ‘Hothouse Earth’ State

This summer people have been suffering and dying because of heat waves and wildfires in many parts of the world. The past three years were the warmest ever recorded, and 2018 is likely to follow suit. What we do in the next 10-20 years will determine whether our planet remains hospitable to human life or slides down an irreversible path to what scientists in a major new study call “Hothouse Earth” conditions.


The Ultra-Pure, Super-Secret Sand That Makes Your Phone Possible

Alex Glover is a recently retired geologist who has spent decades hunting for valuable minerals in the hillsides and hollows of the Appalachian Mountains around Spruce Pine, North Carolina. Those wooded mountains are rich in all kinds of desirable rocks, some valued for their industrial uses, some for their pure prettiness. But it’s the mineral in Glover’s bag — snowy white grains, soft as powdered sugar — that is by far the most important these days. It’s quartz, but not just any quartz. Spruce Pine, it turns out, is the source of the purest natural quartz — a species of pristine sand — ever found on Earth.

This ultra-elite deposit of silicon dioxide particles plays a key role in manufacturing the silicon used to make computer chips. In fact, there’s an excellent chance the chip that makes your laptop or cell phone work was made using sand from this obscure Appalachian backwater. Most of the world’s sand grains are composed of quartz, which is a form of silicon dioxide, also known as silica. High-purity silicon dioxide particles are the essential raw materials from which we make computer chips, fiber-optic cables, and other high-tech hardware — the physical components on which the virtual world runs.


As Google Maps Renames Neighbourhoods, Residents Fume

For decades, the district south of downtown and alongside San Francisco Bay here was known as either Rincon Hill, South Beach or South of Market. This spring, it was suddenly rebranded on Google Maps to a name few had heard: the East Cut. The peculiar moniker immediately spread digitally, from hotel sites to dating apps to Uber, which all use Google’s map data. The name soon spilled over into the physical world, too. Real-estate listings beckoned prospective tenants to the East Cut. And news organizations referred to the vicinity by that term.

“It’s degrading to the reputation of our area,” said Tad Bogdan, who has lived in the neighborhood for 14 years. In a survey of 271 neighbors that he organized recently, he said, 90 percent disliked the name. The swift rebranding of the roughly 170-year-old district is just one example of how Google Maps has now become the primary arbiter of place names. With decisions made by a few Google cartographers, the identity of a city, town or neighborhood can be reshaped, illustrating the outsize influence that Silicon Valley increasingly has in the real world.
