Resources

YouTube, the Great Radicalizer

At one point during the 2016 presidential election campaign, I watched a bunch of videos of Donald Trump rallies on YouTube. I was writing an article about his appeal to his voter base and wanted to confirm a few quotations.

Soon I noticed something peculiar. YouTube started to recommend and “autoplay” videos for me that featured white supremacist rants, Holocaust denials and other disturbing content.

Since I was not in the habit of watching extreme right-wing fare on YouTube, I was curious whether this was an exclusively right-wing phenomenon. So I created another YouTube account and started watching videos of Hillary Clinton and Bernie Sanders, letting YouTube’s recommender algorithm take me wherever it would.

Before long, I was being directed to videos of a leftish conspiratorial cast, including arguments about the existence of secret government agencies and allegations that the United States government was behind the attacks of Sept. 11. As with the Trump videos, YouTube was recommending content ever more extreme than the mainstream political fare I had started with.

Intrigued, I experimented with nonpolitical topics. The same basic pattern emerged. Videos about vegetarianism led to videos about veganism. Videos about jogging led to videos about running ultramarathons.

It seems as if you are never “hard core” enough for YouTube’s recommendation algorithm. It promotes, recommends and disseminates videos in a manner that appears to constantly up the stakes. Given its billion or so users, YouTube may be one of the most powerful radicalizing instruments of the 21st century.

This is not because a cabal of YouTube engineers is plotting to drive the world off a cliff. A more likely explanation has to do with the nexus of artificial intelligence and Google’s business model. (YouTube is owned by Google.) For all its lofty rhetoric, Google is an advertising broker, selling our attention to companies that will pay for it. The longer people stay on YouTube, the more money Google makes.

What keeps people glued to YouTube? Its algorithm seems to have concluded that people are drawn to content that is more extreme than what they started with — or to incendiary content in general.
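The dynamic described here can be illustrated with a toy model, not a description of YouTube's actual system: if expected watch time rises with how extreme a video is (an assumption made purely for illustration), then a ranker that always picks the highest-engagement video near the current one walks the viewer step by step toward the extreme end of the catalog.

```python
# Toy model of an engagement-maximizing recommender. Videos get an
# "extremity" score from 0 to 100, and we assume (for illustration only)
# that expected watch time rises with extremity. All names and numbers
# here are hypothetical.

def expected_watch_time(extremity: int) -> float:
    return 1.0 + 0.02 * extremity  # assumed engagement curve

def recommend_next(current: int, candidates: list) -> int:
    # Only consider videos "near" the current one, then maximize watch time.
    nearby = [c for c in candidates if abs(c - current) <= 20]
    return max(nearby, key=expected_watch_time)

catalog = list(range(101))  # videos spanning the extremity scale
position = 10               # start with mainstream content
trajectory = [position]
for _ in range(5):
    position = recommend_next(position, catalog)
    trajectory.append(position)

print(trajectory)  # [10, 30, 50, 70, 90, 100]: each step escalates
```

Nothing in the model plots radicalization; escalation falls out of optimizing a single engagement metric.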

Is this suspicion correct? Good data is hard to come by; Google is loath to share information with independent researchers. But we now have the first inklings of confirmation, thanks in part to a former Google engineer named Guillaume Chaslot.

Mr. Chaslot worked on the recommender algorithm while at YouTube. He grew alarmed at the tactics used to increase the time people spent on the site. Google fired him in 2013, citing his job performance. He maintains the real reason was that he pushed too hard for changes in how the company handles such issues.

The Wall Street Journal conducted an investigation of YouTube content with the help of Mr. Chaslot. It found that YouTube often “fed far-right or far-left videos to users who watched relatively mainstream news sources,” and that such extremist tendencies were evident with a wide variety of material. If you searched for information on the flu vaccine, you were recommended anti-vaccination conspiracy videos.

It is also possible that YouTube’s recommender algorithm has a bias toward inflammatory content. In the run-up to the 2016 election, Mr. Chaslot created a program to keep track of YouTube’s most recommended videos as well as its patterns of recommendations. He discovered that whether you started with a pro-Clinton or pro-Trump video on YouTube, you were many times more likely to end up with a pro-Trump video recommended.

Combine this finding with other research showing that during the 2016 campaign, fake news, which tends toward the outrageous, included much more pro-Trump than pro-Clinton content, and YouTube’s tendency toward the incendiary seems evident.

YouTube has recently come under fire for recommending videos promoting the conspiracy theory that the outspoken survivors of the school shooting in Parkland, Fla., are “crisis actors” masquerading as victims. Jonathan Albright, a researcher at Columbia, recently “seeded” a YouTube account with a search for “crisis actor” and found that following the “up next” recommendations led to a network of some 9,000 videos promoting that and related conspiracy theories, including the claim that the 2012 school shooting in Newtown, Conn., was a hoax.

What we are witnessing is the computational exploitation of a natural human desire: to look “behind the curtain,” to dig deeper into something that engages us. As we click and click, we are carried along by the exciting sensation of uncovering more secrets and deeper truths. YouTube leads viewers down a rabbit hole of extremism, while Google racks up the ad sales.

Human beings have many natural tendencies that need to be vigilantly monitored in the context of modern life. For example, our craving for fat, salt and sugar, which served us well when food was scarce, can lead us astray in an environment in which fat, salt and sugar are all too plentiful and heavily marketed to us. So too our natural curiosity about the unknown can lead us astray on a website that leads us too much in the direction of lies, hoaxes and misinformation.

In effect, YouTube has created a restaurant that serves us increasingly sugary, fatty foods, loading up our plates as soon as we are finished with the last meal. Over time, our tastes adjust, and we seek even more sugary, fatty foods, which the restaurant dutifully provides. When confronted about this by the health department and concerned citizens, the restaurant managers reply that they are merely serving us what we want.

This situation is especially dangerous given how many people — especially young people — turn to YouTube for information. Google’s cheap and sturdy Chromebook laptops, which now make up more than 50 percent of the pre-college laptop education market in the United States, typically come loaded with ready access to YouTube.

This state of affairs is unacceptable but not inevitable. There is no reason to let a company make so much money while potentially helping to radicalize billions of people, reaping the financial benefits while asking society to bear so many of the costs.

‘Living laboratories’: the Dutch cities amassing data on oblivious residents

Stratumseind in Eindhoven is one of the busiest nightlife streets in the Netherlands. On a Saturday night, bars are packed, music blares through the street, and laughter and drunken shouting bounce off the walls. As the night progresses, the ground becomes littered with empty shot bottles, energy drink cans, cigarette butts and broken glass.

It’s no surprise that the place is also known for its frequent fights. To change that image, Stratumseind has become one of the “smartest” streets in the Netherlands. Lamp-posts have been fitted with wifi-trackers, cameras and 64 microphones that can detect aggressive behaviour and alert police officers to altercations. There has been a failed experiment to change light intensity to alter the mood. The next plan, starting this spring, is to diffuse the smell of oranges to calm people down. The aim? To make Stratumseind a safer place.

All the while, data is being collected and stored. “Visitors do not realise they are entering a living laboratory,” says Maša Galic, a researcher on privacy in the public space for the Tilburg Institute for Law, Technology, and Society. Since the data on Stratumseind is used to profile, nudge or actively target people, this “smart city” experiment is subject to privacy law. According to the Dutch Personal Data Protection Act, people should be notified in advance of data collection and the purpose should be specified – but in Stratumseind, as in many other “smart cities”, this is not the case.

Peter van de Crommert is involved at Stratumseind as project manager with the Dutch Institute for Technology, Safety and Security. He says visitors do not have to worry about their privacy: the data is about crowds, not individuals. “We often get that comment – ‘Big brother is watching you’ – but I prefer to say, ‘Big brother is helping you’. We want safe nightlife, but not a soldier on every street corner.”

When we think of smart cities, we usually think of big projects: Songdo in South Korea, the IBM control centre in Rio de Janeiro or the hundreds of new smart cities in India. More recent developments include Toronto, where Google will build an entirely new smart neighbourhood, and Arizona, where Bill Gates plans to build his own smart city. But the reality of the smart city is that it has stretched into the everyday fabric of urban life – particularly so in the Netherlands.

In the eastern city of Enschede, city traffic sensors pick up your phone’s wifi signal even if you are not connected to the wifi network. The trackers register your MAC address, the unique network card number in a smartphone. The city council wants to know how often people visit Enschede, and what their routes and preferred spots are. Dave Borghuis, an Enschede resident, was not impressed and filed an official complaint. “I don’t think it’s okay for the municipality to track its citizens in this way,” he said. “If you walk around the city, you have to be able to imagine yourself unwatched.”
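What such trackers can infer from MAC addresses alone can be sketched in a few lines. Real trackers capture 802.11 probe frames over the air; the sightings below are simulated and the addresses hypothetical.

```python
from collections import Counter

# Illustrative sketch of a lamp-post wifi tracker's data. No wifi
# connection is needed: phones broadcast probe requests on their own,
# and each probe carries the device's MAC address.

observations = [  # (sensor location, MAC address seen) -- hypothetical
    ("station", "aa:bb:cc:dd:ee:01"),
    ("market",  "aa:bb:cc:dd:ee:01"),
    ("station", "aa:bb:cc:dd:ee:02"),
    ("market",  "aa:bb:cc:dd:ee:01"),
]

# Repeat visits per device, and one device's route through the city.
visits_per_device = Counter(mac for _, mac in observations)
route = [loc for loc, mac in observations if mac == "aa:bb:cc:dd:ee:01"]

print(visits_per_device["aa:bb:cc:dd:ee:01"])  # 3 sightings of one phone
print(route)  # ['station', 'market', 'market']
```

A stable hardware identifier plus a handful of sensors is already enough to answer the council's questions about visit frequency, routes and preferred spots.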

Enschede is enthusiastic about the advantages of the smart city. The municipality says it is saving €36m in infrastructure investments by launching a smart traffic app that rewards people for good behaviour like cycling, walking and using public transport. (Ironically, one of the rewards is a free day of private parking.) Only those who mine the small print will discover that the app creates “personal mobility profiles”, and that the collected personal data belongs to the company Mobidot.
‘Targeted supervision’ in Utrecht

Companies are getting away with it in part because it involves new applications of data. In Silicon Valley they call it “permissionless innovation”: the belief that technological progress should not be stifled by public regulations. For the same reason, they can be secretive about what data is collected in a public space and what it is used for. Often the cities themselves don’t know.

Utrecht has become a tangle of individual pilots and projects, with no central overview of how many cameras and sensors exist, nor what they do. In 2014, the city invested €80m in data-driven management, launching 80 projects. Utrecht now has a burglary predictor, a social media monitoring room, and smart bins and smart streetlights with sensors (although the city couldn’t say where these are located). It has scanner cars that dispense parking tickets and, according to the cars’ own privacy regulation, also detect residents with a municipal tax debt. But when I asked the city to respond to a series of questions on just 22 of the smart projects, it could answer for only five of them, referring me to private companies for the rest of the answers.

The city also keeps track of the number of young people hanging out in the streets, their age group, whether they know each other, the atmosphere and whether or not they cause a nuisance. Special enforcement officers keep track of this information through mobile devices. It calls this process “targeted and innovative supervision”. Other council documents mention the prediction of school drop-outs, the prediction of poverty and the monitoring of “the health of certain groups” with the aim of “intervening faster”.

Like many cities, Utrecht argues that it acts in accordance with privacy laws because it anonymises or pseudonymises data (assigning it a number instead of a name or address). But pseudonymised personal data is still personal data. “The process is not irreversible if the source file is stored,” says Mireille Hildebrandt, professor of ICT and Law at Radboud University. “Moreover, if you build personal profiles and act on them, it is still a violation of privacy and such profiling can – unintentionally – lead to discrimination.” She points to Utrecht’s plan to register the race and health data of prostitutes, which came in for heavy criticism from the Dutch Data Protection Authority.
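Hildebrandt's point, that pseudonymisation reverses whenever the source data survives, can be shown with a minimal sketch (hypothetical MAC addresses; the truncated-hash scheme is an assumed example, not any city's documented method):

```python
import hashlib

# "A number instead of a name": publish a truncated hash of each MAC.
# Because the input space is small and structured, anyone holding the
# source file -- or willing to brute-force likely inputs -- can map the
# pseudonyms straight back to devices.

def pseudonymise(mac: str) -> str:
    return hashlib.sha256(mac.encode()).hexdigest()[:12]

source_file = ["aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02"]  # retained originals
published = [pseudonymise(mac) for mac in source_file]

# Reversal: whoever holds the source file re-hashes it and matches.
lookup = {pseudonymise(mac): mac for mac in source_file}
reidentified = [lookup[p] for p in published]

print(reidentified == source_file)  # True: the pseudonyms come undone
```

The pseudonym is only as protective as the secrecy of the source file it was derived from.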

Another unanswered question regards who owns data that is collected in a public space. Arjen Hof is director of Civity, a company that builds data platforms for governments. “Public authorities are increasingly outsourcing tasks to private companies. Think of waste removal or street lighting,” he says. “But they do not realise that at the same time a lot of data is collected, and do not always make agreements about the ownership of data.”
‘A smart city is a privatised city’

Hof gives the example of CityTec, a company that manages 2,000 car parks, 30,000 traffic lights and 500,000 lamp-posts across the Netherlands. It refused to share with municipalities the data it was collecting through its lamp-post sensors. “Their argument was that, although the municipality is legally owner of the lamp-posts, CityTec is the economic owner and, for competitive reasons, did not want to make the data available,” Hof says. This was three years ago, but for a lot of companies it remains standard practice. Companies dictate the terms, and cities say they can’t share the contracts because they contain “competition-sensitive information”.

When I interviewed the technology writer Evgeny Morozov in October, he warned of cities becoming too dependent on private companies. “The culmination of the smart city is a privatised city,” he said. “A city in which you have to pay for previously free services.”

Morozov’s fear about public subsidies being used for private innovation is well illustrated in Assen, a city of 70,000 people in the north of the country. Assen built a fibre-optic network for super-fast internet in 2011, to which it connected 200 sensors that measure, among other things, the flow of cars. There was an experiment to steer people around traffic jams, even though traffic in the city is relatively light. The city also connected its traffic lights, parking garages and parking signs to this grid. The cost of €46m was split between Brussels, the national government, the province and the municipality. Companies such as the car navigation firm TomTom have used the sensor network to test new services.

The project, called Sensor City, filed for bankruptcy a year ago. Now the publicly funded fibre-optic network, sensors and all, will be sold to a still-unidentified private company. The municipality will have to strike a deal with the new owner about the use of its public traffic lights and parking signs.

Young people would rather have an Internet connection than daylight

The average young person in Britain thinks having access to the internet is more important than daylight, according to a new poll.

British youths aged between 18 and 25 were asked to identify five things which they felt were important to maintain their quality of life.

Freedom of speech topped the list, picked by 81% of the 2,465 surveyed. Nearly seven in 10 (69%) chose an internet connection, followed by 64% who said daylight and 57% hot water.

Only 37% said a welfare system – including the NHS – was important, with a measly 11% choosing a good night’s sleep.

The respondents who identified an internet connection as one of the most important aspects were asked how many times they used the internet every day. The average answer was 78 times.

The youths were also asked to identify what they would most like to change in order to improve their quality of life. The most popular answer (34%) was holidays, followed by more sleep (28%) and “having a bigger following on social media” (14%).

Children struggle to hold pencils due to too much tech, doctors say

Children are increasingly finding it hard to hold pens and pencils because of an excessive use of technology, senior paediatric doctors have warned. An overuse of touchscreen phones and tablets is preventing children’s finger muscles from developing sufficiently to enable them to hold a pencil correctly, they say.

“Children are not coming into school with the hand strength and dexterity they had 10 years ago,” said Sally Payne, the head paediatric occupational therapist at the Heart of England foundation NHS Trust. “Children coming into school are being given a pencil but are increasingly not able to hold it because they don’t have the fundamental movement skills.

“To be able to grip a pencil and move it, you need strong control of the fine muscles in your fingers. Children need lots of opportunity to develop those skills.”

Payne said the nature of play had changed. “It’s easier to give a child an iPad than encouraging them to do muscle-building play such as building blocks, cutting and sticking, or pulling toys and ropes. Because of this, they’re not developing the underlying foundation skills they need to grip and hold a pencil.”

Although the early years curriculum has handwriting targets for every year, different primary schools focus on handwriting in different ways – with some using tablets alongside pencils, Prunty said. This becomes a problem when the same children also spend large periods of time on tablets outside school.

Facebook silently enables facial recognition abilities for users outside EU and Canada

Facebook is now informing users around the world that it’s rolling out facial recognition features. In December, we reported the features would be coming to the platform; that roll out finally appears to have begun. It should be noted that users in the European Union and Canada will not be notified because laws restrict this type of activity in those areas.

With the new tools, you’ll be able to find photos that you’re in but haven’t been tagged in; they’ll help you protect yourself against strangers using your photo; and Facebook will be able to tell people with visual impairments who’s in their photos and videos. Facebook notes that the feature is enabled by default but can be switched off at any time; additionally, the firm says it may add new capabilities at any time.

While Facebook may want its users to “feel confident” uploading pictures online, it will likely give many other users the heebie-jeebies when they think of the colossal database of faces that Facebook has and what it could do with all that data. Even non-users should be cautious about which photos they appear in if they don’t want to be caught up in Facebook’s web of data.

Nearly half of parents worry their child is addicted to mobile devices

Parents berate themselves for staying glued to their smartphones. But they’re even more worried their kids can’t detach from the small screen.

A survey from Common Sense Media and SurveyMonkey found 47% of parents worry their child is addicted to their mobile device. By comparison, only 32% of parents say they’re addicted themselves.

Half of parents also say they are at least somewhat concerned about how mobile devices will affect their kids’ mental health. Nearly one in five say they’re “extremely” or “very” concerned.

“For as much attention as technology addiction receives among adults, parents — particularly those with teenagers — are far more concerned about their children’s device usage than their own,” Jon Cohen, chief research officer with SurveyMonkey, said in a statement Thursday.

According to the survey, 89% of parents believe it’s up to them to curb their children’s smartphone usage.

The survey conducted between Jan. 25 and Jan. 29 included a sample of 4,201 adults, including 1,024 parents with children under age 18. Data was weighted to reflect the demographic composition of the U.S. for adults over 18, based on Census data.

Recently, tech giants including Facebook, Google and Apple have been pushed to come up with solutions to prevent kids from growing addicted to technology.

This month, Common Sense Media and the non-profit Center for Humane Technology launched a campaign to explore the mental health consequences of technology.

The Truth About Tech campaign will be funded by $7 million from Common Sense and money raised by the new non-profit. At the time the campaign was launched, Common Sense said it would also use donated airtime from Comcast and DirecTV. The nonprofit now says its earlier statement is not accurate. Comcast said it is not supporting the Truth About Tech campaign. DirecTV would not comment.

“Parental concerns about technology addiction and the content children are exposed to on devices is very real, yet parents feel that they alone are responsible for managing these issues,” Common Sense Media CEO James P. Steyer said. “It would be nice if the tech companies would partner with parents in this effort.”

Former employees of Facebook and Google are among those leading the charge to urge tech companies to act.

Last month, two major Apple investors implored the company to explore ways to fight smartphone addiction among children. The investors collectively control about $2 billion in Apple shares. In response, Apple said it was adding more “robust” parental controls to its devices.

Common Sense and other children’s advocacy groups have particularly criticized Facebook for recently rolling out a Messenger Kids app aimed at kids under 13. Facebook has defended its decision to go ahead with it, pointing to the advice it received from a team of child experts and efforts it took to make sure parents had control over the app. But a recent Wired report detailing Facebook’s financial support of these experts, which Facebook says covers logistics costs for their time, added more momentum to the controversy over whether tech companies are trying to get kids hooked too early.

Many devices and services feature parental controls, but some parents may not be aware they exist. The Common Sense-SurveyMonkey survey found 22% of parents did not know YouTube — which has faced scrutiny over how easy it is for kids to find inappropriate videos — offered parental controls. Also, 37% have not used the controls before.

That’s a problem, as YouTube is often the go-to entertainment platform for kids, who have made creators such as Logan Paul superstars.

Among parents surveyed who say their kids watch YouTube videos, 62% said their kids have seen inappropriate videos on the site. Most, or 81%, said it’s the parents’ job to prevent kids from seeing these videos.

For parents who worry their kids spend too much time on their smartphones, here are some tips:

*Set time limits and enforce them. Block out time during the day where your kids can use a smartphone or tablet. And don’t give in when they might beg for “one more minute.”

*Explore parental controls. Most services such as YouTube offer them, but your smartphone has its own suite of tools to tailor the experience to your kids.

*Try zones where tech is not allowed. Want a phone-free dinner? Or no gadgets before bed time? Consider creating areas in your home where technology is completely off limits. Just remember parents have to follow these guidelines, too.

Efforts grow to help students evaluate what they see online

Alarmed by the proliferation of false content online, state lawmakers [in the United States] are pushing schools to put more emphasis on teaching students how to tell fact from fiction.

Lawmakers in several states have introduced or passed bills calling on public school systems to do more to teach media literacy skills that they say are critical to democracy. The effort has been bipartisan but has received little attention despite successful legislation in Washington state, Connecticut, Rhode Island and New Mexico.

Advocates say the K-12 curriculum has not kept pace with rapid changes in technology. Studies show many children spend hours every day online but struggle to comprehend the content that comes at them.

For years, they have pushed schools to incorporate media literacy — including the ability to evaluate and analyze sources of information — into lesson plans in civics, language arts, science and other subjects.

How Do You Vote? 50 Million Google Images Give a Clue

What vehicle is most strongly associated with Republican voting districts? Extended-cab pickup trucks. For Democratic districts? Sedans.

Those conclusions may not be particularly surprising. After all, market researchers and political analysts have studied such things for decades.

But what is surprising is how researchers working on an ambitious project based at Stanford University reached those conclusions: by analyzing 50 million images and location data from Google Street View, the street-scene feature of the online giant’s mapping service.

For the first time, helped by recent advances in artificial intelligence, researchers are able to analyze large quantities of images, pulling out data that can be sorted and mined to predict things like income, political leanings and buying habits. In the Stanford study, computers collected details about cars in the millions of images they processed, including makes and models.
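The kind of join the Stanford team performed, image-derived vehicle counts linked to public voting records and checked for an association, might look like this in miniature. Every figure below is invented for illustration.

```python
# Miniature version of the approach: counts of vehicle types extracted
# from street imagery, joined to (hypothetical) district voting records.

districts = {
    # district: (pickup trucks seen, sedans seen, winning party)
    "A": (120, 40, "R"),
    "B": (30, 150, "D"),
    "C": (95, 60, "R"),
    "D": (20, 130, "D"),
}

def predict(pickups: int, sedans: int) -> str:
    # The article's headline association: pickups lean Republican,
    # sedans Democratic.
    return "R" if pickups > sedans else "D"

correct = sum(predict(p, s) == party for p, s, party in districts.values())
print(f"{correct}/{len(districts)} districts matched")  # 4/4 on this toy data
```

The hard technical work in the real project is the image recognition and the scale of the join; once the counts exist, the correlation step itself is simple.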

Identifying so many car images in such detail was a technical feat. But it was linking that new data set to public collections of socioeconomic and environmental information, and then tweaking the software to spot patterns and correlations, that makes the Stanford project part of what computer scientists see as the broader application of image data.

Forbes: Cellebrite can unlock every iPhone

Cellebrite, a Petah Tikva, Israel-based vendor that’s become the U.S. government’s company of choice when it comes to unlocking mobile devices, is this month telling customers its engineers currently have the ability to get around the security of devices running iOS 11. That includes the iPhone X, a model that Forbes has learned was successfully raided for data by the Department of Homeland Security back in November 2017, most likely with Cellebrite technology.

The Israeli firm, a subsidiary of Japan’s Sun Corporation, hasn’t made any major public announcement about its new iOS capabilities. But Forbes was told by sources (who asked to remain anonymous as they weren’t authorized to talk on the matter) that in the last few months the company has developed undisclosed techniques to get into iOS 11 and is advertising them to law enforcement and private forensics folk across the globe. Indeed, the company’s literature for its Advanced Unlocking and Extraction Services offering now notes the company can break the security of “Apple iOS devices and operating systems, including iPhone, iPad, iPad mini, iPad Pro and iPod touch, running iOS 5 to iOS 11.” Separately, a source in the police forensics community told Forbes he’d been told by Cellebrite it could unlock the iPhone 8. He believed the same was most probably true for the iPhone X, as security across both of Apple’s newest devices worked in much the same way.

Apple in Talks to Buy Cobalt Directly From Miners

Apple Inc. is in talks to buy long-term supplies of cobalt directly from miners for the first time, according to people familiar with the matter, seeking to ensure it will have enough of the key battery ingredient amid industry fears of a shortage driven by the electric vehicle boom…

Boston Dynamics is “teaching” its robot dog to fight back against humans

Boston Dynamics’ well-mannered [sic] four-legged machine SpotMini has already proved that it can easily open a door and walk through unchallenged, but now the robotics firm, formerly owned by Google and now by SoftBank, is teaching its robo-canines to fight back.

A newly released video shows SpotMini approaching the door as before, but this time it’s joined by a pesky human with an ice hockey stick. Unperturbed by his distractions, SpotMini continues to grab the handle and turn it even after its creepy fifth arm with a claw on the front is pushed away.

If that assault wasn’t enough, the human’s robot bullying continues, shutting the door on Spot, which counterbalances and fights back against the pressure. In a last-ditch effort to stop the robot dog breaching the threshold, the human grabs at a leash attached to the back of the SpotMini and yanks.

The robot valiantly trudges forward attempting to shake off this cowardly move, losing its tail in the process and looking ever more like a dog fighting its owner. Eventually the human gives in, SpotMini rights itself, lines up with the door, grabs the handle and across the threshold it goes.

Boston Dynamics describes the video as “a test of SpotMini’s ability to adjust to disturbances as it opens and walks through a door” because “the ability to tolerate and respond to disturbances like these improves successful operation of the robot”. The firm helpfully notes that, despite a back piece flying off, “this testing does not irritate or harm the robot”.

Is social media causing childhood depression?

Rangan Chatterjee is a GP and says he has seen plenty of evidence of the link between mental ill-health in youngsters and their use of social media.

One 16-year-old boy was referred to him after he self-harmed and ended up in A&E.

“The first thought was to put him on anti-depressants but I chatted to him and it sounded like his use of social media was having a negative impact on his health.”

So Dr Chatterjee suggested a simple solution – the teenager should attempt to wean himself off social media, restricting himself to just an hour before he went to bed. Over the course of a few weeks, he should extend this to two hours at night and two in the morning.

“He reported a significant improvement in his wellbeing and, after six months, I had a letter from his mother saying he was happier at school and integrated into the local community.”

That and similar cases have led him to question the role social media plays in the lives of young people.

“Social media is having a negative impact on mental health,” he said. “I do think it is a big problem and that we need some rules. How do we educate society to use technology so it helps us rather than harms us?”

A 2017 study by the Royal Society for Public Health asked 1,500 young people aged 11-25 to track their moods while using the five most popular social media sites.

It suggested Snapchat and Instagram were the most likely to inspire feelings of inadequacy and anxiety. YouTube had the most positive influence.

Seven in 10 said Instagram made them feel worse about body image and half of 14-24-year-olds reported Instagram and Facebook exacerbated feelings of anxiety. Two-thirds said Facebook made cyber-bullying worse.

Consultant psychiatrist Louise Theodosiou says one of the clearest indications children are spending too long on their phones is their behaviour during a session with a psychiatrist.

“Two or three years ago, it was very unusual for a child to answer their phone or text during an appointment. But now it is common,” said the Royal Manchester Children’s Hospital doctor.

She has seen a rise in cases where social media is a contributing factor in teenage depression, anxiety and other mental health issues. These problems are often complex and wide-ranging – from excessive use of gaming or social media sites to feelings of inadequacy brought on by a constant bombardment of social media images of other people’s lives, to cyber-bullying.

Often such children will refuse to travel to psychiatrist appointments, so a range of professionals have to make home visits to deal with the issue. It can take months to persuade them to leave their bedrooms.

“These kids are living in a fictional world, sometimes to the detriment of their physical health. They might have physical ill-health, like toothache, but they are still not wanting to leave their virtual worlds,” she said.

Dr Theodosiou has seen first-hand how difficult it can be for parents. She has heard of some sleeping with the home router to make sure the children cannot connect to the wi-fi in the middle of the night.

Even for those children whose social media use may be judged normal, there are still dangers in the way the internet has become a conduit into the lives of friends and celebrities.

Stare Into The Lights My Pretties

Energy riches fuel bitcoin craze for speculation-shy Iceland

Iceland is expected to use more energy ‘mining’ bitcoins and other virtual currencies this year than it uses to power its homes.

With massive amounts of electricity needed to run the computers that create bitcoins, large virtual currency companies have established a base in the North Atlantic island nation blessed with an abundance of renewable energy.

The energy demand has developed because of the soaring cost of producing and collecting virtual currencies. Computers are used to make the complex calculations that verify a running ledger of all the transactions in virtual currencies around the world.
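The “complex calculations” behind this energy demand are a brute-force proof-of-work search: mining hardware hashes candidate blocks over and over until a digest happens to meet a difficulty target, and every failed attempt is discarded work. A minimal Python sketch of the idea (toy data and a hex-prefix target for illustration, not Bitcoin’s actual block format or difficulty rules):

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Find a nonce so that SHA-256(block_data + nonce) starts with
    `difficulty` hex zeros. Every failed attempt is wasted computation --
    the source of the electricity consumption described above."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Each extra zero of difficulty multiplies the expected work by 16.
nonce = mine("toy transactions", 4)
```

Real mining uses double SHA-256 over a block header and a numeric target, but the economics are the same: expected energy use scales directly with the difficulty of the search.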

Among the main attractions of setting up bitcoin mines at the edge of the Arctic Circle is the natural cooling for computer servers and the competitive prices for Iceland’s abundance of renewable energy from geothermal and hydroelectric power plants.

Johann Snorri Sigurbergsson, a business development manager at the energy company Hitaveita Sudurnesja, said he expected Iceland’s virtual currency mining to double its energy consumption to about 100 megawatts this year. That is more than households use on the island nation of 340,000, according to Iceland’s National Energy Authority.

Pirate Party legislator Smári McCarthy has questioned the value of bitcoin mining for Icelandic society, saying residents should consider regulating and taxing the emerging industry.

“We are spending tens or maybe hundreds of megawatts on producing something that has no tangible existence and no real use for humans outside the realm of financial speculation,” he said. “That can’t be good.”


Researchers create simulation of a worm’s neural network

Researchers at the Technische Universität Wien have created a simulation of the neural network of a simple worm (the nematode C. elegans) and have been able to completely mimic the worm’s natural reflexive behavior. According to the article, using a simple neural network of 300 neurons, the simulation of “the worm can find its way, eat bacteria and react to certain external stimuli. It can, for example, react to a touch on its body. A reflexive response is triggered and the worm squirms away. This behavior is determined by the worm’s nerve cells and the strength of the connections between them. When this simple reflex network is recreated on a computer, the simulated worm reacts in exactly the same way to a virtual stimulation — not because anybody programmed it to do so, but because this kind of behavior is hard-wired in its neural network.” Using the same neural network without adding any additional nerve cells, Mathias Lechner, Radu Grosu and Ramin Hasani were able to have the nematode simulation learn to balance a pole “just by tuning the strength of the synaptic connections. This basic idea (tuning the connections between nerve cells) is also the characteristic feature of any natural learning process.”
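The closing idea, that learning amounts to tuning the strengths of the connections between nerve cells, can be illustrated with a toy two-neuron reflex. The sketch below is purely illustrative (hypothetical stimuli, and a brute-force search over weights standing in for the researchers’ actual tuning procedure):

```python
# Toy "reflex": two sensory inputs feed one motor neuron, which fires
# when the weighted sum of the stimuli is positive.
def fires(weights, stimulus):
    return weights[0] * stimulus[0] + weights[1] * stimulus[1] > 0

# Hypothetical training cases: the reflex should fire only when the
# first stimulus dominates the second.
cases = [((1.0, 0.2), True), ((0.1, 0.9), False),
         ((0.8, 0.3), True), ((0.2, 1.0), False)]

def fitness(weights):
    """Number of cases the reflex handles correctly."""
    return sum(fires(weights, s) == want for s, want in cases)

# "Learning" = adjusting connection strengths until behavior improves;
# here a coarse grid search over the two weights plays that role.
best = max(((a / 10, b / 10) for a in range(-10, 11) for b in range(-10, 11)),
           key=fitness)
print(fitness(best))  # -> 4: all cases handled
```

No neuron is added or removed during this process; only the connection strengths change, which is the point the researchers make about natural learning.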


Early Facebook and Google employees are planning to lobby against tech addiction

A new alliance made up of former Silicon Valley cronies has assembled to challenge the technological Frankenstein they’ve collectively created. “The Center for Humane Technology” is a group comprising former employees and pals of Google, Facebook, and Mozilla. The nonprofit hopes that it can raise awareness about the societal tolls of technology, which its members believe are inherently addictive. The group will lobby for a bill to research the effects of technology on children’s health.

On Feb. 7, the group’s members will participate in a conference focused on digital health for kids, hosted by the nonprofit Common Sense.

The group also plans an anti-tech-addiction ad campaign at 55,000 schools across America, and has $50 million in media airtime donated by partners including Comcast and DirecTV.

The group’s co-founder, a former Google design ethicist, told Quartz that tech companies “profit by drilling into our brains to pull the attention out of it, by using persuasion techniques to keep [us] hooked.” And the group’s web page argues that “What began as a race to monetize our attention is now eroding the pillars of our society: mental health, democracy, social relationships, and our children.”


Google and Facebook are watching our every move online

You may know that hidden trackers lurk on most websites you visit, soaking up your personal information. What you may not realize is that 76 percent of websites now contain hidden Google trackers and 24 percent contain hidden Facebook trackers, according to the Princeton Web Transparency & Accountability Project. The next highest is Twitter, with 12 percent. It is likely that Google or Facebook is watching you on many of the sites you visit, in addition to tracking you when you use their products. As a result, these two companies have amassed huge data profiles on each person, which can include your interests, purchases, search, browsing and location history, and much more. They then make your sensitive data profile available for invasive targeted advertising that can follow you around the Internet.

So how do we move forward from here? Don’t be fooled by claims of self-regulation: any useful long-term reform of Google’s and Facebook’s data privacy practices fundamentally opposes their core business model of hyper-targeted advertising based on ever more intrusive personal surveillance. Change must come from the outside.

Unfortunately, we’ve seen relatively little from Washington. Congress and federal agencies need to take a fresh look at what can be done to curb these data monopolies. They first need to demand more algorithmic and privacy-policy transparency, so people can truly understand the extent to which their personal information is being collected, processed and used by these companies. Only then can informed consent be possible. They also need to legislate that people own their own data, enabling real opt-outs. Finally, they need to restrict how data can be combined, including being more aggressive at blocking acquisitions that further consolidate data power, which would pave the way for more competition in digital advertising. Until we see such meaningful changes, consumers should vote with their feet.


Facebook Really Wants You to Come Back

The social network is getting aggressive with people who don’t log in often, working to keep up its engagement numbers.

It’s been about a year since Rishi Gorantala deleted the Facebook app from his phone, and the company has only gotten more aggressive in its emails to win him back. The social network started out by alerting him every few days about friends that had posted photos or made comments—each time inviting him to click a link and view the activity on Facebook. He rarely did.

Then, about once a week in September, he started to get prompts from a Facebook security customer-service address. “It looks like you’re having trouble logging into Facebook,” the emails would say. “Just click the button below and we’ll log you in. If you weren’t trying to log in, let us know.” He wasn’t trying. But he doesn’t think anybody else was, either.

“The content of mail they send is essentially trying to trick you,” said Gorantala, 35, who lives in Chile. “Like someone tried to access my account so I should go and log in.”


Car Manufacturers Are Tracking Millions of Cars

Millions of new cars sold in the US and Europe are “connected,” with some mechanism for exchanging data with their manufacturers after the cars are sold. These cars stream or batch-upload location data and other telemetry to their manufacturers, who argue that they are allowed to do virtually anything they want with this data thanks to the “explicit consent” of the car owners, who signed a lengthy purchase contract containing a vague and misleading clause deep in its fine print.

Slashdot reader Luthair adds that “OnStar infamously has done this for some time, even if the vehicle’s owner was not a subscriber of their services.” But now 78 million cars have an embedded cyber connection, according to one report, with analysts predicting 98% of new cars will be “connected” by 2021. The Washington Post calls it “Big Brother on Wheels.”

“Carmakers have turned on a powerful spigot of precious personal data, often without owners’ knowledge, transforming the automobile from a machine that helps us travel to a sophisticated computer on wheels that offers even more access to our personal habits and behaviors than smartphones do.”


Facebook should be ‘regulated like cigarette industry’, says tech CEO

Facebook should be regulated like a cigarette company, because of the addictive and harmful properties of social media, according to Salesforce chief executive Marc Benioff.

Last week, venture capitalist Roger McNamee – an early investor in Facebook – wrote a Guardian column warning that the company would have to “address the harm the platform has caused through addiction and exploitation by bad actors”.

“I was once Mark Zuckerberg’s mentor, but I have not been able to speak to him about this. Unfortunately, all the internet platforms are deflecting criticism and leaving their users in peril,” McNamee wrote.

Earlier, Sean Parker, Facebook’s first President, had described the business practice of social media firms as “a social-validation feedback loop … exactly the kind of thing that a hacker like myself would come up with, because you’re exploiting a vulnerability in human psychology”. Parker now describes himself as “something of a conscientious objector” to social media.

As part of its attempt to win back control of the narrative, Facebook has announced it will begin taking into account how trusted a publisher is as part of its News Feed algorithm. The company’s metric for determining trust, however, is a simple two-question survey, leading some to question how effective it will be.


Study links decline in teenagers’ happiness to smartphones

A precipitous drop in the happiness, self-esteem and life satisfaction of American teenagers came as their ownership of smartphones rocketed from zero to 73 percent and they devoted an increasing share of their time online. Coincidence? New research suggests it is not. In a study published Monday in the journal Emotion, psychologists from San Diego State University and the University of Georgia used data on mood and media culled from roughly 1.1 million U.S. teens to figure out why a decades-long rise in happiness and satisfaction among U.S. teenagers suddenly shifted course in 2012 and declined sharply over the next four years.

In the new study, researchers tried to find the cause by plumbing a trove of eighth-, 10th- and 12th-graders’ responses to queries on how they felt about life and how they used their time. They found that between 1991 and 2016, adolescents who spent more time on electronic communication and screens — social media, texting, electronic games, the internet — were less happy, less satisfied with their lives and had lower self-esteem. TV watching, which declined over the nearly two decades they examined, was similarly linked to lower psychological well-being.

By contrast, adolescents who spent more time on non-screen activities had higher psychological well-being. They tended to profess greater happiness, higher self-esteem and more satisfaction with their lives. While these patterns emerged in the group as a whole, they were particularly clear among eighth- and 10th-graders, the authors found: “Every non-screen activity was correlated with greater happiness, and every screen activity was correlated with less happiness.”
