Resources

U.S. government begins asking foreign travelers about their social media at border

“Foreign travelers arriving in the United States on the visa waiver program have been presented with an “optional” request to “enter information associated with your online presence,” a government official confirmed Thursday. The prompt includes a drop-down menu that lists platforms including Facebook, Google+, Instagram, LinkedIn and YouTube, as well as a space for users to input their account names on those sites. The new policy comes as Washington tries to improve its ability to spot and deny entry to individuals who have ties to terrorist groups like the Islamic State. But the government has faced a barrage of criticism since it first floated the idea last summer. The Internet Association, which represents companies including Facebook, Google and Twitter, at the time joined with consumer advocates to argue the draft policy threatened free expression and posed new privacy and security risks to foreigners. Now that it is final, those opponents are furious the Obama administration ignored their concerns. The question itself is included in what’s known as the Electronic System for Travel Authorization, a process that certain foreign travelers must complete to come to the United States. ESTA and a related paper form specifically apply to those arriving here through the visa-waiver program, which allows citizens of 38 countries to travel and stay in the United States for up to 90 days without a visa.”

“Information glut no problem for most Americans: survey”

“Most Americans do not see “information overload” as a problem for them despite the explosion of internet data and images, according to a Pew Research Center survey on Wednesday.

Only 20 percent of U.S. adults feel they get more information than they can handle, down from 27 percent a decade ago. Just over three-quarters like having so much information at hand, the survey of 1,520 people showed.

“Generally, Americans appreciate lots of information and access to it,” said the report into how U.S. adults cope with information demands.

Roughly four in five Americans agree that they are confident about using the internet to keep up with information demands, that a lot of information gives them a feeling of more control over their lives, and that they can easily determine what information is trustworthy.

Americans who are 65 or older, have a high school diploma or less and earn less than $30,000 a year are more likely to say they face a glut of information.

Eighty-four percent of Americans with online access through three sources – home broadband, smartphone and tablet computer – say they like having so much information available.

By contrast, 55 percent of those with no online source felt overwhelmed by the amount of possible information.

The term “information overload” was popularized by author Alvin Toffler in his 1970 bestseller “Future Shock.” It refers to difficulties that people face from getting too much information or data.

The Pew survey involved people over 18 interviewed by landline or cell phones from March 7 to April 4. The margin of error was 2.9 percentage points, meaning results could vary by that much either way.”
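For the curious, that 2.9 figure is roughly what the textbook margin-of-error formula for a proportion gives at this sample size, plus an allowance for survey weighting. A minimal sketch (the design-effect figure below is our inference for illustration, not Pew's published methodology):

```python
import math

def margin_of_error(n, p=0.5, z=1.96, design_effect=1.0):
    """95% margin of error for a proportion p estimated from n respondents.
    design_effect > 1 models the variance inflation from survey weighting."""
    return z * math.sqrt(design_effect * p * (1 - p) / n)

# A simple random sample of 1,520 gives about +/-2.5 points at 95% confidence.
print(round(100 * margin_of_error(1520), 1))  # 2.5
# The published +/-2.9 implies a design effect of about (2.9/2.5)^2, i.e. ~1.35.
print(round(100 * margin_of_error(1520, design_effect=(2.9 / 2.5) ** 2), 1))  # 2.9
```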

“Smart” toys are spying on kids

Emphasis added:

“Some people consider dolls creepy enough, but what if that deceptively cute toy was listening to everything you said and, worse yet, letting creeps speak through it?

According to The Center for Digital Democracy, a pair of smart toys designed to engage with children in new and entertaining ways are rife with security and privacy holes. The watchdog group was concerned enough that it filed a complaint with the Federal Trade Commission on Dec. 6 (you can read the full complaint here). A similar complaint was also filed in Europe by the Norwegian Consumer Council.

“This complaint concerns toys that spy,” reads the complaint, which claims the Genesis Toys’ My Friend Cayla and i-QUE Intelligent Robot can record and collect private conversations and offer no limitations on the collection and use of personal information.

Both toys use voice recognition, internet connectivity and Bluetooth to engage with children in a conversational manner and answer questions. The CDD claims they do all of this in wildly insecure and invasive ways.

Both My Friend Cayla and i-QUE use Nuance Communications’ voice-recognition platform to listen and respond to queries. On the Genesis Toy site, the manufacturer notes that while “most of Cayla’s conversational features can be accessed offline,” searching for information may require an internet connection.

The promotional video for Cayla encourages children to “ask Cayla almost anything.”

The dolls work in concert with mobile apps. Some questions can be asked directly, but the apps maintain a constant Bluetooth connection to the toys, so the dolls can also react to actions in the app and even appear to identify objects the child taps on screen.

The CDD takes particular issue with that app and lists all the questions it asks children (or their parents) up front during registration: everything from the child’s and her parents’ names to their school and where they live.”

“Social Media” has destroyed discourse

Hossein Derakhshan, an Iranian-Canadian author, media analyst, and performance artist, writes in MIT Technology Review:

“Like TV, social media now increasingly entertains us, and even more so than television it amplifies our existing beliefs and habits. It makes us feel more than think, and it comforts more than challenges. The result is a deeply fragmented society, driven by emotions, and radicalized by lack of contact and challenge from outside. This is why Oxford Dictionaries designated “post-truth” as the word of 2016: an adjective “relating to circumstances in which objective facts are less influential in shaping public opinion than emotional appeals.”

[…]

Traditional television still entails some degree of surprise. What you see on television news is still picked by human curators, and even though it must be entertaining to qualify as worthy of expensive production, it is still likely to challenge some of our opinions (emotions, that is).

Social media, in contrast, uses algorithms to encourage comfort and complaisance, since its entire business model is built upon maximizing the time users spend inside of it. Who would like to hang around in a place where everyone seems to be negative, mean, and disapproving? The outcome is a proliferation of emotions, a radicalization of those emotions, and a fragmented society. This is way more dangerous for the idea of democracy founded on the notion of informed participation.

This means we should write and read more, link more often, and watch less television and fewer videos — and spend less time on Facebook, Instagram, and YouTube.

Our habits and our emotions are killing us and our planet. Let’s resist their lethal appeal.”

UK “legitimises” illegal mass surveillance by passing new law

The “Investigatory Powers Act” has been passed into law in the UK, legalising a number of previously illegal mass surveillance programs revealed by Edward Snowden in 2013. It also introduces new powers to require ISPs to retain browsing data on all customers for 12 months, while giving police new powers to hack into computers and phones and to collect communications data in bulk.

“Jim Killock, executive director of the Open Rights Group, responded…saying: “…it is one of the most extreme surveillance laws ever passed in a democracy. The IP Act will have an impact that goes beyond the UK’s shores. It is likely that other countries, including authoritarian regimes with poor human rights records, will use this law to justify their own intrusive surveillance powers.”

“Much of the Act gives stronger legal footing to the UK’s various bulk powers, including “bulk interception,” which is, in general terms, the collection of internet and phone communications en masse. In June 2013, using documents provided by Edward Snowden, The Guardian revealed that GCHQ taps fibre-optic undersea cables in order to intercept emails, internet histories, calls, and a wealth of other data.”

Meanwhile, FBI and NSA poised to gain new surveillance powers under Trump.

The Snoopers’ Charter allows the State to tell lies in court.

“[The] Charter gives virtually unrestricted powers not only to State spy organisations but also to the police and a host of other government agencies. The operation of the oversight and accountability mechanisms…are all kept firmly out of sight — and, so its authors hope, out of mind — of the public. It is up to the State to volunteer the truth to its victims if the State thinks it has abused its secret powers. “Marking your own homework” is a phrase which does not fully capture this…

Section 56(1)(b) creates a legally guaranteed ability — nay, duty — to lie about even the potential for State hacking to take place, and to tell juries a wholly fictitious story about the true origins of hacked material used against defendants in order to secure criminal convictions. This is incredibly dangerous. Even if you know that the story being told in court is false, you and your legal representatives are now banned from being able to question those falsehoods and cast doubt upon the prosecution story. Potentially, you could be legally bound to go along with lies told in court about your communications — lies told by people whose sole task is to weave a story that will get you sent to prison or fined thousands of pounds.

Moreover, as section 56(4) makes clear, this applies retroactively, ensuring that it is very difficult for criminal offences committed by GCHQ employees and contractors over the years, using powers that were only made legal a fortnight ago, to be brought to light in a meaningful way. It might even be against the law for a solicitor or barrister to mention in court this Reg story by veteran investigative journalist Duncan Campbell about GCHQ’s snooping station in Oman (covered by the section 56(1)(b) wording “interception-related conduct has occurred”) – or large volumes of material published on Wikileaks.

The existence of section 56(4) makes a mockery of the “general privacy protections” in Part 1 of the IPA, which includes various criminal offences. Part 1 was introduced as a sop to privacy advocates horrified at the full extent of the act’s legalisation of intrusive, disruptive and dangerous hacking powers for the State, including powers to force the co-operation of telcos and similar organisations. There is no point in having punishments for lawbreakers if it is illegal to talk about their law-breaking behaviour.

Like the rest of the Snoopers’ Charter, section 56 has become law. Apart from Reg readers and a handful of Twitter slacktivists, nobody cares. The general public neither knows nor cares what abuses and perversions of the law take place in its name. Theresa May and the British government have utterly defeated advocates of privacy and security, completely ignoring those who correctly identify the zero-sum game between freedom and security in favour of those who feel the need to destroy liberty in order to “save” it.

The UK is now a measurably less free country in terms of technological security, permitted speech and the ability to resist abuses of power and position by agents of the State, be they shadowy spies, police inspectors and above (ie, shift leaders in your local cop shop), or even food hygiene inspectors – no, really.”

Internet freedom wanes as governments target messaging and “social apps”

“Roughly two-thirds of the world’s internet users live under regimes of government censorship, according to a report from Freedom House, a pro-democracy think tank. The report adds that internet freedom declined worldwide for a sixth consecutive year in 2016, with governments increasingly cracking down on social media services and messaging apps.

“In a new development, the most routinely targeted tools this year were instant messaging and calling platforms, with restrictions often imposed during times of protests or due to national security concerns,” the report says. WhatsApp emerged as the most-blocked app, facing restrictions in 12 of the 65 studied countries. The report’s scope covers the experiences of some 88 percent of the world’s Internet users. And of all 65 countries reviewed, Internet freedom in 34 — more than half — has been on a decline over the past year. Particular downturns were marked in Uganda, Bangladesh, Cambodia, Ecuador and Libya. Facebook users were arrested in 27 countries, more than any other app or platform. And such arrests are spreading. Since June of last year, police in 38 countries have arrested people for what they said on social media — surpassing even the 21 countries where people were arrested for what they published on more traditional platforms like blogs and news sites. “Some supposed offenses were quite petty, illustrating both the sensitivity of some regimes and the broad discretion given to police and prosecutors under applicable laws,” the report says.”

Is Google’s AI-driven image resizing algorithm ‘dishonest’?

The Stack reports on Google’s “new research into upscaling low-resolution images using machine learning to ‘fill in’ the missing details,” arguing this is “a questionable stance…continuing to propagate the idea that images contain some kind of abstract ‘DNA’, and that there might be some reliable photographic equivalent of polymerase chain reaction which could find deeper truth in low-res images than either the money spent on the equipment or the age of the equipment will allow.”

“Rapid and Accurate Image Super Resolution (RAISR) uses low and high resolution versions of photos in a standard image set to establish templated paths for upward scaling… This effectively uses historical logic, instead of pixel interpolation, to infer what the image would look like if it had been taken at a higher resolution.

It’s notable that neither their initial paper nor the supplementary examples feature human faces. It could be argued that using AI-driven techniques to reconstruct images raises some questions about whether upscaled, machine-driven digital enhancements are a legal risk, compared to the far greater expense of upgrading low-res CCTV networks with the necessary resolution, bandwidth and storage to obtain good quality video evidence.”

The article points out that “faith in the fidelity of these ‘enhanced’ images routinely convicts defendants.”
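To make the “historical logic versus interpolation” distinction concrete: RAISR learns many small filters from low/high-resolution image pairs, indexed by local gradient statistics. Even a single filter fitted by least squares captures the core idea of learning a correction from past examples rather than interpolating. A toy sketch (one global filter, a crude pixel-replication baseline, and invented training pairs; a simplification of the idea, not Google's implementation):

```python
import numpy as np

def upscale_nearest(img, s=2):
    """Cheap baseline upscale (pixel replication) that the learned filter corrects."""
    return np.repeat(np.repeat(img, s, axis=0), s, axis=1)

def patches(img, r=1):
    """All (2r+1)x(2r+1) patches of a 2D image, flattened, plus centre coords."""
    H, W = img.shape
    out, centres = [], []
    for y in range(r, H - r):
        for x in range(r, W - r):
            out.append(img[y - r:y + r + 1, x - r:x + r + 1].ravel())
            centres.append((y, x))
    return np.array(out), centres

def train_filter(pairs):
    """Least-squares filter mapping a cheaply-upscaled patch to the true pixel.
    pairs: list of (low_res, high_res) training images (hypothetical data)."""
    A, b = [], []
    for lo, hi in pairs:
        P, centres = patches(upscale_nearest(lo))
        A.append(P)
        b.append(np.array([hi[y, x] for y, x in centres]))
    w, *_ = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)
    return w

def apply_filter(lo, w):
    """Upscale cheaply, then replace each pixel with the learned prediction."""
    up = upscale_nearest(lo).astype(float)
    P, centres = patches(up)
    for (y, x), v in zip(centres, P @ w):
        up[y, x] = v
    return up
```

RAISR's actual trick is to keep a whole bank of such filters and pick one per patch by hashing the patch's gradient angle, strength and coherence, which is what makes the "learned" detail plausible rather than provable.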

More people died taking selfies in India than anywhere in world, study says

“In 2015 alone, Indians taking selfies died while posing in front of an oncoming train, in a boat that tipped over at a picnic, on a cliff that gave way and crumbled into a 60-foot ravine and on the slippery edge of a scenic river canal. Also, a Japanese tourist trying to take a selfie fell down steps at the Taj Mahal, suffering fatal head injuries.

Researchers analysed thousands of selfies posted on Twitter and found that men were far more likely than women to take dangerous selfies. The study found that 13 per cent were taken in potentially dangerous circumstances, and that the majority of victims were under the age of 24.

The most common cause of death worldwide was “falling off a building or mountain,” responsible for 29 deaths. The second most common, being hit by a train, was responsible for 11 deaths.

The authors hope the study will serve as a warning of the hazards and inspire new mobile phone technology that can warn photo-takers if they are in a danger zone.

Last year, no-selfie zones were also established in certain areas of the massive Hindu religious gathering called the Kumbh Mela because organisers feared bottlenecks caused by selfie-takers could spark stampedes.”

Rich people pay less attention to other people, study finds

“In a small recent study, researchers from New York University found that those who considered themselves in higher classes looked at people who walked past them less than those who said they were in a lower class did. The results were published in Psychological Science, the journal of the Association for Psychological Science.

According to Pia Dietze, a social psychology doctoral student at NYU and a lead author of the study, previous research has shown that people from different social classes vary in how they tend to behave towards other people. So, she wanted to shed some light on where such behaviours could have originated. The research was divided into three separate studies.

For the first, Dietze and NYU psychology lab director Professor Eric Knowles asked 61 volunteers to walk along the street for one block while wearing Google Glass to record everything they looked at. These people were also asked to identify themselves as from a particular social class: either poor, working class, middle class, upper middle class, or upper class. An independent group watched the recordings and made note of the various people and things each Glass wearer looked at and for how long. The results showed that class identification, or what class each person said they belonged to, had an impact on how long they looked at the people who walked past them.

During Study 2, participants viewed street scenes while the team tracked their eye movements. Again, higher class was associated with reduced attention to people in the images.

For the third and final study, the results suggested that this difference could stem from the way the brain works, rather than from a deliberate decision. Close to 400 participants took part in an online test where they had to look at alternating pairs of images, each containing a different face and five objects. Higher-class participants took longer than lower-class participants to notice when the face differed between the alternating images, but the amount of time it took to detect changes to the objects did not differ between the groups. The team reached the conclusion that faces seem to be more effective at grabbing the attention of individuals who come from relatively lower-class backgrounds.”

Social media and the anti-fact age

Adam Turner at The Age writes:

“When you look at how social media works, it was inevitable that it would turn into one of the world’s most powerful propaganda tools. It’s often painted as a force for good, letting people bypass the traditional gatekeepers in order to quickly disseminate information, but there’s no guarantee that this information is actually true.

Facebook has usurped the role of the mainstream media in disseminating news, but hasn’t taken on the fourth estate’s corresponding responsibility for keeping the bastards honest. The mainstream media has no-one to blame but itself, having engaged in a tabloid race to the bottom which devalued truth to the point that blatant liars are considered more honest.

The fragmentation of news is already creating a filter bubble in that most people don’t tend to read the newspaper from front to back, or sit through entire news bulletins, they just pick and choose what interests them. The trouble with Facebook is that it also reinforces bias, the more extreme your political views the less likely you are to see anything with an opposing viewpoint which might help you develop a more well-rounded view of the world.”

Brooke Binkowski, managing editor of the fact-checking site Snopes.com, says: “Honestly, most of the fake news is incredibly easy to debunk because it’s such obvious bullshit…”

The problem, Binkowski believes, is that the public has lost faith in the media broadly — therefore no media outlet is considered credible any longer. The reasons are familiar: as the business of news has grown tougher, many outlets have been stripped of the resources they need for journalists to do their jobs correctly. “When you’re on your fifth story of the day and there’s no editor because the editor’s been fired and there’s no fact checker so you have to Google it yourself and you don’t have access to any academic journals or anything like that, you will screw stories up,” she says.

UPDATE 1/12/2016 — Most students can’t spot fake news

“If you thought fake online news was a problem for impressionable adults, it’s even worse for the younger crowd. A Stanford study of 7,804 middle school, high school and college students has found that most of them couldn’t identify fake news on their own. Their susceptibility varied with age, but even a large number of the older students fell prey to bogus reports. Over two thirds of middle school kids didn’t see why they shouldn’t trust a bank executive’s post claiming that young adults need financial help, while nearly 40 percent of high schoolers didn’t question the link between an unsourced photo and the claims attached to it.

Why did many of the students misjudge the authenticity of a story? They were fixated on the appearance of legitimacy, rather than the quality of information. A large photo or a lot of detail was enough to make a Twitter post seem credible, even if the actual content was incomplete or wrong. There are plenty of adults who respond this way, we’d add, but students are more vulnerable than most.

As the Wall Street Journal explains, part of the solution is simply better education: teach students to verify sources, question motivations and otherwise think critically.”

(Emphasis added)

“Creepy new website makes its monitoring of your online behaviour visible”

“If YOU think you are not being analysed while browsing websites, it could be time to reconsider. A creepy new website called clickclickclick has been developed to demonstrate how our online behaviour is continuously measured.

The site, which observes and comments on your behaviour in detail (and is not harmful to your computer), contains nothing but a white screen and a large green button. From the minute you visit the website, it begins detailing your actions on the screen in real-time.

The site also encourages users to turn on their audio, which offers the even more disturbing experience of having an English voice comment on your behaviour.

Designer Roel Wouters said the experiment was aimed at reminding people about the serious themes of big data and privacy. “It seemed fun to thematise this in a simple and lighthearted way,” he said.

Fellow designer Luna Maurer said her own experiences with the internet had helped shape the project. “I am actually quite internet aware, but I am still very often surprised that after I watched something on a website, a second later I get instantly personalised ads,” she said.”

Chemical traces on your phone reveal your lifestyle, say forensic scientists

“Scientists say they can deduce the lifestyle of an individual, down to the kind of grooming products they use, food they eat and medications they take, from chemicals found on the surface of their mobile phone. Experts say analysis of someone’s phone could be a boon both to healthcare professionals and the police.

“You can narrow down male versus female; if you then figure out they use sunscreen then you pick out the [people] that tend to be outdoorsy — so all these little clues can sort of narrow down the search space of candidate people for an investigator,” said Pieter Dorrestein, co-author of the research from the University of California, San Diego.

Writing in the Proceedings of the National Academy of Sciences, researchers from the U.S. and Germany describe how they swabbed the mobile phone and right hand of 39 individuals and analyzed the samples using the highly sensitive technique of mass spectrometry.

The results revealed that each person had a distinct “signature” set of chemicals on their hands which distinguished them from each other. What’s more, these chemicals partially overlapped with those on their phones, allowing the devices to be distinguished from each other, and matched to their owners.

Analysis of the chemical traces using a reference database allowed the team to match the chemicals to known substances or their relatives to reveal tell-tale clues from each individual’s life — from whether they use hair-loss treatments to whether they are taking antidepressants.”
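The matching step described here is, at heart, a nearest-neighbour comparison of chemical profiles. A toy sketch with made-up abundance vectors (the compound list, names and cosine-similarity choice are our illustration, not the paper's actual mass-spectrometry pipeline):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two abundance vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def match_phones_to_owners(phone_profiles, hand_profiles):
    """Pair each phone with the hand whose chemical profile it most resembles.
    Profiles are relative abundances over a shared list of detected compounds
    (hypothetical data standing in for mass-spectrometry features)."""
    return {phone: max(hand_profiles, key=lambda h: cosine(p, hand_profiles[h]))
            for phone, p in phone_profiles.items()}

# Toy example: three compounds (say, a sunscreen agent, caffeine, a medication).
hands  = {"alice": np.array([0.8, 0.1, 0.0]), "bob": np.array([0.0, 0.6, 0.4])}
phones = {"phone1": np.array([0.7, 0.2, 0.0]), "phone2": np.array([0.1, 0.5, 0.5])}
print(match_phones_to_owners(phones, hands))  # {'phone1': 'alice', 'phone2': 'bob'}
```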

Adobe is working on ‘Photoshop for audio’ that will let you add words someone never said to recordings

“Adobe is working on a new piece of software that would act like a Photoshop for audio, according to Adobe developer Zeyu Jin, who spoke at the Adobe MAX conference in San Diego, California today. The software is codenamed Project VoCo, and it’s not clear at this time when it will materialize as a commercial product.

Like Photoshop, Project VoCo is designed to be a state-of-the-art audio editing application. Beyond your standard speech editing and noise cancellation features, Project VoCo can also apparently generate new words using a speaker’s recorded voice. The standout feature is the ability to add words not originally found in the audio file. Essentially, the software can understand the makeup of a person’s voice and replicate it, so long as there’s about 20 minutes of recorded speech.

In Jin’s demo, the developer showcased how Project VoCo let him add a word to a sentence in a near-perfect replication of the speaker, according to Creative Bloq. So similar to how Photoshop ushered in a new era of editing and image creation, this tool could transform how audio engineers work with sound, polish clips, and clean up recordings and podcasts.”

“When recording voiceovers, dialog, and narration, people would often like to change or insert a word or a few words due to either a mistake they made or simply because they would like to change part of the narrative,” reads an official Adobe statement. “We have developed a technology called Project VoCo in which you can simply type in the word or words that you would like to change or insert into the voiceover. The algorithm does the rest and makes it sound like the original speaker said those words.”
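For a sense of how far this goes beyond older tricks: the crude predecessor is cut-and-splice editing, where word-level snippets from the same speaker are rearranged with short crossfades, as in the toy sketch below. VoCo's claimed advance is synthesising words that were never recorded at all, which no amount of splicing can do. (The word bank and parameters here are entirely hypothetical.)

```python
import numpy as np

def crossfade_concat(clips, sr=16000, fade_ms=20):
    """Splice word-level audio clips with a short linear crossfade.
    A crude cut-and-splice baseline; VoCo reportedly *synthesises* unseen
    words from a model of the speaker's voice rather than reusing clips."""
    n = int(sr * fade_ms / 1000)
    out = clips[0].astype(float)
    for clip in clips[1:]:
        clip = clip.astype(float)
        ramp = np.linspace(0.0, 1.0, n)
        out[-n:] = out[-n:] * (1 - ramp) + clip[:n] * ramp  # blend the seam
        out = np.concatenate([out, clip[n:]])
    return out

# Hypothetical word bank built from a transcript-aligned recording session.
word_bank = {w: np.random.randn(8000) for w in ["i", "never", "said", "that"]}
edited = crossfade_concat([word_bank[w] for w in ["i", "said", "that"]])
```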

Imagine this technology coupled with a false-video manipulation component, which also already exists as a working proof of concept. One really could make convincing, entirely unreal audio/video of a person’s likeness…

CIA-backed surveillance software marketed to public schools

“An online surveillance tool that enabled hundreds of U.S. law enforcement agencies to track and collect information on social media users was also marketed for use in American public schools, the Daily Dot has learned.

Geofeedia sold surveillance software typically bought by police to a high school in a northern Chicago suburb, less than 50 miles from where the company was founded in 2011. An Illinois school official confirmed the purchase of the software by phone on Monday.

Conrey said the district simply wanted to keep its students safe. “It was really just about student safety; if we could try to head off any potential dangerous situations, we thought it might be worth it,” he said.

Ultimately, the school found little use for the platform, which was operated by a police liaison stationed on school grounds, and chose not to renew its subscription after the first year, citing cost and a lack of actionable information. “A lot of kids that were posting stuff that we most wanted, they weren’t doing the geo-tagging or making it public,” Conrey said. “We weren’t really seeing a lot there.”
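The complaint about geo-tagging points at how these tools work: they can only sweep up posts that carry location metadata falling inside a drawn "geofence". A minimal sketch of that filtering step (coordinates and field names are hypothetical, not Geofeedia's API):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    R = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def posts_near(posts, lat, lon, radius_m=500):
    """Keep only geotagged posts inside the fence. Posts without location
    metadata are invisible, which is exactly the gap the district ran into."""
    return [p for p in posts
            if p.get("lat") is not None
            and haversine_m(p["lat"], p["lon"], lat, lon) <= radius_m]

posts = [
    {"user": "a", "lat": 42.139, "lon": -87.927},  # hypothetical, near campus
    {"user": "b", "lat": None,   "lon": None},     # geotagging off: never seen
]
print(posts_near(posts, 42.139, -87.928))  # only user "a" is caught
```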

UK security agencies unlawfully collected data for 17 years, court rules

No prosecutions. Instead, those in power are pushing to pass a law to legitimise and continue the same practices.

“British security agencies have secretly and unlawfully collected massive volumes of confidential personal data, including financial information, on citizens for more than a decade, senior judges have ruled.

The investigatory powers tribunal, which is the only court that hears complaints against MI5, MI6 and GCHQ, said the security services operated an illegal regime to collect vast amounts of communications data, tracking individual phone and web use and other confidential personal information, without adequate safeguards or supervision for 17 years.

Privacy campaigners described the ruling as “one of the most significant indictments of the secret use of the government’s mass surveillance powers” since Edward Snowden first began exposing the extent of British and American state digital surveillance of citizens in 2013.

The tribunal said the regime governing the collection of bulk communications data (BCD) – the who, where, when and what of personal phone and web communications – failed to comply with article 8 protecting the right to privacy of the European convention of human rights (ECHR) between 1998, when it started, and 4 November 2015, when it was made public.

It added that the retention of bulk personal datasets (BPD) – which might include medical and tax records, individual biographical details, commercial and financial activities, communications and travel data – also failed to comply with article 8 for the decade it was in operation until it was publicly acknowledged in March 2015.”

“When her best friend died, she rebuilt him using artificial intelligence.”

In this post from 2014, we see an episode of the TV series Black Mirror called “Be Right Back.” The show explores a concept that has apparently now hit real life: a loved one dies, and someone creates a simulacrum of them using “artificial intelligence.”

Eugenia Kuyda is CEO of Luka, a bot company in Silicon Valley. She has apparently created a mimic of her deceased friend as a bot. An in-depth report from The Verge states:

“It had been three months since Roman Mazurenko, Kuyda’s closest friend, had died. Kuyda had spent that time gathering up his old text messages, setting aside the ones that felt too personal, and feeding the rest into a neural network built by developers at her artificial intelligence startup. She had struggled with whether she was doing the right thing by bringing him back this way. At times it had even given her nightmares. But ever since Mazurenko’s death, Kuyda had wanted one more chance to speak with him.”

“It’s pretty weird when you open the messenger and there’s a bot of your deceased friend, who actually talks to you,” Fayfer said. “What really struck me is that the phrases he speaks are really his. You can tell that’s the way he would say it — even short answers to ‘Hey what’s up.’ It has been less than a year since Mazurenko died, and he continues to loom large in the lives of the people who knew him. When they miss him, they send messages to his avatar, and they feel closer to him when they do. “There was a lot I didn’t know about my child,” Roman’s mother told me. “But now that I can read about what he thought about different subjects, I’m getting to know him more. This gives the illusion that he’s here now.”
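The Verge describes a neural network trained on Mazurenko's messages. A far simpler retrieval approach conveys the flavour: answer a new message with the saved reply whose original prompt it most resembles. (A toy sketch with invented chat fragments; Luka's actual system is a learned generative model, not this lookup.)

```python
from collections import Counter
import math

def bow_cosine(a, b):
    """Bag-of-words cosine similarity between two short messages."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def reply(message, history):
    """history: (prompt, response) pairs mined from saved chat logs.
    Returns the response whose original prompt best matches the new message."""
    return max(history, key=lambda pair: bow_cosine(message, pair[0]))[1]

# Hypothetical fragments of a saved conversation archive.
history = [("hey what's up", "not much, plotting something big"),
           ("how was the show", "loud. you'd have hated it")]
print(reply("hey, what's up?", history))  # "not much, plotting something big"
```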

“Yahoo has a creepy plan for advertising billboards to spy on you”

“Yahoo has filed a patent for a type of smart billboard that would collect people’s information and use it to deliver targeted ad content in real-time.

To achieve that functionality, the billboards would use a variety of sensor systems, including cameras and proximity technology, to capture real-time audio, video and even biometric information about potential target audiences.

But the tech company doesn’t just want to know about a passing vehicle. It also wants to know who the occupants are inside of it.

That’s why Yahoo is prepared to cooperate with cell towers and telecommunications companies to learn as much as possible about each vehicle’s occupants.”

“Various types of data (e.g., cell tower data, mobile app location data, image data, etc.) can be used to identify specific individuals in an audience in position to view advertising content. Similarly, vehicle navigation/tracking data from vehicles equipped with such systems could be used to identify specific vehicles and/or vehicle owners. Demographic data (e.g., as obtained from a marketing or user database) for the audience can thus be determined for the purpose of, for example, determining whether and/or the degree to which the demographic profile of the audience corresponds to a target demographic.”

An alarming number of people rely *solely* on a Social Media network for news

Note the stats from the Pew Research Center for Journalism and Media: 64% of users surveyed rely on just one social media source alone for news content (i.e. Facebook, Twitter, YouTube, etc.), while 26% check only two sources, and 10% three or more. A staggeringly concerning trend, given the rampant personalisation of these screen environments and what we know about the functioning and reinforcement of The Filter Bubble. This is a centralisation of power, and a lack of diversity, that the “old media” perhaps could only dream of…

From The Huffington Post:

“It’s easy to believe you’re getting diverse perspectives when you see stories on Facebook. You’re connected not just to many of your friends, but also to friends of friends, interesting celebrities and publications you “like.”

But Facebook shows you what it thinks you’ll be interested in. The social network pays attention to what you interact with, what your friends share and comment on, and overall reactions to a piece of content, lumping all of these factors into an algorithm that serves you items you’re likely to engage with. It’s a simple matter of business: Facebook wants you coming back, so it wants to show you things you’ll enjoy.”
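Mechanically, "lumping all of these factors into an algorithm" is just weighted scoring: every candidate story gets a predicted-engagement score and the feed is sorted by it. A toy sketch (the signal names and weights are illustrative guesses, not Facebook's actual model):

```python
def engagement_score(post):
    """Toy predicted-engagement score: a weighted sum of the kinds of
    signals described above. Weights are invented for illustration."""
    return (3.0 * post["friend_interactions"]      # friends shared/commented
            + 2.0 * post["your_clicks_on_source"]  # your history with this page
            + 1.0 * post["total_reactions"] / 100) # overall popularity

def rank_feed(posts):
    # The feedback loop: whatever you engaged with before ranks higher now.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    {"id": 1, "friend_interactions": 5, "your_clicks_on_source": 0, "total_reactions": 900},
    {"id": 2, "friend_interactions": 1, "your_clicks_on_source": 8, "total_reactions": 50},
])
print([p["id"] for p in feed])  # [1, 2]
```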

BBC also reported earlier this year that Social Media networks outstripped television as the news source for young people (emphasis added):

“Of the 18-to-24-year-olds surveyed, 28% cited social media as their main news source, compared with 24% for TV.

The Reuters Institute for the Study of Journalism research also suggests 51% of people with online access use social media as a news source. Facebook and other social media outlets have moved beyond being “places of news discovery” to become the place people consume their news, it suggests.

The study found Facebook was the most common source—used by 44% of all those surveyed—to watch, share and comment on news. Next came YouTube on 19%, with Twitter on 10%. Apple News accounted for 4% in the US and 3% in the UK, while messaging app Snapchat was used by just 1% or less in most countries.

According to the survey, consumers are happy to have their news selected by algorithms, with 36% saying they would like news chosen based on what they had read before and 22% happy for their news agenda to be based on what their friends had read. But 30% still wanted the human oversight of editors and other journalists in picking the news agenda and many had fears about algorithms creating news “bubbles” where people only see news from like-minded viewpoints.

Most of those surveyed said they used a smartphone to access news, with the highest levels in Sweden (69%), Korea (66%) and Switzerland (61%), and they were more likely to use social media rather than going directly to a news website or app.

The report also suggests users are noticing the original news brand behind social media content less than half of the time, something that is likely to worry traditional media outlets.”

And to exemplify the issue, these words from Slashdot: “Over the past few months, we have seen how Facebook’s Trending Topics feature is often biased, and moreover, how sometimes fake news slips through its filter.”

“The Washington Post monitored the website for over three weeks and found that Facebook is still struggling to get its algorithm right. In the six weeks since Facebook revamped its Trending system, the site has repeatedly promoted “news” stories that are actually works of fiction. As part of a larger audit of Facebook’s Trending topics, the Intersect logged every news story that trended across four accounts during the workdays from Aug. 31 to Sept. 22. During that time, we uncovered five trending stories that were indisputably fake and three that were profoundly inaccurate. On top of that, we found that news releases, blog posts from sites such as Medium and links to online stores such as iTunes regularly trended.”

UPDATE 9/11/16 — US President Barack Obama criticises Facebook for spreading fake stories: “The way campaigns have unfolded, we just start accepting crazy stuff as normal,” Obama said. “As long as it’s on Facebook, and people can see it, as long as it’s on social media, people start believing it, and it creates this dust cloud of nonsense.”

Data surveillance is all around us, and it’s going to change our behaviour

“Increasing aspects of our lives are now recorded as digital data that are systematically stored, aggregated, analysed, and sold. Despite the promise of big data to improve our lives, all-encompassing data surveillance constitutes a new form of power that poses a risk not only to our privacy, but to our free will.

A more worrying trend is the use of big data to manipulate human behaviour at scale by incentivising “appropriate” activities, and penalising “inappropriate” activities. In recent years, governments in the UK, US, and Australia have been experimenting with attempts to “correct” the behaviour of their citizens through “nudge units”.”

Nudge units: “In ways you don’t detect [corporations and governments are] subtly influencing your decisions, pushing you towards what it believes are your (or its) best interests, exploiting the biases and tics of the human brain uncovered by research into behavioural psychology. And it is trying this in many different ways on many different people, running constant trials of different unconscious pokes and prods, to work out which is the most effective, which improves the most lives, or saves the most money. Preferably, both.”

“In his new book Inside the Nudge Unit, published this week in Britain, Halpern explains his fascination with behavioural psychology.

“Our brains weren’t made for the day-to-day financial judgments that are the foundation of modern economies: from mortgages, to pensions, to the best buy in a supermarket. Our thinking and decisions are fused with emotion.”

There’s a window of opportunity for governments, Halpern believes: to exploit the gaps between perception, reason, emotion and reality, and push us the “right” way.

He gives me a recent example of BI’s work – they were looking at police recruitment, and how to get a wider ethnic mix.

Just before applicants did an online recruitment test, in an email sending the link, BI added a line saying “before you do this, take a moment to think about why joining the police is important to you and your community”.

There was no effect on white applicants. But the pass rate for black and minority ethnic applicants moved from 40 to 60 per cent.

“It entirely closes the gap,” Halpern says. “Absolutely amazing. We thought we had good grounds in the [scientific research] literature that such a prompt might make a difference, but the scale of the difference was extraordinary.”

Halpern taught social psychology at Cambridge but spent six years in the Blair government’s strategy unit. An early think piece on behavioural policy-making was leaked to the media and caused a small storm – Blair publicly disowned it and that was that. Halpern returned to academia, but was lured back after similar ideas started propagating through the Obama administration, and Cameron was persuaded to give it a go.

Ministers tend not to like it – once, one snapped, “I didn’t spend a decade in opposition to come into government to run a pilot” – but the technique is rife in the digital commercial world, where companies like Amazon or Google try 20 different versions of a web page.

Governments and public services should do it too, Halpern says. His favourite example is Britain’s organ donor register. They tested eight alternative online messages prompting people to join, including a simple request, different pictures, statistics or conscience-tweaking statements like “if you needed an organ transplant would you have one? If so please help others”.

It’s not obvious which messages work best, even to an expert. The only way to find out is to test them. They were surprised to find that the picture (of a group of people) actually put people off, Halpern says.
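The "only way to find out is to test them" step is ordinary A/B-test arithmetic: compare the success rates of two message variants and ask whether the gap exceeds noise. A sketch using the police-recruitment figures quoted above (the cohort sizes are invented for illustration):

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z-statistic for comparing two pass rates, using a pooled standard error."""
    pa, pb = success_a / n_a, success_b / n_b
    p = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (pb - pa) / se

# Hypothetical cohort sizes; the 40% -> 60% shift is from the BI anecdote above.
z = two_proportion_z(success_a=80, n_a=200, success_b=120, n_b=200)
print(round(z, 2))  # ~4.0: far beyond the ~1.96 cutoff for 95% confidence
```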

In future they want to use demographic data to personalise nudges, Halpern says. On tax reminder notices, they had great success putting the phrase “most people pay their tax on time” at the top. But a stubborn top 5 per cent, with the biggest tax debts, saw this reminder and thought, “Well, I’m not most people”.

This whole approach raises ethical issues. Often you can’t tell people they’re being experimented on – it’s impractical, or ruins the experiment, or both.

“If we’re trying to find the best way of saying ‘don’t drop your litter’ with a sign saying ‘most people don’t drop litter’, are you supposed to have a sign before it saying ‘caution you are about to participate in a trial’?

“Where should we draw the line between effective communication and unacceptable ‘PsyOps’ or propaganda?”

Workplace: now you can use Facebook at work – for work

“[Workplace, the] Facebook-hosted office communication tool, has been in the works for more than two years under the name Facebook at Work, but now the company says its enterprise product is ready for primetime. The platform will be sold to businesses on a per-user basis, according to the company: after a three-month trial period, Facebook will charge $3 apiece per employee per month up to 1,000 employees, $2 for every employee beyond that up to 10,000 users, and $1 for every employee over that. Workplace links together personal profiles separate from users’ normal Facebook accounts and is invisible to anyone outside the office. For joint ventures, accounts can be linked across businesses so that groups of employees from both companies can collaborate. Currently, businesses using Workplace include Starbucks and Booking.com as well as Norwegian telecoms giant Telenor ASA and the Royal Bank of Scotland.”
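Read as marginal tiers (our assumption; the quote doesn't spell the banding out), the pricing works out like this:

```python
def workplace_monthly_cost(employees):
    """Tiered per-user pricing as described: $3 for the first 1,000 seats,
    $2 for seats 1,001-10,000, $1 for every seat above 10,000.
    Assumes tiers apply marginally, which the quote implies but doesn't state."""
    tier1 = min(employees, 1000)
    tier2 = min(max(employees - 1000, 0), 9000)
    tier3 = max(employees - 10000, 0)
    return 3 * tier1 + 2 * tier2 + 1 * tier3

print(workplace_monthly_cost(500))    # 1500
print(workplace_monthly_cost(12000))  # 3000 + 18000 + 2000 = 23000
```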
