Resources

Digital Authoritarianism Is On the Rise Around the World, Report Warns

Internet freedom declined for a ninth consecutive year as governments around the world used social media to monitor citizens and manipulate elections, according to a new study that warned of creeping “digital authoritarianism.” Thirty-three of the 65 countries surveyed experienced worsening internet freedom since June 2018, compared with 16 where conditions improved. The study, conducted by Freedom House, a nonprofit human rights advocacy group, said domestic disinformation had grown as a threat to democracy, with populist leaders and their online supporters using the internet to distort political discussions. The organization found domestic election interference in 26 of the 30 countries that held elections over the past year.

The report said internet freedom in the U.S. had declined, in large part because law enforcement and immigration agencies used social media to monitor people, though the country was still deemed “free.” China was dubbed the “worst abuser of internet freedom” for a fourth consecutive year as the government tightened information controls ahead of the 30th anniversary of the Tiananmen Square massacre and amid protests in Hong Kong. Noting that the biggest platforms were American, Freedom House called on the U.S. to lead the effort to fix social media transparency and accountability. “This is the only way to stop the internet from becoming a Trojan horse for tyranny and oppression,” wrote Adrian Shahbaz, one of the authors of the report.

Spain and GitHub Are Blocking an App That Helped Protesters Organize

For the last month, hundreds of thousands of people have joined demonstrations in Spain to voice their objection to the jailing of Catalan separatist leaders and to support Catalan independence. As with almost all modern activist and protest movements, activists are using social media and apps to communicate and organize public actions. But this week, in a move that puts the Spanish government on par with censorship-heavy states like China and Russia, the country asked GitHub to block access to one of those apps by revoking local access to its GitHub repository. GitHub, which is owned by Microsoft, complied with the order.

According to Spanish news outlet El Confidencial, last week the government ordered takedowns of websites and an app made by Tsunami Democratic, an activist group organizing protests in the region. To keep the app available for download, Tsunami Democratic moved the .apk file to GitHub. But the government shut that down too, blocking the site in Spain. Motherboard tested the download over a VPN and confirmed that the GitHub repo was blocked from Madrid.

Can Google Ads Change Your Beliefs?

There are advertisers in the digital marketing industry who want to find out how effective this new form of social engineering is. One of those advertisers is Google. The Redirect Method was a Google-incubated project that used redirect ads to deradicalize would-be extremists. In the first eight weeks of 2016, some 320,000 people — all of whom were believed to harbor sympathy toward the Islamic State — clicked on ads designed to reflect an interest in extremist content. Instead of arriving at a page that supported their views, Islamic State sympathizers who clicked the ads found themselves directed to a playlist of videos debunking the terror group’s recruitment narratives. Most of the visitors stuck around. Together, they watched more than half a million minutes of video.

After the ISIS campaign ended, Google left behind a blueprint.

The blueprint shows, step by step, how you can create your own redirect ads to sway any belief or opinion — held by any Google user, anywhere in the world — of your choice. You don’t have to be a marketer with years of experience to do this. You just need to follow the instructions and put up a credit card (a few hundred bucks will suffice). Recently, I followed the blueprint and created a redirect campaign of my own… The goal of my first redirect campaign was to sway the ideology of suicidal people.

Nearly one in three apparently suicidal searchers who viewed his ad then dialed his hotline number — which then forwarded the calls to the National Suicide Prevention Lifeline. But he expressed surprise that Google “let me run the ads with no issue… I didn’t need qualifications to be a conduit of people’s fates.” He later tried creating another campaign for prospective school shooters, “but the conversion rates were low. These prospective shooters were reluctant to speak with someone.”

Yet one study found that more than 50% of Google users couldn’t distinguish between ads and “organic” search results.

Do Google and Facebook Threaten Our ‘Ambient Privacy’?

Until recently, ambient privacy was a simple fact of life. Recording something for posterity required making special arrangements, and most of our shared experience of the past was filtered through the attenuating haze of human memory. Even police states like East Germany, where one in seven citizens was an informer, were not able to keep tabs on their entire population. Today computers have given us that power. Authoritarian states like China and Saudi Arabia are using this newfound capacity as a tool of social control. Here in the United States, we’re using it to show ads. But the infrastructure of total surveillance is everywhere the same, and everywhere being deployed at scale….

Because our laws frame privacy as an individual right, we don’t have a mechanism for deciding whether we want to live in a surveillance society. Congress has remained silent on the matter, with both parties content to watch Silicon Valley make up its own rules. The large tech companies point to our willing use of their services as proof that people don’t really care about their privacy. But this is like arguing that inmates are happy to be in jail because they use the prison library. Confronted with the reality of a monitored world, people make the rational decision to make the best of it.

That is not consent…

Our discourse around privacy needs to expand to address foundational questions about the role of automation: To what extent is living in a surveillance-saturated world compatible with pluralism and democracy? What are the consequences of raising a generation of children whose every action feeds into a corporate database? What does it mean to be manipulated from an early age by machine learning algorithms that adaptively learn to shape our behavior? That is not the conversation Facebook or Google want us to have. Their totalizing vision is of a world with no ambient privacy and strong data protections, dominated by the few companies that can manage to hoard information at a planetary scale. They correctly see the new round of privacy laws as a weapon to deploy against smaller rivals, further consolidating their control over the algorithmic panopticon.

As Facebook Raised a Privacy Wall, It Carved an Opening for Tech Giants

Internal documents show that the social network gave Microsoft, Amazon, Spotify and others far greater access to people’s data than it has disclosed.

For years, Facebook gave some of the world’s largest technology companies more intrusive access to users’ personal data than it has disclosed, effectively exempting those business partners from its usual privacy rules, according to internal records and interviews.

The special arrangements are detailed in hundreds of pages of Facebook documents obtained by The New York Times. The records, generated in 2017 by the company’s internal system for tracking partnerships, provide the most complete picture yet of the social network’s data-sharing practices. They also underscore how personal data has become the most prized commodity of the digital age, traded on a vast scale by some of the most powerful companies in Silicon Valley and beyond.

Facebook allowed Microsoft’s Bing search engine to see the names of virtually all Facebook users’ friends without consent, the records show, and gave Netflix and Spotify the ability to read Facebook users’ private messages.

The social network permitted Amazon to obtain users’ names and contact information through their friends, and it let Yahoo view streams of friends’ posts as recently as this summer, despite public statements that it had stopped that type of sharing years earlier.

Facebook has been reeling from a series of privacy scandals, set off by revelations in March that a political consulting firm, Cambridge Analytica, improperly used Facebook data to build tools that aided President Trump’s 2016 campaign. Acknowledging that it had breached users’ trust, Facebook insisted that it had instituted stricter privacy protections long ago. Mark Zuckerberg, the chief executive, assured lawmakers in April that people “have complete control” over everything they share on Facebook.

Facebook began forming data partnerships when it was still a relatively young company. Mr. Zuckerberg was determined to weave Facebook’s services into other sites and platforms, believing it would stave off obsolescence and insulate Facebook from competition. Every corporate partner that integrated Facebook data into its online products helped drive the platform’s expansion, bringing in new users, spurring them to spend more time on Facebook and driving up advertising revenue. At the same time, Facebook got critical data back from its partners.

The partnerships were so important that decisions about forming them were vetted at high levels, sometimes by Mr. Zuckerberg and Sheryl Sandberg, the chief operating officer, Facebook officials said. While many of the partnerships were announced publicly, the details of the sharing arrangements typically were confidential.

Facebook also allowed Spotify, Netflix and the Royal Bank of Canada to read, write and delete users’ private messages, and to see all participants on a thread — privileges that appeared to go beyond what the companies needed to integrate Facebook into their systems, the records show. Facebook acknowledged that it did not consider any of those three companies to be service providers. Spokespeople for Spotify and Netflix said those companies were unaware of the broad powers Facebook had granted them. A spokesman for Netflix said Wednesday that it had used the access only to enable customers to recommend TV shows and movies to their friends.

A Royal Bank of Canada spokesman disputed that the bank had had any such access. (Aspects of some sharing partnerships, including those with the Royal Bank of Canada and Bing, were first reported by The Wall Street Journal.)

Spotify, which could view messages of more than 70 million users a month, still offers the option to share music through Facebook Messenger. But Netflix and the Canadian bank no longer needed access to messages because they had deactivated features that incorporated it.

These were not the only companies that had special access longer than they needed it. Yahoo, The Times and others could still get Facebook users’ personal information in 2017.

Yahoo could view real-time feeds of friends’ posts for a feature that the company had ended in 2012. A Yahoo spokesman declined to discuss the partnership in detail but said the company did not use the information for advertising. The Times — one of nine media companies named in the documents — had access to users’ friend lists for an article-sharing application it had discontinued in 2011. A spokeswoman for the news organization said it was not obtaining any data.

Facebook’s internal records also revealed more about the extent of sharing deals with over 60 makers of smartphones, tablets and other devices, agreements first reported by The Times in June.

Facebook empowered Apple to hide from Facebook users all indicators that its devices were asking for data. Apple devices also had access to the contact numbers and calendar entries of people who had changed their account settings to disable all sharing, the records show.

Apple officials said they were not aware that Facebook had granted its devices any special access. They added that any shared data remained on the devices and was not available to anyone other than the users.

AI Mistakes Ad On a Bus For an Actual CEO, Then Publicly Shames Them For ‘Jaywalking’

Since last year, many Chinese cities have cracked down on jaywalking by investing in facial recognition systems and AI-powered surveillance cameras. Jaywalkers are identified and shamed by displaying their photographs on large public screens… Developments are also underway to engage the country’s mobile network operators and social media platforms, such as Tencent Holdings’ WeChat and Sina Weibo, to establish a system in which offenders will receive personal text messages as soon as they are caught violating traffic rules….

Making a compelling case for change is the recent experience of Dong Mingzhu, chairwoman of Gree Electric Appliances, China’s biggest maker of air conditioners, who found her face splashed on a huge screen erected along a street in the port city of Ningbo… The artificial intelligence-backed surveillance system, however, erred in capturing Dong’s image on Wednesday from an advertisement on the side of a moving bus. The traffic police in Ningbo, a city in the eastern coastal province of Zhejiang, were quick to recognise the mistake, writing in a post on microblog Sina Weibo on Wednesday that they had deleted the snapshot. They also said the surveillance system would be completely upgraded to cut incidents of false recognition in future.

When Tech Knows You Better Than You Know Yourself

Algorithms are kind of running where 2 billion people spend their time. Seventy percent of what people watch on YouTube is driven by recommendations from the algorithm. People think that what you’re watching on YouTube is a choice. People are sitting there, they sit there, they think, and then they choose. But that’s not true. Seventy percent of what people are watching is the recommended videos on the right hand side, which means 70 percent of 1.9 billion users, that’s more than the number of followers of Islam, about the number of followers of Christianity, of what they’re looking at on YouTube for 60 minutes a day—that’s the average time people spend on YouTube. So you got 60 minutes, and 70 percent is populated by a computer. The machine is out of control.

Universal Basic Income, Silicon Valley’s push for our further enslavement

Douglas Rushkoff, long-time open source advocate (and currently a professor of Digital Economics at the City University of New York, Queens College), is calling Universal Basic Income “no gift to the masses, but a tool for our further enslavement.”

Uber’s business plan, like that of so many other digital unicorns, is based on extracting all the value from the markets it enters. This ultimately means squeezing employees, customers, and suppliers alike in the name of continued growth. When people eventually become too poor to continue working as drivers or paying for rides, UBI supplies the required cash infusion for the business to keep operating. Looked at the way a software developer would, it’s clear that UBI is really little more than a patch to a program that’s fundamentally flawed. The real purpose of digital capitalism is to extract value from the economy and deliver it to those at the top. If consumers find a way to retain some of that value for themselves, the thinking goes, you’re doing something wrong or “leaving money on the table.”

Walmart perfected the softer version of this model in the 20th century. Move into a town, undercut the local merchants by selling items below cost, and put everyone else out of business. Then, as sole retailer and sole employer, set the prices and wages you want. So what if your workers have to go on welfare and food stamps. Now, digital companies are accomplishing the same thing, only faster and more completely…. Soon, consumers simply can’t consume enough to keep the revenues flowing in. Even the prospect of stockpiling everyone’s data, like Facebook or Google do, begins to lose its allure if none of the people behind the data have any money to spend. To the rescue comes UBI.

The policy was once thought of as a way of taking extreme poverty off the table. In this new incarnation, however, it merely serves as a way to keep the wealthiest people (and their loyal vassals, the software developers) entrenched at the very top of the economic operating system. Because of course, the cash doled out to citizens by the government will inevitably flow to them… Under the guise of compassion, UBI really just turns us from stakeholders or even citizens to mere consumers. Once the ability to create or exchange value is stripped from us, all we can do with every consumptive act is deliver more power to people who can finally, without any exaggeration, be called our corporate overlords… if Silicon Valley’s UBI fans really wanted to repair the economic operating system, they should be looking not to universal basic income but universal basic assets, first proposed by Institute for the Future’s Marina Gorbis… As appealing as it may sound, UBI is nothing more than a way for corporations to increase their power over us, all under the pretense of putting us on the payroll. It’s the candy that a creep offers a kid to get into the car or the raise a sleazy employer gives a staff member who they’ve sexually harassed. It’s hush money.

Rushkoff’s conclusion? “Whether its proponents are cynical or simply naive, UBI is not the patch we need.”

As Google Maps Renames Neighbourhoods, Residents Fume

For decades, the district south of downtown and alongside San Francisco Bay here was known as either Rincon Hill, South Beach or South of Market. This spring, it was suddenly rebranded on Google Maps to a name few had heard: the East Cut. The peculiar moniker immediately spread digitally, from hotel sites to dating apps to Uber, which all use Google’s map data. The name soon spilled over into the physical world, too. Real-estate listings beckoned prospective tenants to the East Cut. And news organizations referred to the vicinity by that term.

“It’s degrading to the reputation of our area,” said Tad Bogdan, who has lived in the neighborhood for 14 years. In a survey of 271 neighbors that he organized recently, he said, 90 percent disliked the name. The swift rebranding of the roughly 170-year-old district is just one example of how Google Maps has now become the primary arbiter of place names. With decisions made by a few Google cartographers, the identity of a city, town or neighborhood can be reshaped, illustrating the outsize influence that Silicon Valley increasingly has in the real world.

Social Media Manipulation Rising Globally, New Oxford Report Warns

A new report from Oxford University found that manipulation of public opinion over social media platforms is growing at a large scale, despite efforts to combat it. “Around the world, government agencies and political parties are exploiting social media platforms to spread junk news and disinformation, exercise censorship and control, and undermine trust in media, public institutions and science.”

“The number of countries where formally organized social media manipulation occurs has greatly increased, from 28 to 48 countries globally,” says Samantha Bradshaw, co-author of the report. “The majority of growth comes from political parties who spread disinformation and junk news around election periods. There are more political parties learning from the strategies deployed during Brexit and the U.S. 2016 Presidential election: more campaigns are using bots, junk news, and disinformation to polarize and manipulate voters.”

This is despite governments in many democracies introducing new legislation designed to combat fake news on the internet. “The problem with this is that these ‘task forces’ to combat fake news are being used as a new tool to legitimize censorship in authoritarian regimes,” says Professor Phil Howard, co-author and lead researcher on the OII’s Computational Propaganda project. “At best, these types of task forces are creating counter-narratives and building tools for citizen awareness and fact-checking.” Another challenge is the evolution of the mediums individuals use to share news and information. “There is evidence that disinformation campaigns are moving on to chat applications and alternative platforms,” says Bradshaw. “This is becoming increasingly common in the Global South, where large public groups on chat applications are more popular.”

How the “Math Men” Overthrew the “Mad Men”

Once, Mad Men ruled advertising. They’ve now been eclipsed by Math Men — the engineers and data scientists whose province is machines, algorithms, pureed data, and artificial intelligence. Yet Math Men are beleaguered, as Mark Zuckerberg demonstrated when he humbled himself before Congress, in April. Math Men’s adoration of data — coupled with their truculence and an arrogant conviction that their ‘science’ is nearly flawless — has aroused government anger, much as Microsoft did two decades ago.

The power of Math Men is awesome. Google and Facebook each has a market value exceeding the combined value of the six largest advertising and marketing holding companies. Together, they claim six out of every ten dollars spent on digital advertising, and nine out of ten new digital ad dollars. They have become more dominant in a global advertising and marketing business estimated at up to two trillion dollars a year. Facebook alone generates more ad dollars than all of America’s newspapers, and Google has twice the ad revenues of Facebook.

Why the Facebook ‘scandal’ impacts you more than you think

It’s not just the data you choose to share.

By now we all know the story: Facebook allowed apps on its social media platform that enabled a shady outfit called Cambridge Analytica to scrape the profiles of 87 million users, in order to serve up targeted ads to benefit the Trump election campaign in 2016. More than 300,000 Australian users of Facebook were caught up in the data harvesting.

But serving up ads in a foreign election campaign is not the whole story.  Facebook, and other companies involved in data mining, are invading our privacy and harming us economically and socially, in ways that are only just starting to become clear.

And it’s not just the data you choose to share. The information you post is not the whole story.  It’s only the tip of the iceberg of data that Facebook has collected about you.

Every time you go online you leave a trail of digital breadcrumbs.  Facebook has been busily sweeping up those breadcrumbs, and using them to categorise and profile you.  Facebook obviously knows when you click on a Facebook ‘like’ button; but also, unless a web developer has gone out of their way to find tools to block them (as we have done for our Salinger Privacy blog), Facebook knows every time you simply look at a website that has a Facebook ‘like’ button somewhere on it.
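
To make that mechanism concrete, here is a minimal sketch, in Python with invented names, of how any third-party widget server can log browsing; it illustrates the technique, and is not Facebook’s actual code. The point is that the reader’s browser fetches the button from the tracker’s server, and that request carries both the tracker’s cookie and the address of the page being read:

```python
# Hypothetical sketch of third-party "like button" tracking, not Facebook's
# actual code. Serving the button script to every page that embeds it lets
# the widget's owner log who is reading what, with no click required.
from flask import Flask, request

app = Flask(__name__)
pageview_log = []  # stand-in for a real profiling database

@app.route("/widgets/like-button.js")
def serve_like_button():
    user_id = request.cookies.get("uid")           # long-lived tracking cookie
    visited_page = request.headers.get("Referer")  # the page embedding the button
    pageview_log.append((user_id, visited_page))   # one browsing-history row per view
    return app.send_static_file("like-button.js")
```

Merely rendering the page fires the request, which is why blocking tools of the kind mentioned above are needed to opt out.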

So if you only post or ‘like’ stories about inspirational mountain climbers and funny cat videos, but also do things online that you don’t share with your family, friends or work colleagues (like looking at stories about abortion or dealing with infidelity, Googling how to manage anxiety or erectile dysfunction, whingeing about your employer in a chatroom, or spending hours reviewing dating profiles, gambling or shopping obsessively for shoes)  — Facebook has you pegged anyway.

Plus, Facebook obtains data from other sources which know about your offline purchases, to build an even richer picture of who you really are.  And of course, Facebook may have access to your address book, your location history, the contents of your private messages, and depending on your brand of phone, possibly even a history of your phone calls and text messages.

All that information is used to draw inferences and assumptions about your preferences, and predict your likely behaviour.  The results are then used to categorise, profile and ultimately target you, in a process usually described as ‘online behavioural advertising’.

It’s not ‘just ads’

The objective of online behavioural advertising is to predict your purchasing interests and drive a purchase decision.  So far, the same as any other advertising.  But online, the implications for us as individuals are much greater.

Facebook’s promise to advertisers is that it can show their ad to exactly who the advertiser wants, and exclude everybody else.

However, by allowing exclusion, the platform also allows discrimination.  Facebook has been caught allowing advertisers to target — and exclude — people on the basis of their ‘ethnic affinity’, amongst other social, demographic, racial and religious characteristics.  So a landlord with an ad for rental housing could prevent people profiled as ‘single mothers’ from ever seeing their ad.  An employer could prevent people identifying as Jewish from seeing a job ad.  A bank could prevent people categorised as African Americans from seeing an ad for a home loan.

Existing patterns of social exclusion, economic inequality and discrimination are further entrenched by micro-targeted advertising, which is hidden from public view and regulatory scrutiny.

Predictive analytics can narrow or alter your life choices

Once we move beyond straight-up advertising and into predictive analytics, the impact on individual autonomy becomes more acute.  Big Data feeds machine learning, which finds patterns in the data, from which new rules (algorithms) are designed.  Algorithms predict how a person will behave, and suggest how they should be treated.

Algorithms can lead to price discrimination, like surge pricing based on Uber knowing how much phone battery life you have left.  Or market exclusion, like Woolworths only offering car insurance to customers it has decided are low risk, based on an assessment of the groceries they buy.
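
As a rough illustration of that pipeline, the sketch below uses entirely hypothetical data and features, loosely modelled on the surge-pricing example above; it is not any company’s real model. A few lines of scikit-learn cover the whole loop: past behaviour in, a learned rule out, and a score that decides how the next person is treated.

```python
# Toy predictive-analytics pipeline (hypothetical data and features):
# historical behaviour in, a learned rule out, a "treatment" applied.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is a past rider: [battery_pct, rides_per_week, avg_fare]
X_history = np.array([[80.0, 2, 12.0],
                      [10.0, 5, 18.0],
                      [65.0, 1, 9.0],
                      [5.0, 7, 22.0]])
y_history = np.array([0, 1, 0, 1])  # 1 = accepted a surged price before

model = LogisticRegression().fit(X_history, y_history)

# A new rider opens the app with 8% battery left...
new_rider = np.array([[8.0, 6, 20.0]])
p_accept = model.predict_proba(new_rider)[0, 1]
multiplier = 1.5 if p_accept > 0.5 else 1.0  # how this person is "treated"
print(f"predicted acceptance: {p_accept:.2f}, multiplier: {multiplier}")
```

The person on the receiving end sees only the final price; the features, the training data and the threshold all stay invisible.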

Banks have been predicting the risk of a borrower defaulting on a loan for decades, but now algorithms are also used to determine who to hire, predict when a customer is pregnant, and deliver targeted search results to influence how you vote.

Algorithms are also being used to predict the students at risk of failure, the prisoners at risk of re-offending, and who is at risk of suicide and then launching interventions accordingly.  However, even leaving aside the accuracy of those predictions, interventions are not necessarily well-intentioned.  It was revealed last year that Australian Facebook executives were touting to advertisers their ability to target psychologically vulnerable teenagers. 

Automated decision-making diminishes our autonomy, by narrowing or altering our market and life choices, in ways that are not clear to us.  People already in a position of economic or social disadvantage face the additional challenge of trying to disprove or beat an invisible algorithm.

In a predictive and pre-emptive world, empathy, forgiveness, rehabilitation, redemption, individual dignity, autonomy and free will are programmed out of our society.

Fiddling with users’ privacy settings on Facebook won’t fix anything.  If we want our lives to be ruled by human values and individual dignity, instead of by machines fed on questionable data, we need robust, enforced and globally effective privacy laws.

A new European privacy law commences later this month.  The obligations include that businesses and governments must offer understandable explanations of how their algorithms work, and allow people to seek human review of automated decision-making.  This is a step in the right direction, which Australia, the US and the rest of the world should follow.

Facebook Really Wants You to Come Back

The social network is getting aggressive with people who don’t log in often, working to keep up its engagement numbers.

It’s been about a year since Rishi Gorantala deleted the Facebook app from his phone, and the company has only gotten more aggressive in its emails to win him back. The social network started out by alerting him every few days about friends that had posted photos or made comments—each time inviting him to click a link and view the activity on Facebook. He rarely did.

Then, about once a week in September, he started to get prompts from a Facebook security customer-service address. “It looks like you’re having trouble logging into Facebook,” the emails would say. “Just click the button below and we’ll log you in. If you weren’t trying to log in, let us know.” He wasn’t trying. But he doesn’t think anybody else was, either.

“The content of mail they send is essentially trying to trick you,” said Gorantala, 35, who lives in Chile. “Like someone tried to access my account so I should go and log in.”

How Facebook’s Political Unit Enables the Dark Art of Digital Propaganda

Under fire for Facebook Inc.’s role as a platform for political propaganda, co-founder Mark Zuckerberg has punched back, saying his mission is above partisanship. “We hope to give all people a voice and create a platform for all ideas,” Zuckerberg wrote in September after President Donald Trump accused Facebook of bias. Zuckerberg’s social network is a politically agnostic tool for its more than 2 billion users, he has said. But Facebook, it turns out, is no bystander in global politics. What Zuckerberg hasn’t said is that his company actively works with political parties and leaders, including those who use the platform to stifle opposition — sometimes with the aid of “troll armies” that spread misinformation and extremist ideologies.

The initiative is run by a little-known Facebook global government and politics team that’s neutral in that it works with nearly anyone seeking or securing power. The unit is led from Washington by Katie Harbath, a former Republican digital strategist who worked on former New York Mayor Rudy Giuliani’s 2008 presidential campaign. Since Facebook hired Harbath three years later, her team has traveled the globe helping political clients use the company’s powerful digital tools. In some of the world’s biggest democracies — from India and Brazil to Germany and the U.K. — the unit’s employees have become de facto campaign workers. And once a candidate is elected, the company in some instances goes on to train government employees or provide technical assistance for live streams at official state events.

“Are you happy now? The uncertain future of emotion analytics”

Elise Thomas writes at Hopes & Fears:

“Right now, in a handful of computing labs scattered across the world, new software is being developed which has the potential to completely change our relationship with technology. Affective computing is about creating technology which recognizes and responds to your emotions. Using webcams, microphones or biometric sensors, the software uses a person’s physical reactions to analyze their emotional state, generating data which can then be used to monitor, mimic or manipulate that person’s emotions.”

Corporations spend billions each year trying to build “authentic” emotional connections to their target audiences. Marketing research is one of the most prolific research fields around, conducting thousands of studies on how to more effectively manipulate consumers’ decision-making. Advertisers are extremely interested in affective computing and particularly in a branch known as emotion analytics, which offers unprecedented real-time access to consumers’ emotional reactions and the ability to program alternative responses depending on how the content is being received.

For example, if two people watch an advertisement with a joke and only one person laughs, the software can be programmed to show more of the same kind of advertising to the person who laughs while trying different sorts of advertising on the person who did not laugh to see if it’s more effective. In essence, affective computing could enable advertisers to create individually-tailored advertising en masse.
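
A minimal sketch of that feedback rule, with invented style labels and function names; real emotion-analytics systems are far more elaborate, but the branching logic is the same:

```python
import random

AD_STYLES = ["humour", "fear", "aspiration", "nostalgia"]

def next_ad_style(profile: dict, reaction: str, shown_style: str) -> str:
    """Choose the next ad style from a viewer's measured emotional reaction."""
    if reaction == "laughter":                  # the content landed,
        profile["responsive"].add(shown_style)  # so remember what works
        return shown_style                      # and show more of the same
    profile["tried"].add(shown_style)           # the content missed
    untried = [s for s in AD_STYLES if s not in profile["tried"]]
    return random.choice(untried or AD_STYLES)  # experiment with another style

# Two viewers watch the same joke; only one laughs.
alice = {"responsive": set(), "tried": set()}
bob = {"responsive": set(), "tried": set()}
next_ad_style(alice, "laughter", "humour")  # -> "humour" again
next_ad_style(bob, "neutral", "humour")     # -> some other, untried style
```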

“Say 15 years from now a particular brand of weight loss supplements obtains a particular girl’s information and locks on. When she scrolls through her Facebook, she sees pictures of rail-thin celebrities, carefully calibrated to capture her attention. When she turns on the TV, it automatically starts on an episode of “The Biggest Loser,” tracking her facial expressions to find the optimal moment for a supplement commercial. When she sets her music on shuffle, it “randomly” plays through a selection of the songs which make her sad. This goes on for weeks.

“Now let’s add another layer. This girl is 14, and struggling with depression. She’s being bullied in school. Having become the target of a deliberate and persistent campaign by her technology to undermine her body image and sense of self-worth, she’s at risk of making some drastic choices.”

Google forming ‘smart cities’

“An ambitious project to blanket New York and London with ultrafast Wi-Fi via so-called “smart kiosks,” which will replace obsolete public telephones, is the work of a Google-backed startup.

Each kiosk is around nine feet high and relatively flat. Each flat side houses a big-screen display that pays for the whole operation with advertising.

Each kiosk provides free, high-speed Wi-Fi for anyone in range. After selecting the Wi-Fi network at one kiosk and authenticating with an email address, a user is automatically connected to every other LinkNYC kiosk they come within range of. Eventually, anyone will be able to walk around most of the city without losing the connection to these hotspots.

Wide-angle cameras on each side of the kiosks point up and down the street and sidewalk, approximating a 360-degree view. If a city wants to use those cameras and sensors for surveillance, it can.

Over the next 15 years, the city will go through the other two phases, in which sensor data will be processed by artificial intelligence to gain unprecedented insights about traffic, environment and human behavior, and eventually used to intelligently re-direct traffic and shape other city functions.”

The data analytics company Cambridge Analytica

The Guardian is running an article about a ‘mysterious’ big-data analytics company called Cambridge Analytica and its activities with SCL Group—a 25-year-old military psyops company in the UK later bought by “secretive hedge fund billionaire” Robert Mercer. In the article, a former employee calls it “this dark, dystopian data company that gave the world Trump.”

Mercer, who has a background in computer science, is alleged to be at the centre of a multimillion-dollar propaganda network.

“Facebook was the source of the psychological insights that enabled Cambridge Analytica to target individuals. It was also the mechanism that enabled them to be delivered on a large scale. The company also (perfectly legally) bought consumer datasets — on everything from magazine subscriptions to airline travel — and uniquely it appended these with the psych data to voter files… Finding “persuadable” voters is key for any campaign and with its treasure trove of data, Cambridge Analytica could target people high in neuroticism, for example, with images of immigrants “swamping” the country.

The key is finding emotional triggers for each individual voter. Cambridge Analytica worked on campaigns in several key states for a Republican political action committee. Its key objective, according to a memo the Observer has seen, was “voter disengagement” and “to persuade Democrat voters to stay at home”… In the U.S., the government is bound by strict laws about what data it can collect on individuals. But, for private companies anything goes.”

Facebook: Cracking the Code (2017)

“What’s on your mind?” It’s the friendly Facebook question which lets you share how you’re feeling. It’s also the question that unlocks the details of your life and helps turn your thoughts into profits.

Facebook has the ability to track much of your browsing history, even when you’re not logged on, and even if you aren’t a member of the social network at all. This is one of the methods used to deliver targeted advertising and ‘news’ to your Facebook feed. This is why you are unlikely to see anything that challenges your world view.

This feedback loop is fuelling the rise and power of ‘fake news’. “We’re seeing news that’s tailored ever more tightly towards those kinds of things that people will click on, and will share, rather than things that perhaps are necessarily good for them”, says one Media Analyst.

This information grants huge power to those with access to it. Republican Party strategist Patrick Ruffini says, “What it does give us is much greater level of certainty and granularity and precision down to the individual voter, down to the individual precinct about how things are going to go”. As a result, former Facebook journalist Adam Schrader thinks there’s “a legitimate argument to this that Facebook influenced the election, the United States Election results.”

Children as young as 13 are attending ‘smartphone rehab’

Children refusing to put down their phones is a common flashpoint in many homes, with a third of British children aged 12 to 15 admitting they do not have a good balance between screen time and other activities.

But in the US, the problem has become so severe for some families that children as young as 13 are being treated for digital technology addiction.

One ‘smartphone rehab’ centre near Seattle has started offering residential “intensive recovery programs” for teenagers who have trouble controlling their use of electronic devices.

The Restart Life Centre says parents have been asking it to offer courses of treatment to their children for more than eight years.

Hilarie Cash, the Centre’s founder, told Sky News smartphones, tablets and other mobile devices can be so stimulating and entertaining that they “override all those natural instincts that children actually have for movement and exploration and social interaction”.

Child psychotherapist Julie Lynn Evans, who has worked with hospitals, schools and families for 25 years, said her workload has significantly increased since the use of smartphones became widespread among young people.

“It’s a simplistic view, but I think it is the ubiquity of broadband and smartphones that has changed the pace and the power and the drama of mental illness in young people,” she told The Telegraph.

A ComRes poll of more than 1,000 parents of children aged under 18, published in September 2015, found 47 per cent of parents said they thought their children spent too much time in front of screens, with 43 per cent saying this amounts to an emotional dependency.

The Internet of Things is a surveillance nightmare

… or a dream come true for those in power. And those in power are the same entities pushing IoT technologies.

A little background reading about JTRIG from the Snowden documents is helpful. It’s the modern-day equivalent of Zersetzung—the Stasi program used to attack, repress and sabotage political opponents. It’s a power greatly expanded in a society driven by IoT.

Full article from Daily Dot:

“In 2014, security guru Bruce Schneier said, “Surveillance is the business model of the Internet. We build systems that spy on people in exchange for services. Corporations call it marketing.” The abstract and novel nature of these services tends to obscure our true relationship to companies like Facebook or Google. As the old saying goes, if you don’t pay for a product, you are the product.

But what happens when the Internet stops being just “that fiddly thing with a mouse” and becomes “the real world”? Surveillance becomes the business model of everything, as more and more companies look to turn the world into a collection of data points.

If we truly understood the bargain we were making when we give up our data for free or discounted services, would we still sign on the dotted line (or agree to the Terms and Conditions)? Would we still accept constant monitoring of our driving habits in exchange for potential insurance breaks, or allow our energy consumption to be uploaded into the cloud in exchange for “smart data” about it?

Nowhere is our ignorance of the trade-offs greater, or the consequences more worrisome, than our madcap rush to connect every toaster, fridge, car, and medical device to the Internet.

Welcome to the Internet of Things, what Schneier calls “the World Size Web,” already growing around you as we speak, which creates such a complete picture of our lives that Dr. Richard Tynan of Privacy International calls them “doppelgängers”—mirror images of ourselves built on constantly updated data. These doppelgängers live in the cloud, where they can easily be interrogated by intelligence agencies. Nicholas Weaver, a security researcher at University of California, Berkeley, points out that “Under the FISA Amendments Act 702 (aka PRISM), the NSA can directly ask Google for any data collected on a valid foreign intelligence target through Google’s Nest service, including a Nest Cam.” And that’s just one, legal way of questioning your digital doppelgänger; we’ve all heard enough stories about hacked cloud storage to be wary of trusting our entire lives to it.

But with the IoT, the potential goes beyond simple espionage, into outright sabotage. Imagine an enemy that can remotely disable the brakes in your car, or (even more subtly) give you food poisoning by hacking your fridge. That’s a new kind of power. “The surveillance, the interference, the manipulation … the full life cycle is the ultimate nightmare,” says Tynan.

The professional spies agree that the IoT changes the game. “‘Transformational’ is an overused word, but I do believe it properly applies to these technologies,” then CIA Director David Petraeus told a 2012 summit organized by the agency’s venture capital firm, In-Q-Tel, “particularly to their effect on clandestine tradecraft,” according to Wired.

Clandestine tradecraft is not about watching, but about interfering. Take, for example, the Joint Threat Research Intelligence Group (JTRIG), the dirty tricks division of GCHQ, the British intelligence agency. As the Snowden documents reveal, JTRIG wants to create “Cyber Magicians” who can “make something happen in the real…world,” including ruining business deals, intimidating activists, and sexual entrapment (“honeypots”). The documents show that JTRIG operatives will ignore international law to achieve their goals, which are not about fighting terrorism, but, in fact, targeting individuals who have not been charged with or convicted of any crime.

The Internet of Things “is a JTRIG wet dream,” says security researcher Rob Graham. But you don’t have to be a spy to take advantage of the IoT. Thanks to widespread security vulnerabilities in most IoT devices, almost anyone can take advantage of it. That means cops, spies, gangsters, anyone with the motivation and resources—but probably bored teenagers as well. “I can take any competent computer person and take them from zero to Junior Hacker 101 in a weekend,” says security researcher Dan Tentler. The security of most IoT devices—including home IoT, but also smart cities, power plants, gas pipelines, self-driving cars, and medical devices—is laughably bad. “The barrier to entry is not very tall,” he says, “especially when what’s being released to consumers is so trivial to get into.”

That makes the IoT vulnerable—our society vulnerable—to any criminal with a weekend to spend learning how to hack. “When we talk about vulnerabilities in computers…people are using a lot of rhetoric in the abstract,” says Privacy International’s Tynan. “What we really mean is, vulnerable to somebody. That somebody you’re vulnerable to is the real question.”

“They’re the ones with the power over you,” he added. That means intelligence agencies, sure, but really anyone with the time and motivation to learn how to hack. And, as Joshua Corman of I Am the Cavalry, a concerned group of security researchers, once put it, “There are as many motivations to hacking as there are motivations in the human condition. Hacking is a form of power.”

The authorities want that power; entities like JTRIG, the NSA, the FBI and the DOJ want to be able to not just surveil but also to disrupt, to sabotage, to interfere. Right now the Bureau wants to force Apple to create the ability to deliver backdoored software updates to iPhones, allowing law enforcement access to locally stored, encrypted data. Chris Soghoian, a technologist at the ACLU, tweeted, “If DOJ get what they want in this Apple case, imagine the surveillance assistance they’ll be able to force from Internet of Things companies.”

“The notion that there are legal checks and balances in place is a fiction,” Tynan says. “We need to rely more on technology to increase the hurdles required. For the likes of JTRIG to take the massive resources of the U.K. state and focus them on destroying certain individuals, potentially under flimsy pretenses—I just can’t understand the mentality of these people.”

Defending ourselves in this new, insecure world is difficult, perhaps impossible. “If you go on the Internet, it’s a free-for-all,” Tentler says. “Despite the fact that we have these three-letter agencies, they’re not here to help us; they’re not our friends. When the NSA and GCHQ learn from the bad guys and use those techniques on us, we should be worried.”

If the Internet is a free-for-all, and with the Internet of Things we’re putting the entire world on the Internet, what does that make us?

“Fish in a barrel?”