Resources

Stare Into The Lights My Pretties

Our phones make us feel like social-media activists, but they’re actually turning us into bystanders

On April 9, 2017, a video of a man being dragged off a United Airlines flight was posted on the internet and went viral. But I don’t need to tell you that. Each of your most outspoken Facebook friends probably posted about the event, highlighting the aspects of it that best reinforced their worldview. The incident was covered all over American media and even sparked outrage in China.

The collective focus may have now moved on to its next source of outrage, but there was something that only a few people noticed in the moment: a plane full of quiet passengers. Other than one woman screaming, hardly anyone else on the plane seemed bothered enough by what was happening to raise a ruckus. This calm scene is a rather unlikely precursor to the uproar that unfolded hours later on Facebook and Twitter.

Instead of intervening in the assault, the passengers stoically took out their cameraphones and pointed them toward David Dao, whose body was dragged along the aisle of the airplane, glasses askew, face bloody, and belly exposed. Their immediate response was not to speak out against the outrageousness of what was going on, but to create an instant digital record of the incident.

The act of recording a violent event but staying silent is a modern manifestation of the bystander effect. The bystander effect occurs when people refrain from intervening in an emergency situation because there are other people around. Psychologists Bibb Latané and John Darley, who first demonstrated the bystander effect, attributed this phenomenon to two factors: a perceived diffusion of responsibility (thinking that someone else in the group will help) and social influence (where observers see the inaction of the group as evidence that there is no reason to intervene).

Our cameraphones may make us feel like social-media activists, but when we’re recording an event instead of intervening, we’re actually just real-world bystanders. There is a gulf of dissonance between what we publicly declare as our values—online or otherwise—and how we act.

In the past few years, there have been scores of videos depicting abuse that have been recorded and then disseminated online. In New Jersey in 2014, people watched and recorded as a woman was punched and kicked by a co-worker. (The only one who said anything was her 2-year-old child, who knew, naturally, to help.) In Philadelphia in 2016, a man was beating and punching a woman in the streets while an observer videotaped the event. Even without violence, the temptation to be a recording bystander prevails. Take the case of a 2013 fire in Pincourt, Canada, where observers recorded the house burning to the ground from all angles — but nobody called the fire department.

To prevent a culture of disembodied bystanders, we must learn to better assess the appropriate actions when we’re in a situation that demands immediate attention. In doing so, we can hopefully transcend the idea that recording an event is a replacement for action.

Sam Gregory is a program director at WITNESS, a global organization that incorporates video technology into human-rights advocacy. The goal of Gregory’s primary project, Mobil-Eyes-Us, is to find ways to translate “co-presence” into action. “In these types of events, people do freeze,” Gregory says. “The goal is to get over the freeze reaction.”

Filming events doesn’t relinquish our moral responsibility to intervene, but Gregory believes it’s “a step up from the Kitty Genovese incident,” the infamous 1964 stabbing in Queens, New York, that 38 neighbors reportedly observed over half an hour without calling the police or stepping in to intervene. If those 38 people lived in an age of smartphones, you can safely bet what a large portion of them would be doing.

Gregory says the idea of his project is to develop “witnessing literacy”: a repertoire of actions people can take in order to prevent unethical witnessing. To that end, the WITNESS website has abundant resources and guides, from teaching observers how to capture and preserve video as evidence to how to protect your identity on YouTube. The organization has also produced a mini-guide to capturing ethical footage and a video showing how to share the United Airlines video in a way that would protect the victim, David Dao.

This said, documenting an event is only a viable contribution to a dire situation if the footage is then used in an ethical manner; it’s not the recording that matters, it’s what you do with it. For example, a video of an assault on your phone helps no one if it’s not formally filed with the police or uploaded to the internet in an effective, ethical manner. And with all that effort, wouldn’t it have been better to try to pipe up in the moment? (If all else fails, you might also try to sing, which is what one brave woman did to fend off a man harassing a woman on public transport.)

Viral videos that incite outrage and prod at our sense of justice demonstrate both the difficulty and necessity of acting in accordance with our values. We argue so much online about the actions of people who we do not know and will never meet, and this takes time away from looking at our own actions and preparing ourselves to act better in similar situations. As we thank the one woman on the plane who dared to speak up on the United flight, we should consider what else productive protest looks like so that each of us has a repertoire of counter-violent actions to take.

For now, those of us who wish to believe in a world where people look out for each other will have to take it upon ourselves to lead by example. We should learn how to translate our digital frustrations into analog action.

Fake Cancerous Nodes in CT Scans, Created By Malware, Trick Radiologists

Researchers in Israel say they have developed malware to draw attention to serious security weaknesses in critical medical imaging equipment used for diagnosing conditions and the networks that transmit those images — vulnerabilities that could have potentially life-altering consequences if unaddressed. The malware they created would let attackers automatically add realistic, malignant-seeming growths to CT or MRI scans before radiologists and doctors examine them. Or it could remove real cancerous nodules and lesions without detection, leading to misdiagnosis and possibly a failure to treat patients who need critical and timely care.

Yisroel Mirsky, Yuval Elovici and two others at the Ben-Gurion University Cyber Security Research Center in Israel who created the malware say that attackers could target a presidential candidate or other politicians to trick them into believing they have a serious illness and cause them to withdraw from a race to seek treatment. The research isn’t theoretical. In a blind study the researchers conducted involving real CT lung scans, 70 of which were altered by their malware, they were able to trick three skilled radiologists into misdiagnosing conditions nearly every time. In the case of scans with fabricated cancerous nodules, the radiologists diagnosed cancer 99 percent of the time. In cases where the malware removed real cancerous nodules from scans, the radiologists said those patients were healthy 94 percent of the time.

Are Phone-Addicted Drivers More Dangerous Than Drunk Drivers?

After crunching data on 4.5 billion miles of driving, road-safety analytics company Zendrive concludes there’s a new threat that claimed the lives of 6,227 pedestrians last year: drivers “under the influence of a smartphone.”

The study points out that drunk driving fatalities peak after midnight, while distracted driving happens all day, concluding that distracted driving is now a bigger threat than drunk driving.

“Phone addicts are the new drunk drivers,” Zendrive concludes bluntly in its annual distracted driving study. The big picture: The continued increase in unsafe driving comes despite stricter laws in many states, as well as years of massive ad campaigns from groups ranging from cell phone carriers to orthopedic surgeons. “They hide in plain sight, blatantly staring at their phones while driving down the road,” Zendrive says in the study.

And it’s a growing problem. Over just the past year, Zendrive, which analyzes driver behavior for fleets and insurers, said the number of hardcore phone addicts doubled, now accounting for one in 12 drivers. If the current trend continues, that number will be one in five by 2022.

The report concludes drivers are 10 percent more distracted this year than last — and that phone addicts have their eyes off the road for 28% of their drive. Yet when asked to describe their driving, 93% of phone addicts said they believed they were “safe” — or “extremely safe” — drivers.

One even insisted that they never texted while driving, “but I like to FaceTime my friends while driving since it makes time go by faster.”

Paris street to ‘shut out Instagrammers’

Instagrammers love the colorful homes on Paris’s Rue Cremieux. But residents have now had enough and are calling on the city council to restrict access at certain times.

Residents have asked the city council to provide a gate that can be closed at peak times — evenings, weekends and at sunrise and sunset, when good light attracts people searching for a perfect Instagram picture. One resident told radio station France Info: “We sit down to eat and just outside we have people taking photos, rappers who take two hours to film a video right beneath the window, or bachelorette parties who scream for an hour. Frankly, it’s exhausting.”

Nearly All US Teens Short On Sleep, Exercise

Too little sleep. Not enough exercise. Far too much “screen time.” That is the unhealthy lifestyle of nearly all U.S. high school students, new research finds. The study, of almost 60,000 teenagers nationwide, found that only 5 percent were meeting experts’ recommendations on three critical health habits: sleep, exercise, and time spent gazing at digital media and television. “Five percent is a really low proportion,” said study leader Gregory Knell, a research fellow at the University of Texas School of Public Health in Dallas. “We were a bit surprised by that.”

“If kids are viewing a screen at night — staring at that blue light — that may affect their ability to sleep,” Knell said. “And if you’re not getting enough sleep at night, you’re going to be more tired during the day,” he added, “and you’re not going to be as physically active.”

Experts recommend a minimum of 8 hours of sleep at night for teenagers, plus at least one hour every day of “moderate to vigorous” exercise.

One professor of adolescent medicine points out that some high school homework now even requires using a computer — even though too much screen time can affect teenagers’ ability to sleep.

Facebook decided which users are interested in Nazis—and let advertisers target them directly

Facebook makes money by charging advertisers to reach just the right audience for their message — even when that audience is made up of people interested in the perpetrators of the Holocaust or explicitly neo-Nazi music. Despite promises of greater oversight following past advertising scandals, a Times review shows that Facebook has continued to allow advertisers to target hundreds of thousands of users the social media firm believes are curious about topics such as “Joseph Goebbels,” “Josef Mengele,” “Heinrich Himmler,” the neo-Nazi punk band Skrewdriver and Benito Mussolini’s long-defunct National Fascist Party.

Experts say that this practice runs counter to the company’s stated principles and can help fuel radicalization online. “What you’re describing, where a clear hateful idea or narrative can be amplified to reach more people, is exactly what they said they don’t want to do and what they need to be held accountable for,” said Oren Segal, director of the Anti-Defamation League’s center on extremism. After being contacted by The Times, Facebook said that it would remove many of the audience groupings from its ad platform.

Facebook’s broad reach and sophisticated advertising tools brought in a record $55 billion in ad revenue in 2018. Profit margins stayed above 40%, thanks to a high degree of automation, with algorithms sorting users into marketable subsets based on their behavior — then choosing which ads to show them. But the lack of human oversight has also brought the company controversy.

In 2017, ProPublica found that the company sold ads based on any user-generated phrase, including “Jew hater” and “Hitler did nothing wrong.” Following the murder of 11 congregants at a synagogue in Pittsburgh in 2018, the Intercept found that Facebook gave advertisers the ability to target users interested in the anti-Semitic “white genocide conspiracy theory,” which the suspected killer cited as inspiration before the attacks.

Internet Addiction Spawns US Treatment Programs

When Danny Reagan was 13, he began exhibiting signs of what doctors usually associate with drug addiction. He became agitated, secretive and withdrew from friends. He had quit baseball and Boy Scouts, and he stopped doing homework and showering. But he was not using drugs. He was hooked on YouTube and video games, to the point where he could do nothing else. As doctors would confirm, he was addicted to his electronics. “After I got my console, I kind of fell in love with it,” Danny, now 16 and a junior in a Cincinnati high school, said. “I liked being able to kind of shut everything out and just relax.” Danny was different from typical plugged-in American teenagers. Psychiatrists say internet addiction, characterized by a loss of control over internet use and disregard for the consequences of it, affects up to 8 percent of Americans and is becoming more common around the world.

“We’re all mildly addicted. I think that’s obvious to see in our behavior,” said psychiatrist Kimberly Young, who has led the field of research since founding the Center for Internet Addiction in 1995. “It becomes a public health concern obviously as health is influenced by the behavior.” At first, Danny’s parents took him to doctors and made him sign contracts pledging to limit his internet use. The “Reboot” program at the Lindner Center for Hope offers inpatient treatment for 11 to 17-year-olds who, like Danny, have addictions including online gaming, gambling, social media, pornography and sexting, often to escape from symptoms of mental illnesses such as depression and anxiety. Reboot patients spend 28 days at a suburban facility equipped with 16 bedrooms, classrooms, a gym and a dining hall. They undergo diagnostic tests, psychotherapy, and learn to moderate their internet use.

Schools Are Locking Students’ Phones Away to Help With Concentration

After one teacher at San Lorenzo High School brought pouches, created by the tech start-up Yondr, into her classroom to lock away students’ phones, the entire school began using them from the beginning of the school day at 8 a.m. until the end of the day at 3:10 p.m. According to a 2018 study from the Pew Research Center, more than half of teens said they felt loneliness, anxiety, or upset in the absence of a cellphone. The study also found that girls were more likely to feel these sentiments than boys.

“If something feels weird about modern life to young kids who are dealing with a lot of angst and anxiety in general, maybe it has something to do with relating to the world primarily through a screen eight hours a day,” Yondr’s founder Graham Dugoni told CNBC. Students said they initially felt awkward and annoyed having their phones taken away during the school day, but added that they started to see more teens interacting with each other. One student added that not having a phone in class helped with concentration.

Killing tourist destinations for an Instagram photo

Overtourism is taking a toll across the globe, with closures of popular destinations in Thailand and the Philippines, and backlash from residents in cities like Venice and Barcelona. Closer to home, places like Bali, Byron Bay and parts of Tasmania have also been feeling pressure from skyrocketing visitors.

“The problem we’ve got is that we’re all congregating on the same places at the same time of the year,” says Justin Francis, CEO of the UK-based Responsible Travel.

Mr Francis says part of the problem is that the “ethos of travel” is changing: in the social media era, it’s now more about “where you want to be seen”. “Getting the photo and getting it on Instagram or Facebook is becoming the purpose of the trip — it’s the reason for going,” he says.

Travellers have also been drawn to places from their favourite films or TV shows, in a trend known as “set jetting”.

Have Smartphones Destroyed a Generation?

More comfortable online than out partying, post-Millennials are safer, physically, than adolescents have ever been. But they’re on the brink of a mental-health crisis.

Link Between Social Media and Depression Stronger In Teen Girls Than Boys, Study Says

According to a new study published in the journal EClinicalMedicine, the link between social media use and depressive symptoms in 14-year-olds may be much stronger for girls than boys. CNN reports:
Among teens who use social media the most — more than five hours a day — the study showed a 50% increase in depressive symptoms among girls versus 35% among boys, when their symptoms were compared with those who use social media for only one to three hours daily. Yet the study, conducted in the UK, showed only an association between social media use and symptoms of depression, which can include feelings of unhappiness, restlessness or loneliness. The findings cannot prove that frequent social media use caused depressive symptoms, or vice versa. The study also described other factors, such as lack of sleep and cyberbullying, that could help explain this association.

For the study, researchers analyzed data on 10,904 14-year-olds who were born between 2000 and 2002 in the United Kingdom. The data, which came from the UK Millennium Cohort Study, included information from questionnaires on the teens’ depressive symptoms and social media use. Depressive symptoms were recorded as scores, and the researchers looked at which teens had high or low scores. They found that on average, girls had higher depressive symptom scores compared with boys. The researchers also found that girls reported more social media use than boys; 43.1% of girls said they used social media for three or more hours per day, versus 21.9% of boys. The data showed that for teens using social media for three to five hours, 26% of girls and 21% of boys had depressive symptom scores higher than those who used social media for only about one to three hours a day.

As for the gender gap, Yvonne Kelly, first author of the study and professor of epidemiology and public health, believes it has to do with “the types of things that girls and boys do online.”

“In the UK, girls tend to more likely use things like Snapchat or Instagram, which is more based around physical appearance, taking photographs and commenting on those photographs,” she said. “I think it has to do with the nature of use.”

“How YouTube’s Year-in-Review ‘Rewind’ Video Set Off a Civil War”

You might guess that a surefire way to make a hit video on YouTube would be to gather a bunch of YouTube megastars, film them riffing on some of the year’s most popular YouTube themes and release it as a year-in-review spectacular. You would be wrong.

The issue that upset so many YouTube fans, it turns out, was what the Rewind video did not show. To many, it felt like evidence that YouTube the company was snubbing YouTube the community by featuring mainstream celebrities in addition to the platform’s homegrown creators, and by glossing over major moments in favor of advertiser-friendly scenes.

If YouTube had been trying to create an accurate picture of its platform’s most visible faces, it would need to include bigots, reactionaries and juvenile shock jocks. A YouTube recap that includes only displays of tolerance and pluralism is a little like a Weather Channel highlight reel featuring only footage of sunny days — it might be more pleasant to look at, but it doesn’t reflect the actual weather.

As smartphones and internet connections rise in Africa, so does entertainment streaming

Growing smartphone and internet penetration across many African countries saw global streaming companies make a deeper play for African audiences this year.

Netflix signaled its interest in Africa by hiring a content producer for the region and took on MultiChoice, the continental satellite TV giant owned by Naspers, Africa’s most valuable company. The Los Gatos, California company spooked MultiChoice with everything from trolling online ads to billboards placed conspicuously close to their Johannesburg headquarters. MultiChoice has clearly taken notice and has called for Netflix to be regulated.

“Influencers” Are Being Paid Big Sums To Pitch Products and Thrash Rivals on Instagram and YouTube

“Influencers” are being paid big sums to pitch products on Instagram and YouTube. If you’re trying to grow a product on social media, you either fork over cash or pay in another way. This is the murky world of influencing, reports Wired. Brands will pay influencers to position products on their desks, behind them, or anywhere else they can subtly appear on screen. Payouts increase if an influencer tags a brand in a post or includes a link, but silent endorsements are often preferred.

Marketers of literature, wellness, fashion, entertainment, and other wares are all hooked on influencers. As brands have warmed to social-media advertising, influencer marketing has grown into a multibillion-dollar industry. Unlike traditional television or print ads, influencers have dedicated niche followings who take their word as gospel.

There’s another plus: Many users don’t view influencers as paid endorsers or salespeople—even though a significant percentage are—but as trusted experts, friends, and “real” people. This perceived authenticity is part of why brands shell out so much cash in exchange for a brief appearance in your Instagram feed.

What Your Phone is Telling Wall Street

Your phone knows where you shop, where you work and where you sleep. Hedge funds are very interested in such data, so they are buying it.

When Tesla Chief Executive Elon Musk said the car maker would work around the clock to boost production of its Model 3 sedan, the number crunchers at Thasos Group decided to watch. They circled Tesla’s 370 acres in Fremont, Calif., on an online map, creating a digital corral to isolate smartphone location signals that emanated from within it. Thasos, which leases databases of trillions of geographic coordinates collected by smartphone apps, set its computers to find the pings created at Tesla’s factory, then shared the data with its hedge-fund clients, showing the overnight shift swelled 30% from June to October.

Last month, many on Wall Street were surprised when Tesla disclosed a rare quarterly profit, the result of Model 3 production that had nearly doubled in three months. Shares shot up 9.1% the next day. Thasos is at the vanguard of companies trying to help traders get ahead of stock moves like that using so-called alternative data. Such suppliers might examine mine slag heaps from space, analyze credit-card spending data or sort through construction permits. Thasos’s specialty is the data spewing out of your smartphone.

Thasos gets data from about 1,000 apps, many of which need to know a phone’s location to be effective, like those providing weather forecasts, driving directions or the whereabouts of the nearest ATM. Smartphone users, wittingly or not, share their location when they use such apps. Before Thasos gets the data, suppliers scrub it of personally identifiable information, Mr. Skibiski said. It is just time-stamped strings of longitude and latitude. But with more than 100 million phones providing such coordinates, Thasos says it can paint detailed pictures of the ebb and flow of people, and thus their money.
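The “digital corral” described above is, at its core, a geofence: a polygon drawn on a map plus a point-in-polygon test applied to each time-stamped coordinate pair. Here is a minimal sketch of that step in Python using a standard ray-casting test — the polygon and pings below are invented for illustration, and Thasos’s actual pipeline is proprietary and far more elaborate:

```python
def point_in_polygon(lat, lon, polygon):
    """Ray-casting test: cast a ray east from the point and count how
    many polygon edges it crosses (an odd count means the point is inside)."""
    inside = False
    n = len(polygon)
    for i in range(n):
        lat1, lon1 = polygon[i]
        lat2, lon2 = polygon[(i + 1) % n]
        # Only edges that straddle the point's latitude can be crossed
        if (lat1 > lat) != (lat2 > lat):
            # Longitude at which the edge crosses this latitude
            cross = lon1 + (lat - lat1) * (lon2 - lon1) / (lat2 - lat1)
            if lon < cross:
                inside = not inside
    return inside

# Hypothetical rectangle standing in for a factory boundary
factory = [(37.48, -121.95), (37.50, -121.95),
           (37.50, -121.93), (37.48, -121.93)]

# Anonymized pings: (timestamp, latitude, longitude)
pings = [
    ("2018-10-01T23:30", 37.49, -121.94),  # inside the corral
    ("2018-10-01T23:45", 37.60, -121.90),  # elsewhere
]

inside_count = sum(1 for _, la, lo in pings
                   if point_in_polygon(la, lo, factory))
print(inside_count)  # 1
```

Counting pings inside the fence shift by shift, night after night, is what would let an analyst claim an overnight workforce “swelled 30%.” A production system would use a proper geometry library and handle edge cases this sketch ignores, such as points lying exactly on a boundary.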

When Tech Knows You Better Than You Know Yourself

Algorithms are kind of running where 2 billion people spend their time. Seventy percent of what people watch on YouTube is driven by recommendations from the algorithm. People think that what they’re watching on YouTube is a choice: they sit there, they think, and then they choose. But that’s not true. Seventy percent of what people are watching is the recommended videos on the right-hand side. That means for 70 percent of 1.9 billion users (more than the number of followers of Islam, about the number of followers of Christianity), what they’re looking at during the 60 minutes a day they average on YouTube is chosen for them. So you’ve got 60 minutes, and 70 percent is populated by a computer. The machine is out of control.

What Children Want to Know About Computers

When visiting a series of eight primary school classrooms recently, I talked to children (aged 5–12) about how computers work. They drew pictures of what they thought was inside a computer, and then we discussed the drawings as a class. Many of the children knew the names of the components within a computer: a chip, memory, a disc, and they were often insistent that there should be a fan in there. They knew that there would be wires inside, and that it would need a battery to make it work. The child who created drawing 1 has made a nice job of piecing together a possible design from what they knew about computers – can you spot what is missing though?

Drawing 1.

Drawing 2.

The artist of drawing 2 knows there is a chip inside (made by HP in this case!) and to their credit they know there is code too. Notice that the code is not physically located on the memory or the chip but somewhere in the wires. In general there was some puzzlement about how code related to the computer, as exemplified by the artist of drawing 3 who confessed “I know a computer is full of code and all devices. I am not sure what it looked like so I just scribbled.”

Drawing 3. 

Often, the children spent a while thinking about what is outside the computer and how information might get inside. It was quite common to see pictures in which the artist had folded the page to show this distinction but it was often a mystery how pressing a key or touching the screen might make something happen in the computer. Children who had spent time tinkering with computers at home had an advantage here: “I broke my keyboard once and I saw what was inside. It would send a signal from key to computer to the monitor”.

What the pictures, and subsequent classroom discussions, told me is that the children know the names of components within a computer, and possibly some isolated facts about them. None of the pictures showed accurately how the components work together to perform computation, although the children were ready and willing to reason about this with their classmates. Although some of the children had programmed in the visual programming language Scratch, none of them knew how the commands they wrote would be executed in the hardware inside a computer. One boy, who had been learning about variables in Scratch the previous day, wanted to know whether, if he looked in his computer, he would really see apps with boxes full of variables in them. I love that question because it reveals the mysterious boundary between intangible, invisible information and the small lump of silicon which processes it.

To be clear, I am not criticizing the children, who were curious, interested and made perfectly reasonable inferences based on the facts they picked up in their everyday lives. But I think that computer science educators can do better here. Our discipline is built upon the remarkable fact that we can write instructions in a representation which makes sense to humans and then automatically translate them into an equivalent representation which can be followed by a machine dumbly switching electrical pulses on and off. Children are not going to be able to figure that out for themselves by dissecting old computers or by making the Scratch cat dance. We need to get better at explicitly explaining this in interesting ways.

Children are currently piecing together their everyday experiences with technology with facts that adults tell them to try to make sense of how computers work. This can lead to some confusion, particularly if the adults in their lives are also unsure. One child thought, for example, that if you paid more money, then it would make Wi-Fi stronger. Others were curious about how Wi-Fi works on a train, and whether you really need to stop using your phone on a plane. A student advised the class that if we needed to save space on our phones, then we should delete videos from YouTube. The children, like most Windows users, wanted to know why their computers “freeze”, speculating that it could be because the chip is asleep or that too many people are using Wi-Fi. There was also a sense of wonderment and curiosity. A young boy was fascinated when he read about super computers and wanted to know more: do super computers have really big chips in them? A class of eleven-year-olds gravely debated whether people would be more or less clever if the computer had never been invented. These are the sorts of questions about computers which children want to explore. It’s our job as computer scientists, and as educators, to help them.

Brazil: Hit YouTubers become politicians, and win

Kim Kataguiri is known in Brazil for a lot of things. He’s been called a fascist. He’s been called a fake news kingpin. His organization, Movimento Brasil Livre (MBL) — the Free Brazil Movement — is like the Brazilian Breitbart. Or maybe it’s like the American tea party. Maybe it’s both. Is it a news network? Kataguiri says it isn’t. But it’s not a political party, either. He says MBL is just a bunch of young people who love free market economics and memes.

One thing is very clear: His YouTube channel, the memes, the fake news, and MBL’s army of supporters have helped Kataguiri, 22, become the youngest person ever elected to Congress in Brazil. He’s also trying to become Brazil’s equivalent of speaker of the House.

As the world panicked over whether Brazil’s far-right presidential frontrunner, Jair Bolsonaro, is more of a Trump or a Duterte, MBL pushed forward 16 of its own candidates. Six of them won on the federal level. More at the state and local levels. MBL’s YouTube channel has grown from zero to 1 million subscribers this year. MBL was on the front page of YouTube every day in the month leading up to the election. The plan is to have all of the group’s elected members start their own YouTube channels. Forty percent of MBL’s funding already comes from YouTube ads. MBL-affiliated YouTuber and newly elected state representative Arthur Mamãe Falei personally made $12,000 off his solo channel in October.

As Mamãe Falei simply puts it, “I guarantee YouTubers in Brazil are more influential than politicians.”

Kataguiri’s political awakening is a textbook example of the way algorithms beget more algorithms. During his last year of high school, his teacher started a debate about welfare programs in Brazil. So Kataguiri started googling. He discovered Ron Paul and the Brazilian libertarian YouTuber Daniel Fraga.

“Then I did a video to my teacher and my friends at school to talk about what I had found out,” Kataguiri says. “There was one problem: I posted this video on YouTube. So it was public and it went viral.”

He says people kept asking for more videos, but he didn’t know anything. So he went back to googling, and then made more videos about what he learned. His channel got bigger. He started connecting with other far-right and libertarian YouTubers. Brazil’s libertarian community started connecting on Facebook. Then, in 2013, Ron Paul visited a conference in Brazil, and suddenly the online community became a real-life one.

That’s when MBL started to form. He says the emphasis on economic theory within the libertarian movement was uninspiring. He wanted to start a group that got young people excited. By 2015, his videos were starting to draw a huge audience.

The main MBL Facebook page has about 3 million followers. Since 2014, it’s functioned more or less as the group’s main hub. But Kataguiri says that due to concerns over News Feed algorithm changes and Facebook’s banning of its pages this summer, MBL has begun to diversify. It has about 300,000 Twitter followers and about a half million on Instagram. Kataguiri says he doesn’t know anything about the American far-right Twitter clone Gab, which has recently become big in Brazil. But MBL does have a page there. The real crown jewels of MBL’s digital operation right now are YouTube and WhatsApp.

“First, we get news from somewhere,” he says. “There’s news from Folha de São Paulo, there’s news from Globo, there’s news from anywhere, but we choose the news that the public wants to read. We basically curate.”

Then, he says, they decide how to manipulate that news to fit their message.

“Nowadays, people only read the headline, and they already want to have an opinion before reading the news. Basically, what we offer them is, ‘This is the news, in two phrases — this is what we think about it.’”

And finally, the third step: “Basically something to make people laugh and have an incentive to share it with their friends,” he says.

[…]

Brazil has a history of unorthodox candidates running for office: porn stars, footballers, a guy dressed up like Batman. A TV clown has been reelected a few times. It’s not out of the realm of possibility that a YouTuber who works at a scrap metal yard and gets beaten up at protests could be elected to local office in Brazil. The key difference with do Val is that he got half a million votes. That’s a quarter of his YouTube audience and an absurd number for a state election.

Universal Basic Income: Silicon Valley’s push for our further enslavement

Douglas Rushkoff, long-time open source advocate (and currently a professor of Digital Economics at the City University of New York, Queens College), is calling Universal Basic Income “no gift to the masses, but a tool for our further enslavement.”

Uber’s business plan, like that of so many other digital unicorns, is based on extracting all the value from the markets it enters. This ultimately means squeezing employees, customers, and suppliers alike in the name of continued growth. When people eventually become too poor to continue working as drivers or paying for rides, UBI supplies the required cash infusion for the business to keep operating. When it’s looked at the way a software developer would, it’s clear that UBI is really little more than a patch to a program that’s fundamentally flawed. The real purpose of digital capitalism is to extract value from the economy and deliver it to those at the top. If consumers find a way to retain some of that value for themselves, the thinking goes, you’re doing something wrong or “leaving money on the table.”

Walmart perfected the softer version of this model in the 20th century. Move into a town, undercut the local merchants by selling items below cost, and put everyone else out of business. Then, as sole retailer and sole employer, set the prices and wages you want. So what if your workers have to go on welfare and food stamps. Now, digital companies are accomplishing the same thing, only faster and more completely…. Soon, consumers simply can’t consume enough to keep the revenues flowing in. Even the prospect of stockpiling everyone’s data, like Facebook or Google do, begins to lose its allure if none of the people behind the data have any money to spend. To the rescue comes UBI.

The policy was once thought of as a way of taking extreme poverty off the table. In this new incarnation, however, it merely serves as a way to keep the wealthiest people (and their loyal vassals, the software developers) entrenched at the very top of the economic operating system. Because of course, the cash doled out to citizens by the government will inevitably flow to them… Under the guise of compassion, UBI really just turns us from stakeholders or even citizens to mere consumers. Once the ability to create or exchange value is stripped from us, all we can do with every consumptive act is deliver more power to people who can finally, without any exaggeration, be called our corporate overlords… if Silicon Valley’s UBI fans really wanted to repair the economic operating system, they should be looking not to universal basic income but universal basic assets, first proposed by Institute for the Future’s Marina Gorbis… As appealing as it may sound, UBI is nothing more than a way for corporations to increase their power over us, all under the pretense of putting us on the payroll. It’s the candy that a creep offers a kid to get into the car or the raise a sleazy employer gives a staff member who they’ve sexually harassed. It’s hush money.

Rushkoff’s conclusion? “Whether its proponents are cynical or simply naive, UBI is not the patch we need.”

We hold people with power to account. Why not algorithms?

Robert Jones was driving home through the pretty town of Todmorden, in West Yorkshire, when he noticed the fuel light flashing on the dashboard of his car. He had just a few miles to find a petrol station, which was cutting things rather fine, but thankfully his GPS seemed to have found a short cut – sending him on a narrow winding path up the side of the valley.

Robert followed the machine’s instructions, but as he drove, the road got steeper and narrower. After a couple of miles, it turned into a dirt track, but Robert wasn’t fazed. After all, he thought, he had “no reason not to trust the satnav”.

Just a short while later, anyone who happened to be looking up from the valley below would have seen the nose of Robert’s BMW appearing over the brink of the cliff above, saved from the 100ft drop only by the flimsy wooden fence at the edge he had just crashed into. “It kept insisting the path was a road,” he told the Halifax Courier after the incident. “So I just trusted it. You don’t expect to be taken nearly over a cliff.”

I can imagine Robert was left red-faced by his blunder, but in a way, I think he’s in good company. When it comes to placing blind faith in a piece of software, his mistake was one we’re almost all capable of making. In our urge to automate, in our eagerness to adopt the latest innovations, we appear to have developed a habit of unthinkingly handing over power to machines.

All around us, algorithms provide a kind of convenient source of authority: an easy way to delegate responsibility, a short cut we take without thinking. Who is really going to click through to the second page of Google results every time and think critically about the information that has been served up? Or go to every airline to check if a comparison site is listing the cheapest deals? Or get out a ruler and a road map to confirm that their GPS is offering the shortest route?

But already in our hospitals, our schools, our shops, our courtrooms and our police stations, artificial intelligence is silently working behind the scenes, feeding on our data and making decisions on our behalf. Sure, this technology has the capacity for enormous social good – it can help us diagnose breast cancer, catch serial killers, avoid plane crashes and, as the health secretary, Matt Hancock, has proposed, potentially save lives using NHS data and genomics. Unless we know when to trust our own instincts over the output of a piece of software, however, it also brings the potential for disruption, injustice and unfairness.

If we permit flawed machines to make life-changing decisions on our behalf – by allowing them to pinpoint a murder suspect, to diagnose a condition or take over the wheel of a car – we have to think carefully about what happens when things go wrong.

Back in 2012, a group of 16 Idaho residents with disabilities received some unexpected bad news. The Department of Health and Welfare had just invested in a “budget tool” – a swish piece of software, built by a private company, that automatically calculated their entitlement to state support. It had declared that their care budgets should be slashed by several thousand dollars each, a decision that would put them at serious risk of being institutionalised.

The problem was that the budget tool’s logic didn’t seem to make much sense. While this particular group of people had deep cuts to their allowance, others in a similar position actually had their benefits increased by the machine. As far as anyone could tell from the outside, the computer was essentially plucking numbers out of thin air.

From the inside, this wasn’t far from the truth. It would take four years and a class-action lawsuit to get to the bottom of what was going on, but when the budget tool was eventually handed over for scrutiny, a hint of what we all have to fear from the unrestricted power of machine decision-making was revealed.

The algorithm was junk. The data was riddled with errors. The calculations were so bad that the court would eventually rule its determinations unconstitutional. It had, effectively, been awarding benefits at random. And yet, when dressed up as a slick and glossy new computer programme, the algorithm brought with it an air of authority that was difficult to argue against.

In the days before proper regulation of medicines, you used to be able to bottle up any coloured liquid and make a small fortune selling it as a cure-all. Today, in the still largely unfettered world of AI and algorithms, we’re seeing people make bold, unsubstantiated and sometimes irresponsible claims about their inventions.

There’s only one way to prevent this. I think it’s time we started treating machines as we would any other source of power. I would like to propose a system of regulation for algorithms, and perhaps a good place to start would be with Tony Benn’s five simple questions, designed for powerful people, but equally applicable to modern AI:

“What power have you got?
“Where did you get it from?
“In whose interests do you use it?
“To whom are you accountable?
“How do we get rid of you?”

Because, ultimately, we can’t just think of algorithms in isolation. We have to think of the failings of the people who design them – and the danger to those they are supposedly designed to serve.