Resources

TikTokers Are Accused of Starting Forest Fires For Views

Humaira Asghar, known as “Dolly” to her 11.5 million TikTok fans, faces charges for allegedly setting a forest fire while shooting a TikTok video in Pakistan’s capital city, Islamabad. In the 11-second clip, which has since been taken down, Asghar walks dramatically, in slow motion, down a forested hillside engulfed in flames while a trending pop song that mentions “setting fire” plays in the background. The caption posted with the video, shot in Margalla Hills National Park, reads, “fire erupts wherever I am.” Asghar is not the only Pakistani TikToker accused of setting a forest fire for views; officials say it is an emerging trend in a country suffering through a record-breaking heatwave.

“Young people desperate for followers are setting fire to our forests during this hot and dry season,” tweeted Islamabad Wildlife Management Board chairperson Rina S Khan Satti. “These psychotic young people have to be caught and put behind bars immediately.” Earlier this month, a man in Abbottabad city was arrested for intentionally starting a forest fire to use as a backdrop in his video. In another recently released video, two men are seen appearing to start a forest fire and then running away from it while music plays in the background.


Why the Past 10 Years of American Life Have Been Uniquely Stupid

Jonathan Haidt, a social psychologist at New York University’s Stern School of Business, argues in the Atlantic that social-media platforms “trained users to spend more time performing and less time connecting.” But that was just the beginning.

He now believes this ultimately fueled a viral dynamic leading to “the continual chipping-away of trust” in a democracy which “depends on widely internalized acceptance of the legitimacy of rules, norms, and institutions.”
The most recent Edelman Trust Barometer (an international measure of citizens’ trust in government, business, media, and nongovernmental organizations) showed stable and competent autocracies (China and the United Arab Emirates) at the top of the list, while contentious democracies such as the United States, the United Kingdom, Spain, and South Korea scored near the bottom (albeit above Russia)…. Mark Zuckerberg may not have wished for any of that. But by rewiring everything in a headlong rush for growth — with a naive conception of human psychology, little understanding of the intricacy of institutions, and no concern for external costs imposed on society — Facebook, Twitter, YouTube, and a few other large platforms unwittingly dissolved the mortar of trust, belief in institutions, and shared stories that had held a large and diverse secular democracy together.

In the last 10 years, the article argues, the general public — at least in America — became “uniquely stupid.” And he’s not just speaking about the fight between the political right and left: it happened within both factions, “as well as within universities, companies, professional associations, museums, and even families.” The article quotes former CIA analyst Martin Gurri’s 2019 comment that the digital revolution has fragmented the public into hostile shards that are “mostly people yelling at each other and living in bubbles of one sort or another.”

The article concludes that by now U.S. politics has entered a phase where truth “cannot achieve widespread adherence” and thus “nothing really means anything anymore–at least not in a way that is durable and on which people widely agree.” It even contemplates the idea of “highly believable” disinformation generated by AI, possibly by geopolitical adversaries, ultimately evolving into what the research manager at the Stanford Internet Observatory has described as “an Information World War in which state actors, terrorists, and ideological extremists leverage the social infrastructure underpinning everyday life to sow discord and erode shared reality.”


Because of the Internet, ‘We Are All Cranks Now’

…from the characters of Dickens to Grandpa Simpson, recently it seems that the figure of the crank has dropped away from the public imagination. Now, this could be that the art of penning demented letters to metropolitan newspapers on a semi-regular basis may be dying out with the last generation of people to use lead toothpaste, but I don’t think that’s it. I think we’ve lost sight of them not because they went away, but because they became unremarkable. We are all cranks now.

Online has drastically lowered the barriers of entry into the Order of Crankhood. Time it was when if you really wanted to get publicly steamed about something you’d read, you’d first have to buy a newspaper, read that newspaper, get steamed, go to your writing desk, jot down your letter, put that letter in an envelope, find a stamp, and then walk to the post office. And even after doing all that, there was no guarantee that it would be published. Being a crank even 30 years ago took a kind of monastic dedication to the high art of being a weirdo, but nowadays, saying something deeply unwell about an article you don’t like to thousands of people is as trivial as ordering a coffee.

And if the internet in general has lowered these barriers, social media has gone a step further. People who never set out to be cranks in the first place are actively incentivized to become them. This isn’t just because whenever you post you get a thrilling little tally of all the people who agree with you; it’s because of how these platforms are designed to maximize engagement. The ideal poster for social media companies is one who posts often, who posts stridently, and who responds to as much stuff as possible.

So, to be on Twitter or Facebook is to sit in a room while someone holds up random pieces of stimulus and demands your appraisal of each. What do we reckon of this? Okay, how about this? And this? What’s your view here? Were you to design a machine to turn otherwise normal, healthy people into cranks — a kind of crankification engine, if you like — you would probably arrive at something like these platforms.


The White House is briefing TikTok stars about the war in Ukraine

This week, the administration began working with Gen Z For Change, a nonprofit advocacy group, to help identify top content creators on the platform to orchestrate a briefing aimed at answering questions about the conflict and the United States’ role in it.

The briefing was led by Matt Miller, a special adviser for communications at the White House National Security Council, and White House press secretary Jen Psaki. The Washington Post obtained a recording of the call, and in it, Biden officials stressed the power these creators had in communicating with their followers. “We recognize this is a critically important avenue in the way the American public is finding out about the latest,” said the White House director of digital strategy, Rob Flaherty, “so we wanted to make sure you had the latest information from an authoritative source.”


Some Amazon Ring Customers Demand Drivers Dance, Then Post Videos Online

Some Amazon customers are now explicitly asking the company’s drivers to deliver a performance along with the package. They are posting signs to their front doors or tapping unusual delivery instructions into the Amazon app in the hopes of capturing a spectacle on their surveillance feeds…. [T]hese customers proceed to shamelessly post the evidence to social media. Sometimes the videos are spun into an online sleuthing opportunity, as the TikToker asks viewers to hunt for the dancing driver’s identity. And they represent just a slice of the “Amazon driver approaches the door” genre of internet video… But whether the video is pitched as heartwarming or sadistic, the customer is enlisting the driver into a nonconsensual pageant that doubles as a performance review. As Jackson reported, Amazon drivers who fail to fulfill customer requests risk demerits.

Amazon encourages customers to publicize their Ring videos on its safety-minded social network, Neighbors, and makes it easy to share them more widely, too. One of Ring’s marketing lines is “A lot happens at your front door,” and this is meant as both a warning and an invitation — though it suggests it is too dangerous to venture outside, it also implies that a whole world of entertainment is to be found through eyeing your surveillance feed. The official Ring YouTube channel is filled with user-generated videos that help inject its growing spy network with warmth and surprise, as the cameras catch spontaneous footage of good Samaritans, grazing cows and, of course, the company’s drivers caught in kooky scenarios, like in this entry from December: “Even a Giant Bear Will Not Stop This Amazon Driver From Making His Delivery.”

Amazon obsessively surveils its workers through dashcams, smartphone monitors and machine-generated report cards, and these videos implicate the customer in that exercise, making the violation of driver privacy into a kind of internet-wide contest. The caption for Amazon’s bear video focuses on the heroic actions of a Ring user named Josh, who supposedly aided the delivery driver’s safety by “watching his exit the whole time” on the security camera…. Its routes are often serviced by precarious gig workers, its quotas are too punishing to allow for socializing, and all potential human interactions have been replaced by one-way surveillance. In many of these TikTok videos, Amazon workers literally run in and out of the frame. If delivery drivers were once lightly teased or frequently ogled, now they are simply dehumanized, plugged into machine-run networks and expected to move product with robotic efficiency. The compulsory dance trend on TikTok suggests that customers, too, have come to see drivers as programmable….

On an even more depressing corner of Amazon TikTok, customers post videos not to backwardly celebrate drivers but just to shame them for delivering the package with less than the customer’s expected level of service.


The Internet Gave Rise to ‘Cancel Culture OCD’

Today, the phrase “cancel culture” triggers a wide range of responses: concern, frustration, a bit of eye-rolling. There are endless debates about what it is (accountability or censorship?), what’s driving it (context collapse, perhaps, or a new “woke” religion), and whether it even exists. Few public figures have been successfully canceled; even fewer have stayed canceled. Yet online life remains suffused with a distinct air of paranoia and an often-pacifying doubt—and perhaps focusing on the “cancel” part of cancel culture distracts from its rippling effects in our daily lives. The old saying goes, “If you’ve done nothing wrong, you have nothing to fear.” But the experiences of people with cancellation OCD reveal another truth: Scrutinize yourself too closely and you can always find something wrong.


10-year-old boy lives life as a YouTube advertising sensation

There’s no one way to describe what Ryan Kaji, who is now 10 years old, has done across his multiple YouTube channels, cable television shows and live appearances: In one video, he is giving you a tour of the Legoland Hotel; in another, he splashes around in his pool to introduce a science video about tsunamis. But for years, what he has mostly done is play with toys: Thomas the Tank Engine, “Paw Patrol” figures, McDonald’s play kitchens. A new toy and a new video for almost every day of the week, adding up to an avalanche of content that can overwhelm your child’s brain, click after click. Kaji has been playing with toys on camera since Barack Obama was in the White House.

Here are a few of the companies that are now paying him handsomely for his services: Amazon, Walmart, Nickelodeon, Skechers. Ryan also has 10 separate YouTube channels, which together make up “Ryan’s World” [31.2M subscribers], a content behemoth whose branded merchandise took in more than $250 million last year. Even conservative estimates suggest that the Kaji family’s take exceeds $25 million annually.


Is the Internet Changing the Way We Remember?

“A study in 2019 found that the spatial memory used for navigating through the world tends to be worse for people who’ve made extensive use of map apps and GPS devices…” reports NBC News.

But that’s just the beginning, according to Adrian Ward, who studies psychology at the University of Texas at Austin. NBC says Ward’s research suggests “People who lean on a search engine such as Google may get the right answers but they can also end up with a wrong idea of how strong their own memory is.”

In Ward’s research, published in October in the journal Proceedings of the National Academy of Sciences, he used a series of eight experiments to test how people used and thought about their own knowledge as they completed short general-knowledge quizzes. Some participants had access to Google while answering the questions — “What is the most widely spoken language in the world?” was one — while others did not. They also completed surveys. He found that people who used Google were more confident in their own ability to think and remember, and erroneously predicted that they would know significantly more in future quizzes without the help of the internet. Ward attributed that to Google’s design: simple and easy, less like a library and more like a “neural prosthetic” that simulates a search in a human brain.

“The speed makes it so you never understand what you don’t know,” Ward said.

The findings echo and build on earlier research, including a widely cited 2011 paper on the “Google effect”: a phenomenon in which people are less likely to remember information if they know they can find it later on the internet…. In a review of recent studies in the field, published in September, researchers at Duke University found that the “externalization” of memories into digital spheres “changes what people attend to and remember about their own experiences.” Digital media is new and different, they wrote, because of factors such as how easily images are edited or the huge number of memories at people’s fingertips.

Each photographic cue means another chance for a memory to be “updated,” maybe with a false impression, and each manipulation of a piece of social media content is a chance for distortion, wrote the researchers, doctoral student Emmaline Drew Eliseev and Elizabeth Marsh, a professor of psychology and neuroscience and director of a lab dedicated to studying memory.


Notifications Are Driving Us Crazy

We’re on alert overload. Stray comments and offhand requests once shouted across the office now blink and buzz at us from Microsoft Teams and Slack. Our communication has grown fragmented, spread across myriad apps we have to learn, conform to, remember to check.

Meanwhile, personal texts and social-media mentions have bled into the workday after all this time at home, adding another layer of distraction to our time on the clock. Why put your phone on silent if the boss isn’t hovering over you? Our culture has evolved to accommodate rapid communication, says Gloria Mark, a professor of informatics at the University of California, Irvine, and it can be mentally taxing. Many of us struggle to conjure up that brilliant thought that hit right before the notification burst in. “Your memory is just overflowing with information,” she says.

It doesn’t make for great circumstances for getting work done, but there are ways individuals, managers and organizations can contend with the onslaught. Dr. Mark’s research finds people switch screens an average of 566 times a day. Half the time we’re interrupted; the other half we pull ourselves away. Breaks — even mindless ones like scrolling Facebook — can be positive, replenishing our cognitive resources, Dr. Mark says. But when something external diverts our focus, it takes us an average of 25 minutes and 26 seconds to get back to our original task, she has found. (Folks often switch to different projects in between.) And it stresses us out. Research using heart monitors shows that the interval between people’s heartbeats becomes more regular when they’re interrupted, a sign they’re in fight-or-flight mode. The onus is on teams and organizations to create new norms, Dr. Mark says. If individuals just up and turn off their notifications, they’ll likely be penalized for missing information. Instead, managers should create quiet hours where people aren’t expected to respond. “It’s a matter of relearning how to work,” she says.
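
Taken at face value, those figures imply an impossible amount of lost time, which is itself revealing. Here is a minimal back-of-envelope sketch: the numbers are the article’s, while the multiplication (and its absurd result) is ours.

```python
# Back-of-envelope arithmetic using only the figures quoted above.
SCREEN_SWITCHES_PER_DAY = 566    # average screen switches per day (Dr. Mark)
EXTERNAL_FRACTION = 0.5          # half of switches follow an external interruption
RECOVERY_MINUTES = 25 + 26 / 60  # 25 minutes 26 seconds to refocus, on average

external_interruptions = SCREEN_SWITCHES_PER_DAY * EXTERNAL_FRACTION
naive_recovery_hours = external_interruptions * RECOVERY_MINUTES / 60

print(f"External interruptions per day: {external_interruptions:.0f}")
print(f"Naive total refocus time: {naive_recovery_hours:.0f} hours")
# ~283 interruptions and ~120 hours of "recovery" cannot fit in one day,
# so recovery windows must overlap: people stack new tasks inside them,
# as the parenthetical about switching to different projects suggests.
```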


How China Uses Western Influencers As Pawns In Its Propaganda War

China is recruiting YouTubers to report on the country in a positive light and counter the West’s increasingly negative perceptions. “The videos have a casual, homespun feel. But on the other side of the camera often stands a large apparatus of government organizers, state-controlled news media and other official amplifiers — all part of the Chinese government’s widening attempts to spread pro-Beijing messages around the planet,” the report says. “State-run news outlets and local governments have organized and funded pro-Beijing influencers’ travel, according to government documents and the creators themselves. They have paid or offered to pay the creators. They have generated lucrative traffic for the influencers by sharing videos with millions of followers on YouTube, Twitter and Facebook.”

Typically, the Chinese government support comes in the form of free organized trips around China, particularly in Xinjiang. By showing the influencers a carefully sanitized image of life in the country, the authorities don’t need to worry about negative stories. They simply make it easy for the YouTubers to present images of jolly peasants and happy city-dwellers, because that’s all they are allowed to see. One of the authors of the New York Times piece, Paul Mozur, noted on Twitter another important way that the authorities are able to help their influencer guests. Once produced, the China-friendly videos are boosted massively by state media and diplomatic Facebook and Twitter accounts: “One video by Israeli influencer Raz Gal-Or portraying Xinjiang as ‘totally normal’ was shared by 35 government connected accounts with a total of 400 million followers. Many were Chinese embassy Facebook accounts, which posted about the video in numerous languages.”

A new report from the Australian Strategic Policy Institute, “Borrowing mouths to speak on Xinjiang,” has some more statistics on this practice: “Our data collection has found that, between January 2020 and August 2021, 156 Chinese state-controlled accounts on US-based social media platforms have published at least 546 Facebook posts, Twitter posts and shared articles from [China Global Television Network], Global Times, Xinhua or China Daily websites that have amplified Xinjiang-related social media content from 13 influencer accounts. More than 50% of that activity occurred on Facebook.” Mozur says that the use of Western influencers in this way also allows employees of Beijing-controlled media, like the journalist Li Jingjing, to present themselves as independent YouTubers. On Twitter, however, she is labeled as “China state-affiliated media.” The Australian Strategic Policy Institute sees this as part of a larger problem (pdf): “labelling schemes adopted by some video-sharing and social media platforms to identify state-affiliated accounts are inconsistently applied to media outlets and journalists working for those outlets. In addition, few platforms appear to have clear policies on content from online influencers or vloggers whose content may be facilitated by state-affiliated media, through sponsored trips, for example.”

According to Mozur, China’s state broadcaster is actively looking for more influencers, offering bonuses and publicity for those who sign up. In the US, China’s consulate general is paying $300,000 to a firm to recruit influencers for the Winter Olympics, ranging from Celebrity Influencers with millions of Instagram or TikTok followers, to Nano Influencers, with merely a few thousand. The ultimate goal of deploying these alternative voices is not to disprove negative stories appearing in Western media, but something arguably worse, as the New York Times report explains: “China is the new super-abuser that has arrived in global social media,” said Eric Liu, a former content moderator for Chinese social media. “The goal is not to win, but to cause chaos and suspicion until there is no real truth.”


AI influencers are taking over


How Facebook and Google fund global misinformation

MIT’s Technology Review shares data from a Facebook-run tool called CrowdTangle. It shows that by 2018, in the nation of Myanmar (population: 53 million), “all the engagement had instead gone to fake news and clickbait websites.

“In a country where Facebook is synonymous with the internet, the low-grade content overwhelmed other information sources.”
[T]he sheer volume of fake news and clickbait acted like fuel on the flames of already dangerously high ethnic and religious tensions. It shifted public opinion and escalated the conflict, which ultimately led to the death of 10,000 Rohingya, by conservative estimates, and the displacement of 700,000 more. In 2018, a United Nations investigation determined that the violence against the Rohingya constituted a genocide and that Facebook had played a “determining role” in the atrocities. Months later, Facebook admitted it hadn’t done enough “to help prevent our platform from being used to foment division and incite offline violence.” Over the last few weeks, the revelations from the Facebook Papers, a collection of internal documents provided to Congress and a consortium of news organizations by whistleblower Frances Haugen, have reaffirmed what civil society groups have been saying for years: Facebook’s algorithmic amplification of inflammatory content, combined with its failure to prioritize content moderation outside the US and Europe, has fueled the spread of hate speech and misinformation, dangerously destabilizing countries around the world.

But there’s a crucial piece missing from the story. Facebook isn’t just amplifying misinformation.

The company is also funding it.

An MIT Technology Review investigation, based on expert interviews, data analyses, and documents that were not included in the Facebook Papers, has found that Facebook and Google are paying millions of ad dollars to bankroll clickbait actors, fueling the deterioration of information ecosystems around the world.

Facebook pays them for permission to open their content within Facebook’s app (where Facebook controls the advertising) rather than having users click through to the publisher’s own website, reports Technology Review:
Early on, Facebook performed little quality control on the types of publishers joining the program. The platform’s design also didn’t sufficiently penalize users for posting identical content across Facebook pages — in fact, it rewarded the behavior. Posting the same article on multiple pages could as much as double the number of users who clicked on it and generated ad revenue. Clickbait farms around the world seized on this flaw as a strategy — one they still use today… Clickbait actors cropped up in Myanmar overnight. With the right recipe for producing engaging and evocative content, they could generate thousands of U.S. dollars a month in ad revenue, or 10 times the average monthly salary — paid to them directly by Facebook. An internal company document, first reported by MIT Technology Review in October, shows that Facebook was aware of the problem as early as 2019… At one point, as many as 60% of the domains enrolled in Instant Articles were using the spammy writing tactics employed by clickbait farms, the report said…

75% of users who were exposed to clickbait content from farms run in Macedonia and Kosovo had never followed any of the pages. Facebook’s content-recommendation system had instead pushed it into their news feeds.
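
To make that incentive concrete, here is a hypothetical toy model of the dynamic described above. It is not Facebook’s actual payout logic: the baseline clicks and revenue-per-click values are invented for illustration, and the only grounded detail is the article’s report that cross-posting identical content could as much as double the clicks.

```python
# Hypothetical sketch of the duplicate-posting incentive described above.
# Not Facebook's real payout formula: baseline_clicks and revenue_per_click
# are invented. The grounded detail is that posting the same article across
# multiple pages "could as much as double" the clicks that earn ad revenue.

def monthly_ad_revenue(baseline_clicks: int, pages: int,
                       revenue_per_click: float = 0.005) -> float:
    # Assume each extra page adds diminishing clicks, capped at 2x overall.
    multiplier = min(1 + 0.25 * (pages - 1), 2.0)
    return baseline_clicks * multiplier * revenue_per_click

print(f"Single page: ${monthly_ad_revenue(200_000, pages=1):,.0f}/month")
print(f"Ten pages:   ${monthly_ad_revenue(200_000, pages=10):,.0f}/month")
# Doubling clicks doubles payout -- and where a few thousand dollars a month
# is 10 times the average salary, that is a powerful incentive to spam.
```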

Technology Review notes that Facebook now pays billions of dollars to the publishers in their program. It’s a long and detailed article, which ultimately concludes that the problem “is now happening on a global scale.”
Thousands of clickbait operations have sprung up, primarily in countries where Facebook’s payouts provide a larger and steadier source of income than other forms of available work. Some are teams of people while others are individuals, abetted by cheap automated tools that help them create and distribute articles at mass scale…

Google is also culpable. Its AdSense program fueled the Macedonia- and Kosovo-based farms that targeted American audiences in the lead-up to the 2016 presidential election. And it’s AdSense that is incentivizing new clickbait actors on YouTube to post outrageous content and viral misinformation.

Reached for comment, a Facebook spokesperson told Technology Review that they’d misunderstood the issue. And the spokesperson also said “we’ve invested in building new expert-driven and scalable solutions to these complex issues for many years, and will continue doing so.”

Google’s spokesperson confirmed examples in the article violated their own policies and removed the content, adding “We work hard to protect viewers from clickbait or misleading content across our platforms and have invested heavily in systems that are designed to elevate authoritative information.”


Uber and Lyft equal more pollution

The efficiencies of ride hailing were supposed to all but end car ownership; instead vehicle sales are on the rise again this year, after a down year in 2020. There is also evidence that Uber and Lyft may actually spur an increase in car sales in cities where they begin operating.

Passengers have largely eschewed pooled rides and public transit in favor of private trips, leading to downtown bottlenecks in cities like San Francisco.

Drivers circling around waiting for, or fetching, their next passenger is known as deadheading. Deadheading, Lyft and Uber estimated in 2019, accounts for about 40 percent of rideshare miles driven in six American cities. Researchers at Carnegie Mellon estimated that driving without a passenger leads to a roughly 20 percent overall increase in fuel consumption and greenhouse gas emissions compared to trips made by personal vehicles.

The researchers also found that switching from a private car to on-demand rides, like an Uber or Lyft, increased the external costs of a typical trip by 30 to 35 percent, or roughly 35 cents on average, because of the added congestion, collisions and noise from ridesharing services.
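
Those two numbers imply a baseline worth spelling out. A minimal sketch, assuming only the figures in the paragraph above: if roughly 35 cents amounts to a 30 to 35 percent increase, the typical trip must already impose about a dollar in external costs.

```python
# Derive the implied baseline from the figures above: if ~35 cents of added
# congestion, collision and noise costs represents a 30-35% increase, what
# external cost did the typical private-car trip impose to begin with?

added_cost_dollars = 0.35
increase_low, increase_high = 0.30, 0.35

baseline_high = added_cost_dollars / increase_low   # ~= $1.17
baseline_low = added_cost_dollars / increase_high   # = $1.00

print(f"Implied baseline external cost per trip: "
      f"${baseline_low:.2f} to ${baseline_high:.2f}")
```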


How Far Can You Go to Resist Filmers of a Viral Video?

Recently I saw eight seconds of video that capture this problem in its most extreme form. A boy and a girl, who appear to be of high school age, are walking into Panda Express when a third teenager with blond hair stops them in the doorway. He brings with him the energy of the hustler or the man-on-the-street interview host, and the couple are temporarily frozen, caught between suspicion and courtesy. It is a space where things could go either way. “Hey, hold on, excuse me — I have something really important to ask you,” the blond kid says to the girl. “The moment I saw you, my eyes were just — oh, my God, I love you, please could — bleagh!” The “bleagh” is the sound he makes when the other boy punches him in the face…

But perhaps what is most remarkable is the distinct moment of resignation that he and his girlfriend share when they realize what the blond kid is doing. Around the time he gets to “my eyes,” she turns away and steps inside, while Overalls Kid calmly sets his smoothie on the ground in preparation to jack his interlocutor in the mouth. The sound of the impact is meaty. The video ends with both of them stumbling out of the frame, Blond Kid reeling and Overalls Kid winding up for another blow. It’s an efficiently cut bit of action that rewards repeat viewings, but it left me with one question: How do we feel about that punch?

I think we can agree that a punch would not be justified if Blond Kid were professing his love sincerely. But he isn’t. He’s professing his love while an unidentified fourth party records the whole thing, presumably as part of the “hit on another guy’s girlfriend” internet challenge. In this context, he is using other people as props, a bad behavior that society should discourage. But what are we willing to condone in order to discourage it? Our collective culture has just begun to decide how we feel about this kind of activity, which has been invented by new technology and will only become more prevalent in the future.

The article ultimately argues that internet video apps belong to Generation Z “the way heroin belongs to junkies. Seen from this perspective, Overalls Kid is part of a history of violent resistance to foreign influence that Americans will recognize in everything from the Boston Tea Party to Al Qaeda to the Ewoks.”

“Our reams of fretting essays about how much the kids love phones tend to ignore who gave them phones in the first place.”


Facebook is Ditching Plans To Make an Interface That Reads the Brain

The spring of 2017 may be remembered as the coming-out party for Big Tech’s campaign to get inside your head. That was when news broke of Elon Musk’s new brain-interface company, Neuralink, which is working on how to stitch thousands of electrodes into people’s brains. Days later, Facebook joined the quest when it announced that its secretive skunkworks, named Building 8, was attempting to build a headset or headband that would allow people to send text messages by thinking — tapping them out at 100 words per minute. The company’s goal was a hands-free interface anyone could use in virtual reality. “What if you could type directly from your brain?” asked Regina Dugan, a former DARPA officer who was then head of the Building 8 hardware division. “It sounds impossible, but it’s closer than you realize.”

Now the answer is in — and it’s not close at all. Four years after announcing a “crazy amazing” project to build a “silent speech” interface using optical technology to read thoughts, Facebook is shelving the project, saying consumer brain-reading still remains very far off. In a blog post, Facebook said it is discontinuing the project and will instead focus on an experimental wrist controller for virtual reality that reads muscle signals in the arm. “While we still believe in the long-term potential of head-mounted optical [brain-computer interface] technologies, we’ve decided to focus our immediate efforts on a different neural interface approach that has a nearer-term path to market,” the company said. Facebook’s brain-typing project had led it into uncharted territory — including funding brain surgeries at a California hospital and building prototype helmets that could shoot light through the skull — and into tough debates around whether tech companies should access private brain information. Ultimately, though, the company appears to have decided the research simply won’t lead to a product soon enough.


Handwriting Is Better Than Typing When Learning a New Language, Study Finds

Researchers tasked 42 adult volunteers with learning the Arabic alphabet from scratch: some through writing it out on paper, some through typing it out on a keyboard, and some through watching and responding to video instructions. Those in the handwriting group not only learned the unfamiliar letters more quickly, but they were also better able to apply their new knowledge in other areas — by using the letters to make new words and to recognize words they hadn’t seen before, for example. While writing, typing, and visual learning were effective at teaching participants to recognize Arabic letters — learners made very few mistakes after six exercise sessions — on average, the writing group needed fewer sessions to get to a good standard.

Researchers then tested the groups to see how the learning could be generalized. In every follow-up test, using skills they hadn’t been trained on, the writing group performed the best: naming letters, writing letters, spelling words, and reading words. The research shows that the benefits of teaching through handwriting go beyond better penmanship: There are also advantages in other areas of language learning. It seems as though the knowledge gets more firmly embedded through writing.


YouTube’s Recommender AI Still a Horror Show, Finds Major Crowdsourced Study

For years YouTube’s video-recommending algorithm has stood accused of fuelling a grab bag of societal ills by feeding users an AI-amplified diet of hate speech, political extremism and/or conspiracy junk/disinformation for the profiteering motive of trying to keep billions of eyeballs stuck to its ad inventory. And while YouTube’s tech giant parent Google has, sporadically, responded to negative publicity flaring up around the algorithm’s antisocial recommendations — announcing a few policy tweaks or limiting/purging the odd hateful account — it’s not clear how far the platform’s penchant for promoting horribly unhealthy clickbait has actually been rebooted. The suspicion remains that it’s nowhere near far enough.

New research published today by Mozilla backs that notion up, suggesting YouTube’s AI continues to puff up piles of “bottom-feeding”/low-grade/divisive/disinforming content — stuff that tries to grab eyeballs by triggering people’s sense of outrage, sowing division/polarization or spreading baseless/harmful disinformation — which in turn implies that YouTube’s problem with recommending terrible stuff is indeed systemic; a side effect of the platform’s rapacious appetite to harvest views to serve ads. That YouTube’s AI is still — per Mozilla’s study — behaving so badly also suggests Google has been pretty successful at fuzzing criticism with superficial claims of reform. The mainstay of its deflective success here is likely the primary protection mechanism of keeping the recommender engine’s algorithmic workings (and associated data) hidden from public view and external oversight — via the convenient shield of “commercial secrecy.” But regulation that could help crack open proprietary AI blackboxes is now on the cards — at least in Europe.


YouTube Stars Were Offered Money to Spread Vaccine Misinformation

“A mysterious marketing agency secretly offered to pay social media stars to spread disinformation about Covid-19 vaccines,” reports the BBC.

“Their plan failed when the influencers went public about the attempt to recruit them.”
An influencer marketing agency called Fazze offered to pay [Mirko Drotschmann, a German YouTuber and journalist] to promote what it said was leaked information that suggested the death rate among people who had the Pfizer vaccine was almost three times that of the AstraZeneca jab. The information provided wasn’t true. It quickly became apparent to Mirko that he was being asked to spread disinformation to undermine public confidence in vaccines in the middle of a pandemic. “I was shocked,” says Mirko, “then I was curious: what’s behind all that?” In France, science YouTuber Léo Grasset received a similar offer. The agency offered him 2,000 euros if he would take part.

Fazze said it was acting for a client who wished to remain anonymous…

Both Léo and Mirko were appalled by the false claims. They pretended to be interested in order to try to find out more and were provided with detailed instructions about what they should say in their videos. In stilted English, the brief instructed them to “Act like you have the passion and interest in this topic.” It told them not to mention the video had a sponsor — and instead pretend they were spontaneously giving advice out of concern for their viewers… Since Léo and Mirko blew the whistle, at least four other influencers in France and Germany have gone public to reveal they also rejected Fazze’s attempts to recruit them.

But German journalist Daniel Laufer has identified two influencers who may have taken up the offer.

But who’s behind the mysterious influencer marketing agency?
Fazze is a part of AdNow, which is a digital marketing company, registered in both Russia and the UK. The BBC has made multiple attempts to contact AdNow by phone, email and even a letter couriered to their Moscow headquarters, but they have not responded. Eventually we managed to contact Ewan Tolladay, one of two directors of the British arm of AdNow – who lives in Durham. Mr. Tolladay said he had very little to do with Fazze — which he said was a joint venture between his fellow director — a Russian man called Stanislav Fesenko — and another person whose identity he didn’t know… Both the French and German authorities have launched investigations into Fazze’s approaches to influencers. But the identity of the agency’s mystery client remains unclear.

There has been speculation about the Russian connections to this scandal and the interests of the Russian state in promoting its own vaccine — Sputnik V.

French YouTuber Léo Grasset believes we’ll see more attempts to manipulate public opinion, especially young people — apparently because it’s incredibly easy.

“Just spend the same money on TikTok creators, YouTube creators,” he tells the BBC. “The whole ecosystem is perfectly built for maximum efficiency of disinformation right now.”


Facebook Tests Prompts That Ask Users If They’re Worried a Friend is ‘Becoming an Extremist’

Some Facebook users in the United States are being served a prompt that asks if they are worried that someone they know might be becoming an extremist. Others are being notified that they may have been exposed to extremist content.

It is all part of a test the social media company is running that stems from its Redirect Initiative, which aims to combat violent extremism, Andy Stone, a Facebook spokesperson, told CNN. Screen shots of the alerts surfaced on social media Thursday. “This test is part of our larger work to assess ways to provide resources and support to people on Facebook who may have engaged with or were exposed to extremist content, or may know someone who is at risk,” Stone said. “We are partnering with NGOs and academic experts in this space and hope to have more to share in the future,” Stone added. One of the alerts, a screen grab of which made the rounds on social media Thursday, asks users, “Are you concerned that someone you know is becoming an extremist?” “We care about preventing extremism on Facebook,” explained that alert, according to a screen grab posted on social media. “Others in your situation have received confidential support.”


The Facebook Loophole that Lets World Leaders Deceive and Harass Their Citizens

Facebook has repeatedly allowed world leaders and politicians to use its platform to deceive the public or harass opponents despite being alerted to evidence of the wrongdoing. The Guardian has seen extensive internal documentation showing how Facebook handled more than 30 cases across 25 countries of politically manipulative behavior that was proactively detected by company staff. The investigation shows how Facebook has allowed major abuses of its platform in poor, small and non-western countries in order to prioritize addressing abuses that attract media attention or affect the US and other wealthy countries. The company acted quickly to address political manipulation affecting countries such as the US, Taiwan, South Korea and Poland, while moving slowly or not at all on cases in Afghanistan, Iraq, Mongolia, Mexico and much of Latin America.
