Resources

Neoliberalism Has Poisoned Our Minds, Study Finds

The dominance of neoliberalism is turning societies against income equality.

At least, that’s according to a study published Tuesday in Perspectives on Psychological Science. A team of researchers at New York University and the American University of Beirut analyzed roughly 20 years of data from more than 160 countries and found that the dominance of neoliberalism across social and economic institutions has, in turn, ingrained a widespread acceptance of income inequality in our value systems.

“Our institutions, policies, and laws not only structure our social life but also have a great influence on the kind of people and society we become,” Shahrzad Goudarzi, a Ph.D. candidate at NYU and lead author on the paper, said in a press release.

Goudarzi and her team set out to test whether conservative British Prime Minister Margaret Thatcher’s 1981 proclamation that economic and political systems can shape “the heart and soul” is indeed true. They defined neoliberalism as the “dominant socioeconomic approach” and the root of “privatization, abolition of the welfare state, and curtailment of redistributive programs,” which has held sway from the 1970s to the present day. They measured the strength of a nation’s neoliberalism using the Economic Freedom Index, a metric crafted by the Fraser Institute—a Canadian libertarian think tank—which measures items like “size of government,” “regulation of business, credit, and labor,” and “freedom to trade internationally.”

They evaluated psychological attitudes toward inequality using results from the World Values Survey, taken roughly every four years, which asked respondents around the world direct questions about their agreement with statements like, “We need larger income differences as incentives for individual effort,” and “incomes should be made more equal.”

Their analysis found a correlation between the embrace of neoliberalism and the prominence of what social psychology scholars call “equity-based reasoning,” or a preference for merit over a preference for equality: the line of thinking in which material outcomes, like payment, wealth, and social status, should be proportional to inputs, like productivity, effort, ability, and time. In short, the dominance of neoliberalism has promoted the belief that the wealthy have earned their place in society just as much as the poor have earned theirs.

San Francisco Police Are Using Driverless Cars as Mobile Surveillance Cameras

For the last five years, driverless car companies have been testing their vehicles on public roads. Laden with a variety of sensors, including video cameras that capture everything going on around them, these vehicles constantly roam neighborhoods in order to operate safely and to analyze the instances where they don’t.

While the companies themselves, such as Alphabet’s Waymo and General Motors’ Cruise, tout the potential transportation benefits their services may one day offer, they don’t publicize another use case, one that is far less hypothetical: Mobile surveillance cameras for police departments.

The use of AVs as an investigative tool echoes how Ring, a doorbell and home security company owned by Amazon, became a key partner with law enforcement around the country by turning individual consumer products into a network of cameras with comprehensive coverage of American neighborhoods easily accessible to police. Police departments around the country use automatic license plate readers (ALPRs) to track the movements of vehicles. The EFF has sued the SFPD for accessing business improvement district live cameras to spy on protestors.

Virginia Police Routinely Use Secret GPS Pings To Track People’s Cell Phones

The nonprofit online news site Virginia Mercury investigated its state’s police departments’ “real-time location warrants,” which are “addressed to telephone companies, ordering them to regularly ping a customer’s phone for its GPS location and share the results with police.” Public records requests submitted to a sampling of 18 police departments around the state found officers used the technique to conduct more than 7,000 days’ worth of surveillance in 2020. Court records show the tracking efforts spanned cases ranging from high-profile murders to minor larcenies…. Seven departments responded that they did not have any relevant billing records, indicating they don’t use the technique. Only one of the departments surveyed, Alexandria, indicated it had an internal policy governing how its officers use cellphone tracking, but a copy of the document provided by the city was entirely redacted….

Drug investigations accounted for more than 60 percent of the search warrants taken out in the two jurisdictions. Larcenies were the second most frequent category. Major crimes like murders, rapes and abductions made up a fraction of the tracking requests, accounting for just under 25 of the nearly 400 warrants filed in the jurisdictions that year.
America’s Supreme Court “ruled that warrantless cellphone tracking is unconstitutional back in 2012,” the article points out — but in practice those warrants aren’t hard to get. “Officers simply have to attest in an affidavit that they have probable cause that the tracking data is ‘relevant to a crime that is being committed or has been committed’…. There’s been limited public discussion or awareness of the kinds of tracking warrants the judiciary is approving.”

“I don’t think people know that their cell phones can be converted to tracking devices by police with no notice,” said Steve Benjamin, a criminal defense lawyer in Richmond who said he’s recently noticed an uptick in cases in which officers employed the technique. “And the reality of modern life is everyone has their phone on them during the day and on their nightstand at night. … It’s as if the police tagged them with a chip under their skin, and people have no idea how easily this is accomplished.”
The case for these phone-tracking warrants?

  • The executive director of the Virginia Association of Chiefs of Police tells the site that physical surveillance often requires too many resources — and that cellphone tracking is safer. “It may be considered an intrusive way of gathering data on someone, but it’s certainly less dangerous than physical tracking.”
  • A spokesperson for the Chesterfield County police department [responsible for 64% of the state’s tracking] argued that “We exist to preserve human life and protect the vulnerable, and we will use all lawful tools at our disposal to do so.” And they added that such “continued robust enforcement efforts” were a part of the reason that the county’s still-rising number of fatal drug overdoses had not risen more.

The site also obtained bills from four major US cellphone carriers, and reported how much they were charging police for providing their cellphone-tracking services:

  • “T-Mobile charged $30 per day, which comes to $900 per month of tracking.”
  • “AT&T charged a monthly service fee of $100 and an additional $25 per day the service is utilized, which comes to $850 per 30 days of tracking…”
  • “Verizon calls the service ‘periodic location updates,’ charging $5 per day on top of a monthly service fee of $100, which comes to $200 per 30 days of tracking.”
  • “Sprint offered the cheapest prices to report locations back to law enforcement, charging a flat fee of $100 per month.”
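Taken at face value, the quoted rate structures are simple arithmetic: a flat monthly service fee plus a per-day charge. A minimal sketch of that cost model, assuming a full 30 billed days per month (the carrier figures are taken from the quotes above):

```python
# Rough cost model for the quoted carrier rates: a flat monthly service
# fee plus a per-day charge, assuming 30 billed days per month.

def monthly_cost(per_day: float, monthly_fee: float = 0.0, days: int = 30) -> float:
    """Total cost of `days` days of tracking under one carrier's plan."""
    return monthly_fee + per_day * days

tmobile = monthly_cost(per_day=30.0)                     # $30/day, no monthly fee
att     = monthly_cost(per_day=25.0, monthly_fee=100.0)  # $100/month + $25/day
verizon = monthly_cost(per_day=5.0, monthly_fee=100.0)   # $100/month + $5/day
sprint  = monthly_cost(per_day=0.0, monthly_fee=100.0)   # flat $100/month

print(tmobile, att, verizon, sprint)  # 900.0 850.0 250.0 100.0
```

The T-Mobile and AT&T totals match the article’s figures ($900 and $850), while Verizon’s quoted formula yields $250, not $200, over a full 30 days; the article’s $200 figure presumably reflects fewer billed days.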

Why the Past 10 Years of American Life Have Been Uniquely Stupid

Jonathan Haidt, a social psychologist at New York University’s School of Business, argues in The Atlantic that social-media platforms “trained users to spend more time performing and less time connecting.” But that was just the beginning.

He now believes this ultimately fueled a viral dynamic leading to “the continual chipping-away of trust” in a democracy which “depends on widely internalized acceptance of the legitimacy of rules, norms, and institutions.”
The most recent Edelman Trust Barometer (an international measure of citizens’ trust in government, business, media, and nongovernmental organizations) showed stable and competent autocracies (China and the United Arab Emirates) at the top of the list, while contentious democracies such as the United States, the United Kingdom, Spain, and South Korea scored near the bottom (albeit above Russia)…. Mark Zuckerberg may not have wished for any of that. But by rewiring everything in a headlong rush for growth — with a naive conception of human psychology, little understanding of the intricacy of institutions, and no concern for external costs imposed on society — Facebook, Twitter, YouTube, and a few other large platforms unwittingly dissolved the mortar of trust, belief in institutions, and shared stories that had held a large and diverse secular democracy together.

In the last 10 years, the article argues, the general public — at least in America — became “uniquely stupid.” And he’s not speaking only about the political right and left, but about divisions within both factions, “as well as within universities, companies, professional associations, museums, and even families.” The article quotes former CIA analyst Martin Gurri’s 2019 comment that the digital revolution has fragmented the public into hostile shards that are “mostly people yelling at each other and living in bubbles of one sort or another.”

The article concludes that by now U.S. politics has entered a phase where truth “cannot achieve widespread adherence” and thus “nothing really means anything anymore — at least not in a way that is durable and on which people widely agree.” It even contemplates the idea of “highly believable” disinformation generated by AI, possibly by geopolitical adversaries, ultimately evolving into what the research manager at the Stanford Internet Observatory has described as “an Information World War in which state actors, terrorists, and ideological extremists leverage the social infrastructure underpinning everyday life to sow discord and erode shared reality.”

Because of the Internet, ‘We Are All Cranks Now’

…from the characters of Dickens to Grandpa Simpson, recently it seems that the figure of the crank has dropped away from the public imagination. Now, this could be that the art of penning demented letters to metropolitan newspapers on a semi-regular basis may be dying out with the last generation of people to use lead toothpaste, but I don’t think that’s it. I think we’ve lost sight of them not because they went away, but because they became unremarkable. We are all cranks now.

Online has drastically lowered the barriers to entry into the Order of Crankhood. Time was when, if you really wanted to get publicly steamed about something you’d read, you’d first have to buy a newspaper, read that newspaper, get steamed, go to your writing desk, jot down your letter, put that letter in an envelope, find a stamp, and then walk to the post office. And even after doing all that, there was no guarantee that it would be published. Being a crank even 30 years ago took a kind of monastic dedication to the high art of being a weirdo, but nowadays, saying something deeply unwell about an article you don’t like to thousands of people is as trivial as ordering a coffee.

And if the internet in general has lowered these barriers, social media has gone a step further. People who never set out to be cranks in the first place are actively incentivized to do so. This isn’t just because whenever you post you get a thrilling little tally of all the people who agree with you, it’s because of how these platforms are designed to maximize engagement. The ideal poster for social media companies is one who posts often, who posts stridently, and who responds to as much stuff as possible.

So, to be on Twitter or Facebook is to sit in a room while someone holds up random pieces of stimulus and demands your appraisal of each. What do we reckon of this? Okay, how about this? And this? What’s your view here? Were you to design a machine to turn otherwise normal, healthy people into cranks — a kind of crankification engine, if you like — you would probably arrive at something like these platforms.

The White House is briefing TikTok stars about the war in Ukraine

This week, the administration began working with Gen Z For Change, a nonprofit advocacy group, to help identify top content creators on the platform to orchestrate a briefing aimed at answering questions about the conflict and the United States’ role in it.

The briefing was led by Matt Miller, a special adviser for communications at the White House National Security Council, and White House press secretary Jen Psaki. The Washington Post obtained a recording of the call, and in it, Biden officials stressed the power these creators had in communicating with their followers. “We recognize this is a critically important avenue in the way the American public is finding out about the latest,” said the White House director of digital strategy, Rob Flaherty, “so we wanted to make sure you had the latest information from an authoritative source.”

Some Amazon Ring Customers Demand Drivers Dance, Then Post Videos Online

Some Amazon customers are now explicitly asking the company’s drivers to deliver a performance along with the package. They are posting signs to their front doors or tapping unusual delivery instructions into the Amazon app in the hopes of capturing a spectacle on their surveillance feeds…. [T]hese customers proceed to shamelessly post the evidence to social media. Sometimes the videos are spun into an online sleuthing opportunity, as the TikToker asks viewers to hunt for the dancing driver’s identity. And they represent just a slice of the “Amazon driver approaches the door” genre of internet video… But whether the video is pitched as heartwarming or sadistic, the customer is enlisting the driver into a nonconsensual pageant that doubles as a performance review. As Jackson reported, Amazon drivers who fail to fulfill customer requests risk demerits.

Amazon encourages customers to publicize their Ring videos on its safety-minded social network, Neighbors, and makes it easy to share them more widely, too. One of Ring’s marketing lines is “A lot happens at your front door,” and this is meant as both a warning and an invitation — though it suggests it is too dangerous to venture outside, it also implies that a whole world of entertainment is to be found through eyeing your surveillance feed. The official Ring YouTube channel is filled with user-generated videos that help inject its growing spy network with warmth and surprise, as the cameras catch spontaneous footage of good Samaritans, grazing cows and, of course, the company’s drivers caught in kooky scenarios, like in this entry from December: “Even a Giant Bear Will Not Stop This Amazon Driver From Making His Delivery.”

Amazon obsessively surveils its workers through dashcams, smartphone monitors and machine-generated report cards, and these videos implicate the customer in that exercise, making the violation of driver privacy into a kind of internet-wide contest. The caption for Amazon’s bear video focuses on the heroic actions of a Ring user named Josh, who supposedly aided the delivery driver’s safety by “watching his exit the whole time” on the security camera…. Its routes are often serviced by precarious gig workers, its quotas are too punishing to allow for socializing, and all potential human interactions have been replaced by one-way surveillance. In many of these TikTok videos, Amazon workers literally run in and out of the frame. If delivery drivers were once lightly teased or frequently ogled, now they are simply dehumanized, plugged into machine-run networks and expected to move product with robotic efficiency. The compulsory dance trend on TikTok suggests that customers, too, have come to see drivers as programmable….

On an even more depressing corner of Amazon TikTok, customers post videos not to backwardly celebrate drivers but just to shame them for delivering the package with less than the customer’s expected level of service.

Silenced AirTags With Disabled Speakers Are Popping Up for Sale Online

The AirTag, a small, easy-to-carry device about the size of a quarter, relies on Apple’s Find My network, which leverages millions of Apple devices to discreetly keep tabs on the location of the trackers and report that information back to each tag’s registered user. The general idea behind the AirTag was that users could attach one to their keys, their backpack, or other valuable items, and be able to quickly locate them if lost. To prevent their misuse, such as using an AirTag to track someone without their knowledge, iOS users would eventually be notified if a tracker registered to someone else was nearby, while Android users would have to rely on an audible beep that would start chirping three days after an AirTag was separated from its owner.

The product was ripe for abuse — a concern we emphasized in our initial review of the AirTags — and a couple of months after their debut Apple addressed those concerns with promised updates: Android users would get notifications similar to those iOS users receive, via a new Tracker Detect app that allowed them to more easily spot the devices. And the timeframe for when a tracker would start beeping after being separated from its registered owner was shortened to a “random time inside a window that lasts between 8 and 24 hours,” according to a CNET report.

Samsung held an event in the metaverse and it didn’t quite go to plan

The South Korean tech giant hosted the event Wednesday on Decentraland, a cryptocurrency-focused virtual world that users can create, explore and trade in. Decentraland, one of many metaverse efforts, is accessed via a desktop browser. Users create an avatar, which they can then navigate around the blockchain-powered virtual world using a mouse and keyboard — something that isn’t exactly intuitive for non-gamers. The event specifically took place in Samsung 837X, a virtual building Samsung has constructed on Decentraland as a replica of its flagship New York experience center. Samsung 837X is there all the time; there just happened to be an event inside the building’s “Connectivity Theatre” on Wednesday. But CNBC, and many others, struggled to find the 837X building, and when we did, many of us were unable to gain access to it.

When an avatar is first created on Decentraland, it lands in a sort of atrium where clouds appear to be gliding across the floor. There’s a round pool in the middle that has a worrying vortex in the center. Our avatar was soon surrounded by around 20 others. A chat box in the bottom left-hand corner of the screen was full of messages like “help” and “I hate this game.” One user named claireinnit#87fa boldly claimed “we’re in the —-in future.” On the opposite side of the intimidating pool, three large boards read “classics, events and crowd.” An ad for Samsung 837X hung on the “crowd” board. Once clicked (easier said than done), you’re then given the option to “jump in.” After jumping in, you’re transported to Samsung’s little world on Decentraland and you can see the 837X building. There’s a pizza store next door, but not much else.

CNBC immediately noticed a large line of people at the main entrance to the 837X building. People were struggling to get in. Some users were getting their avatars to jump on other people’s heads as they clambered to the front of the queue but it didn’t help. The doors wouldn’t open and the chatbox was again full of pleas for help. A rumor circulated that a YouTuber had managed to find a way in, while a CNET journalist wrote on Twitter that they had managed to gain access by switching to the “ATHENA” server. It wasn’t immediately obvious how to do this. “Many people were unable to actually enter Samsung 837X before the event started,” wrote CNET’s Russell Holly. […] After around 30 minutes of trying to access Samsung’s building in the metaverse, CNBC gave up and went back to the real world.

The Internet Gave Rise to ‘Cancel Culture OCD’

Today, the phrase “cancel culture” triggers a wide range of responses: concern, frustration, a bit of eye-rolling. There are endless debates about what it is (accountability or censorship?), what’s driving it (context collapse, perhaps, or a new “woke” religion), and whether it even exists. Few public figures have been successfully canceled; even fewer have stayed canceled. Yet online life remains suffused with a distinct air of paranoia and an often-pacifying doubt—and perhaps focusing on the “cancel” part of cancel culture distracts from its rippling effects in our daily lives. The old saying goes, “If you’ve done nothing wrong, you have nothing to fear.” But the experiences of people with cancellation OCD reveal another truth: Scrutinize yourself too closely and you can always find something wrong.

10-year-old boy lives life as a YouTube advertising sensation

There’s no one way to describe what Ryan Kaji, who is now 10 years old, has done across his multiple YouTube channels, cable television shows and live appearances: In one video, he is giving you a tour of the Legoland Hotel; in another, he splashes around in his pool to introduce a science video about tsunamis. But for years, what he has mostly done is play with toys: Thomas the Tank Engine, “Paw Patrol” figures, McDonald’s play kitchens. A new toy and a new video for almost every day of the week, adding up to an avalanche of content that can overwhelm your child’s brain, click after click. Kaji has been playing with toys on camera since Barack Obama was in the White House.

Here are a few of the companies that are now paying him handsomely for his services: Amazon, Walmart, Nickelodeon, Skechers. Ryan also has 10 separate YouTube channels, which together make up “Ryan’s World” [31.2M subscribers], a content behemoth whose branded merchandise took in more than $250 million last year. Even conservative estimates suggest that the Kaji family take exceeds $25 million annually.

Text-Message Provider for Google, Twitter, WhatsApp, Microsoft’s LinkedIn, Telegram, TikTok, Tencent and Alibaba Ran a Secret Surveillance Operation

The co-founder of a company that has been trusted by technology giants including Google and Twitter to deliver sensitive passwords to millions of their customers also operated a service that ultimately helped governments secretly surveil and track mobile phones, Bloomberg reported Monday, citing former employees and clients. From the report:
Since it started in 2013, Mitto AG has established itself as a provider of automated text messages for such things as sales promotions, appointment reminders and security codes needed to log in to online accounts, telling customers that text messages are more likely to be read and engaged with than emails as part of their marketing efforts. Mitto, a closely held company with headquarters in Zug, Switzerland, has grown its business by establishing relationships with telecom operators in more than 100 countries. It has brokered deals that gave it the ability to deliver text messages to billions of phones in most corners of the world, including countries that are otherwise difficult for Western companies to penetrate, such as Iran and Afghanistan. Mitto has attracted major technology giants as customers, including Google, Twitter, WhatsApp, Microsoft’s LinkedIn and messaging app Telegram, in addition to China’s TikTok, Tencent and Alibaba, according to Mitto documents and former employees.

But a Bloomberg News investigation, carried out in collaboration with the London-based Bureau of Investigative Journalism, indicates that the company’s co-founder and chief operating officer, Ilja Gorelik, was also providing another service: selling access to Mitto’s networks to secretly locate people via their mobile phones. That Mitto’s networks were also being used for surveillance work wasn’t shared with the company’s technology clients or the mobile operators Mitto works with to spread its text messages and other communications, according to four former Mitto employees. The existence of the alternate service was known only to a small number of people within the company, these people said. Gorelik sold the service to surveillance-technology companies which in turn contracted with government agencies, according to the employees.

Is the Internet Changing the Way We Remember?

“A study in 2019 found that the spatial memory used for navigating through the world tends to be worse for people who’ve made extensive use of map apps and GPS devices…” reports NBC News.

But that’s just the beginning, according to Adrian Ward, who studies psychology at the University of Texas at Austin. NBC says Ward’s research suggests “People who lean on a search engine such as Google may get the right answers but they can also end up with a wrong idea of how strong their own memory is.”

In Ward’s research, published in October in the journal Proceedings of the National Academy of Sciences, he used a series of eight experiments to test how people used and thought about their own knowledge as they completed short general-knowledge quizzes. Some participants had access to Google while answering the questions — “What is the most widely spoken language in the world?” was one — while others did not. They also completed surveys. He found that people who used Google were more confident in their own ability to think and remember, and erroneously predicted that they would know significantly more in future quizzes without the help of the internet. Ward attributed that to Google’s design: simple and easy, less like a library and more like a “neural prosthetic” that simulates a search in a human brain.

“The speed makes it so you never understand what you don’t know,” Ward said.

The findings echo and build on earlier research, including a widely cited 2011 paper on the “Google effect”: a phenomenon in which people are less likely to remember information if they know they can find it later on the internet…. In a review of recent studies in the field, published in September, researchers at Duke University found that the “externalization” of memories into digital spheres “changes what people attend to and remember about their own experiences.” Digital media is new and different, they wrote, because of factors such as how easily images are edited or the huge number of memories at people’s fingertips.

Each photographic cue means another chance for a memory to be “updated,” maybe with a false impression, and each manipulation of a piece of social media content is a chance for distortion, wrote the researchers, doctoral student Emmaline Drew Eliseev and Elizabeth Marsh, a professor of psychology and neuroscience and director of a lab dedicated to studying memory.

Facebook exec blames society for COVID misinformation

Longtime Facebook veteran Andrew Bosworth insists that political and COVID-19 misinformation are societal problems rather than issues that have been magnified by social networks.

Facebook and other social networks have played a significant role in vaccine hesitancy and the spread of political misinformation. “Individual humans are the ones who choose to believe or not believe a thing. They are the ones who choose to share or not share a thing,” Bosworth said in an interview with “Axios on HBO.” “I don’t feel comfortable at all saying they don’t have a voice because I don’t like what they said.” Bosworth has been leading Facebook’s hardware efforts, including those in virtual and augmented reality. Next year he will become CTO for Meta, Facebook’s parent company. Asked whether vaccine hesitancy would be the same with or without social media, Bosworth defended Facebook’s role in combatting COVID, noting that the company ran one of the largest information campaigns in the world to spread authoritative information.

How Facebook and Google Actually Fund the Creation of Misinformation

MIT’s Technology Review shares data from a Facebook-run tool called CrowdTangle. It shows that by 2018 in the nation of Myanmar (population: 53 million), engagement with legitimate news sites had collapsed: “All the engagement had instead gone to fake news and clickbait websites.

“In a country where Facebook is synonymous with the internet, the low-grade content overwhelmed other information sources.”
[T]he sheer volume of fake news and clickbait acted like fuel on the flames of already dangerously high ethnic and religious tensions. It shifted public opinion and escalated the conflict, which ultimately led to the death of 10,000 Rohingya, by conservative estimates, and the displacement of 700,000 more. In 2018, a United Nations investigation determined that the violence against the Rohingya constituted a genocide and that Facebook had played a “determining role” in the atrocities. Months later, Facebook admitted it hadn’t done enough “to help prevent our platform from being used to foment division and incite offline violence.” Over the last few weeks, the revelations from the Facebook Papers, a collection of internal documents provided to Congress and a consortium of news organizations by whistleblower Frances Haugen, have reaffirmed what civil society groups have been saying for years: Facebook’s algorithmic amplification of inflammatory content, combined with its failure to prioritize content moderation outside the US and Europe, has fueled the spread of hate speech and misinformation, dangerously destabilizing countries around the world.

But there’s a crucial piece missing from the story. Facebook isn’t just amplifying misinformation.

The company is also funding it.

An MIT Technology Review investigation, based on expert interviews, data analyses, and documents that were not included in the Facebook Papers, has found that Facebook and Google are paying millions of ad dollars to bankroll clickbait actors, fueling the deterioration of information ecosystems around the world.

Facebook pays them for permission to open their content within Facebook’s app (where Facebook controls the advertising) rather than having users click through to the publisher’s own web site, reports Technology Review:
Early on, Facebook performed little quality control on the types of publishers joining the program. The platform’s design also didn’t sufficiently penalize users for posting identical content across Facebook pages — in fact, it rewarded the behavior. Posting the same article on multiple pages could as much as double the number of users who clicked on it and generated ad revenue. Clickbait farms around the world seized on this flaw as a strategy — one they still use today… Clickbait actors cropped up in Myanmar overnight. With the right recipe for producing engaging and evocative content, they could generate thousands of U.S. dollars a month in ad revenue, or 10 times the average monthly salary — paid to them directly by Facebook. An internal company document, first reported by MIT Technology Review in October, shows that Facebook was aware of the problem as early as 2019… At one point, as many as 60% of the domains enrolled in Instant Articles were using the spammy writing tactics employed by clickbait farms, the report said…

75% of users who were exposed to clickbait content from farms run in Macedonia and Kosovo had never followed any of the pages. Facebook’s content-recommendation system had instead pushed it into their news feeds.

Technology Review notes that Facebook now pays billions of dollars to the publishers in their program. It’s a long and detailed article, which ultimately concludes that the problem “is now happening on a global scale.”
Thousands of clickbait operations have sprung up, primarily in countries where Facebook’s payouts provide a larger and steadier source of income than other forms of available work. Some are teams of people while others are individuals, abetted by cheap automated tools that help them create and distribute articles at mass scale…

Google is also culpable. Its AdSense program fueled the Macedonia- and Kosovo-based farms that targeted American audiences in the lead-up to the 2016 presidential election. And it’s AdSense that is incentivizing new clickbait actors on YouTube to post outrageous content and viral misinformation.

Reached for comment, a Facebook spokesperson told Technology Review that they’d misunderstood the issue. And the spokesperson also said “we’ve invested in building new expert-driven and scalable solutions to these complex issues for many years, and will continue doing so.”

Google’s spokesperson confirmed examples in the article violated their own policies and removed the content, adding “We work hard to protect viewers from clickbait or misleading content across our platforms and have invested heavily in systems that are designed to elevate authoritative information.”

Notifications Are Driving Us Crazy

We’re on alert overload. Stray comments and offhand requests once shouted across the office now blink and buzz at us from Microsoft Teams and Slack. Our communication has grown fragmented, spread across myriad apps we have to learn, conform to, remember to check.

Meanwhile, personal texts and social-media mentions have bled into the workday after all this time at home, adding another layer of distraction to our time on the clock. Why put your phone on silent if the boss isn’t hovering over you? Our culture has evolved to accommodate rapid communication, says Gloria Mark, a professor of informatics at the University of California, Irvine, and it can be mentally taxing. Many of us struggle to conjure up that brilliant thought that hit right before the notification burst in. “Your memory is just overflowing with information,” she says.

It doesn’t make for great circumstances for getting work done, but there are ways individuals, managers and organizations can contend with the onslaught. Dr. Mark’s research finds people switch screens an average of 566 times a day. Half the time we’re interrupted; the other half we pull ourselves away. Breaks — even mindless ones like scrolling Facebook — can be positive, replenishing our cognitive resources, Dr. Mark says. But when something external diverts our focus, it takes us an average of 25 minutes and 26 seconds to get back to our original task, she has found. (Folks often switch to different projects in between.)

And it stresses us out. Research using heart monitors shows that the interval between people’s heartbeats becomes more regular when they’re interrupted, a sign they’re in fight-or-flight mode. The onus is on teams and organizations to create new norms, Dr. Mark says. If individuals just up and turn off their notifications, they’ll likely be penalized for missing information. Instead, managers should create quiet hours when people aren’t expected to respond. “It’s a matter of relearning how to work,” she says.
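The heart-monitor finding refers to heart-rate variability: under stress, the gaps between successive beats become more uniform, so variability drops. A common way to quantify this is RMSSD (root mean square of successive differences). The sketch below is illustrative only — the interval values are made up, not from Dr. Mark’s studies:

```python
import math

def rmssd(rr_ms):
    """RMSSD over a series of inter-beat (RR) intervals in milliseconds.
    Lower RMSSD = more regular beats, a common physiological stress marker."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical interval series: relaxed beats vary; stressed beats are uniform.
relaxed = [800, 830, 790, 845, 805, 835, 795]
stressed = [780, 782, 779, 781, 780, 782, 779]

print(rmssd(relaxed) > rmssd(stressed))  # more regular beats -> lower RMSSD
```

The point of the metric is that regularity, not speed, is the tell: a stressed heart can beat at a normal rate while still showing suppressed variability.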

How China Uses Western Influencers As Pawns In Its Propaganda War

China is recruiting YouTubers to report on the country in a positive light and counter the West’s increasingly negative perceptions, a New York Times report says. “The videos have a casual, homespun feel. But on the other side of the camera often stands a large apparatus of government organizers, state-controlled news media and other official amplifiers — all part of the Chinese government’s widening attempts to spread pro-Beijing messages around the planet,” the report says. “State-run news outlets and local governments have organized and funded pro-Beijing influencers’ travel, according to government documents and the creators themselves. They have paid or offered to pay the creators. They have generated lucrative traffic for the influencers by sharing videos with millions of followers on YouTube, Twitter and Facebook.”

Typically, the Chinese government support comes in the form of free organized trips around China, particularly in Xinjiang. By showing the influencers a carefully sanitized image of life in the country, the authorities don’t need to worry about negative stories. They simply make it easy for the YouTubers to present images of jolly peasants and happy city-dwellers, because that’s all they are allowed to see. One of the authors of the New York Times piece, Paul Mozur, noted on Twitter another important way that the authorities are able to help their influencer guests. Once produced, the China-friendly videos are boosted massively by state media and diplomatic Facebook and Twitter accounts: “One video by Israeli influencer Raz Gal-Or portraying Xinjiang as ‘totally normal’ was shared by 35 government connected accounts with a total of 400 million followers. Many were Chinese embassy Facebook accounts, which posted about the video in numerous languages.”

A new report from the Australian Strategic Policy Institute, “Borrowing mouths to speak on Xinjiang,” offers further statistics on this practice: “Our data collection has found that, between January 2020 and August 2021, 156 Chinese state-controlled accounts on US-based social media platforms have published at least 546 Facebook posts, Twitter posts and shared articles from [China Global Television Network], Global Times, Xinhua or China Daily websites that have amplified Xinjiang-related social media content from 13 influencer accounts. More than 50% of that activity occurred on Facebook.” Mozur says that the use of Western influencers in this way also allows employees of Beijing-controlled media, like the journalist Li Jingjing, to present themselves as independent YouTubers. On Twitter, however, she is labeled as “China state-affiliated media.” The Australian Strategic Policy Institute sees this as part of a larger problem: “labelling schemes adopted by some video-sharing and social media platforms to identify state-affiliated accounts are inconsistently applied to media outlets and journalists working for those outlets. In addition, few platforms appear to have clear policies on content from online influencers or vloggers whose content may be facilitated by state-affiliated media, through sponsored trips, for example.”

According to Mozur, China’s state broadcaster is actively looking for more influencers, offering bonuses and publicity for those who sign up. In the US, China’s consulate general is paying $300,000 to a firm to recruit influencers for the Winter Olympics, ranging from “celebrity influencers” with millions of Instagram or TikTok followers to “nano influencers” with just a few thousand. The ultimate goal of deploying these alternative voices is not to disprove negative stories appearing in Western media, but something arguably worse, as the New York Times report explains: “China is the new super-abuser that has arrived in global social media,” said Eric Liu, a former content moderator for Chinese social media. “The goal is not to win, but to cause chaos and suspicion until there is no real truth.”

During COP26, Facebook served ads with climate falsehoods, skepticism

Days after Facebook’s vice president of global affairs, Nick Clegg, touted the company’s efforts to combat climate misinformation in a blog post as the Glasgow summit began, conservative media network Newsmax ran an ad on Facebook that called man-made global warming a “hoax.” The ad, which had multiple versions, garnered more than 200,000 views.

In another, conservative commentator Candace Owens said, “apparently we’re just supposed to trust our new authoritarian government” on climate science, while a U.S. libertarian think-tank ran an ad on how “modern doomsayers” had been wrongly predicting climate crises for decades. Newsmax, Owens and the Daily Wire, which paid for the ad from Owens’s page, did not respond to requests for comment. Facebook, which recently changed its name to Meta, does not have a specific policy on climate misinformation in ads or unpaid posts. Alphabet’s Google said last month it would no longer allow ads that contradict scientific consensus on climate change on YouTube and its other services, though it would allow content that discusses false claims.

How Facebook and Google fund global misinformation

MIT’s Technology Review shares data from a Facebook-run tool called CrowdTangle. It shows that by 2018 in the nation of Myanmar (population: 53 million), “all the engagement had instead gone to fake news and clickbait websites.

“In a country where Facebook is synonymous with the internet, the low-grade content overwhelmed other information sources.”
[T]he sheer volume of fake news and clickbait acted like fuel on the flames of already dangerously high ethnic and religious tensions. It shifted public opinion and escalated the conflict, which ultimately led to the death of 10,000 Rohingya, by conservative estimates, and the displacement of 700,000 more.

In 2018, a United Nations investigation determined that the violence against the Rohingya constituted a genocide and that Facebook had played a “determining role” in the atrocities. Months later, Facebook admitted it hadn’t done enough “to help prevent our platform from being used to foment division and incite offline violence.”

Over the last few weeks, the revelations from the Facebook Papers, a collection of internal documents provided to Congress and a consortium of news organizations by whistleblower Frances Haugen, have reaffirmed what civil society groups have been saying for years: Facebook’s algorithmic amplification of inflammatory content, combined with its failure to prioritize content moderation outside the US and Europe, has fueled the spread of hate speech and misinformation, dangerously destabilizing countries around the world.

But there’s a crucial piece missing from the story. Facebook isn’t just amplifying misinformation.

The company is also funding it.

An MIT Technology Review investigation, based on expert interviews, data analyses, and documents that were not included in the Facebook Papers, has found that Facebook and Google are paying millions of ad dollars to bankroll clickbait actors, fueling the deterioration of information ecosystems around the world.

Facebook pays them for permission to open their content within Facebook’s app (where Facebook controls the advertising) rather than having users click through to the publisher’s own website, reports Technology Review:
Early on, Facebook performed little quality control on the types of publishers joining the program. The platform’s design also didn’t sufficiently penalize users for posting identical content across Facebook pages — in fact, it rewarded the behavior. Posting the same article on multiple pages could as much as double the number of users who clicked on it and generated ad revenue. Clickbait farms around the world seized on this flaw as a strategy — one they still use today…

Clickbait actors cropped up in Myanmar overnight. With the right recipe for producing engaging and evocative content, they could generate thousands of U.S. dollars a month in ad revenue, or 10 times the average monthly salary — paid to them directly by Facebook. An internal company document, first reported by MIT Technology Review in October, shows that Facebook was aware of the problem as early as 2019…

At one point, as many as 60% of the domains enrolled in Instant Articles were using the spammy writing tactics employed by clickbait farms, the report said…
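The loophole described above is, at its core, a deduplication failure: the platform didn’t fingerprint articles to detect the same content posted across many pages. A minimal, entirely hypothetical sketch of exact-match fingerprinting (Facebook’s actual systems are not public, and production systems would use fuzzier near-duplicate detection such as shingling or MinHash):

```python
import hashlib

def fingerprint(article_text: str) -> str:
    # Normalize whitespace and case so trivially reformatted copies still match.
    normalized = " ".join(article_text.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

seen: dict[str, str] = {}  # fingerprint -> page that first posted it

def check_post(page: str, text: str) -> bool:
    """Return True if the post is original to this page; False if it
    duplicates an article already posted on a different page."""
    fp = fingerprint(text)
    if fp in seen and seen[fp] != page:
        return False
    seen.setdefault(fp, page)
    return True

print(check_post("PageA", "Shocking news about X"))   # True: first posting
print(check_post("PageB", "Shocking  news about x"))  # False: duplicate content
```

Exact hashing like this only catches verbatim copies; clickbait farms that lightly rewrite each repost would evade it, which is why near-duplicate techniques matter in practice.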

Seventy-five percent of users who were exposed to clickbait content from farms run out of Macedonia and Kosovo had never followed any of the pages. Facebook’s content-recommendation system had instead pushed it into their news feeds.

Technology Review notes that Facebook now pays billions of dollars to the publishers in its program. It’s a long and detailed article, which ultimately concludes that the problem “is now happening on a global scale.”
Thousands of clickbait operations have sprung up, primarily in countries where Facebook’s payouts provide a larger and steadier source of income than other forms of available work. Some are teams of people while others are individuals, abetted by cheap automated tools that help them create and distribute articles at mass scale…

Google is also culpable. Its AdSense program fueled the Macedonia- and Kosovo-based farms that targeted American audiences in the lead-up to the 2016 presidential election. And it’s AdSense that is incentivizing new clickbait actors on YouTube to post outrageous content and viral misinformation.

Reached for comment, a Facebook spokesperson told Technology Review that the publication had misunderstood the issue. The spokesperson added: “we’ve invested in building new expert-driven and scalable solutions to these complex issues for many years, and will continue doing so.”

Google’s spokesperson confirmed that the examples cited in the article violated the company’s policies and said the content had been removed, adding: “We work hard to protect viewers from clickbait or misleading content across our platforms and have invested heavily in systems that are designed to elevate authoritative information.”