Resources

Meet the four-year-old mini-influencer who films her own vlogs for 42k subscribers

Paris McKenzie is the tiny influencer who films her own live streams and vlogs, and gets her mum to take photos of her in different outfits. She has 42k subscribers and uses her mum’s camera to share her day-to-day life. The confident kid shares a YouTube channel with her mum, Jovey Esin, 30, and loves to vlog whenever she can. She loves life in the spotlight so much that even when she’s not filming for their content, Paris has a toy camera that she pretends to vlog on. Jovey says Paris is always asking her to take photos of her in her outfits – and will get her to take them again if they are not up to her standard. Jovey, a video creator from Brisbane, Australia, said: ‘She loves the camera. She’ll be like, “Mum can you take a photo of me on this background”. ‘And then she’ll want to look at them to check her pose is right after I have taken them. ‘She grew up seeing the camera and now she smiles at it every time it’s out,’ Jovey said. ‘She fell in love with it. She’s always saying, “I want to vlog”.’ Paris now does her own livestreams and films clips of her day-to-day life. ‘She knows how to turn the camera on, and if she’s not using my phone to vlog she’ll be using her toy camera,’ Jovey said.

How China Uses Western Influencers As Pawns In Its Propaganda War

China is recruiting YouTubers to report on the country in a positive light and counter the West’s increasingly negative perceptions. “The videos have a casual, homespun feel. But on the other side of the camera often stands a large apparatus of government organizers, state-controlled news media and other official amplifiers — all part of the Chinese government’s widening attempts to spread pro-Beijing messages around the planet,” the report says. “State-run news outlets and local governments have organized and funded pro-Beijing influencers’ travel, according to government documents and the creators themselves. They have paid or offered to pay the creators. They have generated lucrative traffic for the influencers by sharing videos with millions of followers on YouTube, Twitter and Facebook.”

Typically, the Chinese government’s support comes in the form of free organized trips around China, particularly in Xinjiang. By showing the influencers a carefully sanitized image of life in the country, the authorities don’t need to worry about negative stories: they simply make it easy for the YouTubers to present images of jolly peasants and happy city-dwellers, because that’s all they are allowed to see. Paul Mozur, one of the authors of the New York Times piece, noted on Twitter another important way the authorities are able to help their influencer guests. Once produced, the China-friendly videos are boosted massively by state media and diplomatic Facebook and Twitter accounts: “One video by Israeli influencer Raz Gal-Or portraying Xinjiang as ‘totally normal’ was shared by 35 government connected accounts with a total of 400 million followers. Many were Chinese embassy Facebook accounts, which posted about the video in numerous languages.”

A new report from the Australian Strategic Policy Institute, “Borrowing mouths to speak on Xinjiang,” has some more statistics on this practice: “Our data collection has found that, between January 2020 and August 2021, 156 Chinese state-controlled accounts on US-based social media platforms have published at least 546 Facebook posts, Twitter posts and shared articles from [China Global Television Network], Global Times, Xinhua or China Daily websites that have amplified Xinjiang-related social media content from 13 influencer accounts. More than 50% of that activity occurred on Facebook.” Mozur says that the use of Western influencers in this way also allows employees of Beijing-controlled media, like the journalist Li Jingjing, to present themselves as independent YouTubers. On Twitter, however, she is labeled as “China state-affiliated media.” The Australian Strategic Policy Institute sees this as part of a larger problem (pdf): “labelling schemes adopted by some video-sharing and social media platforms to identify state-affiliated accounts are inconsistently applied to media outlets and journalists working for those outlets. In addition, few platforms appear to have clear policies on content from online influencers or vloggers whose content may be facilitated by state-affiliated media, through sponsored trips, for example.”

According to Mozur, China’s state broadcaster is actively looking for more influencers, offering bonuses and publicity to those who sign up. In the US, China’s consulate general is paying a firm $300,000 to recruit influencers for the Winter Olympics, ranging from celebrity influencers with millions of Instagram or TikTok followers to nano-influencers with just a few thousand. The ultimate goal of deploying these alternative voices is not to disprove negative stories appearing in Western media, but something arguably worse, as the New York Times report explains: “China is the new super-abuser that has arrived in global social media,” said Eric Liu, a former content moderator for Chinese social media. “The goal is not to win, but to cause chaos and suspicion until there is no real truth.”

AI influencers are taking over

YouTube Stars Were Offered Money to Spread Vaccine Misinformation

“A mysterious marketing agency secretly offered to pay social media stars to spread disinformation about Covid-19 vaccines,” reports the BBC.

“Their plan failed when the influencers went public about the attempt to recruit them.”
An influencer marketing agency called Fazze offered to pay [Mirko Drotschmann, a German YouTuber and journalist] to promote what it said was leaked information suggesting that the death rate among people who had the Pfizer vaccine was almost three times that of the AstraZeneca jab. The information provided wasn’t true. It quickly became apparent to Mirko that he was being asked to spread disinformation to undermine public confidence in vaccines in the middle of a pandemic. “I was shocked,” says Mirko, “then I was curious, what’s behind all that?” In France, science YouTuber Léo Grasset received a similar offer. The agency offered him 2,000 euros if he would take part.

Fazze said it was acting for a client who wished to remain anonymous…

Both Léo and Mirko were appalled by the false claims. They pretended to be interested in order to find out more, and were provided with detailed instructions about what they should say in their videos. In stilted English, the brief instructed them to “Act like you have the passion and interest in this topic.” It told them not to mention that the video had a sponsor, and instead to pretend they were spontaneously giving advice out of concern for their viewers… Since Léo and Mirko blew the whistle, at least four other influencers in France and Germany have gone public to reveal that they also rejected Fazze’s attempts to recruit them.

But German journalist Daniel Laufer has identified two influencers who may have taken up the offer.

But who’s behind the mysterious influencer marketing agency?
Fazze is part of AdNow, a digital marketing company registered in both Russia and the UK. The BBC has made multiple attempts to contact AdNow by phone, email and even a letter couriered to its Moscow headquarters, but has received no response. Eventually we managed to contact Ewan Tolladay, one of the two directors of the British arm of AdNow, who lives in Durham. Mr. Tolladay said he had very little to do with Fazze, which he said was a joint venture between his fellow director, a Russian man called Stanislav Fesenko, and another person whose identity he didn’t know… Both the French and German authorities have launched investigations into Fazze’s approaches to influencers. But the identity of the agency’s mystery client remains unclear.

There has been speculation about the Russian connections to this scandal and the interests of the Russian state in promoting its own vaccine — Sputnik V.

French YouTuber Léo Grasset believes we’ll see more attempts to manipulate public opinion, especially that of young people, apparently because it’s incredibly easy.

“Just spend the same money on TikTok creators, YouTube creators,” he tells the BBC. “The whole ecosystem is perfectly built for maximum efficiency of disinformation right now.”

New Survey Reveals Teens Get Their News from Social Media and YouTube

Celebrities, influencers, and personalities are as influential a source of current events as friends, family, and news organizations.

Teens today are not only getting the majority of their news online, but they are turning away from traditional media organizations to find out about current events on social media sites and YouTube, often from online influencers and celebrities, according to a new poll by Common Sense and SurveyMonkey.

The survey found that more than half of teens (54%) get news at least a few times a week from social media platforms such as Instagram, Facebook, and Twitter, and that 50% get news from YouTube.

Teens’ news habits reflect the diversity of the modern media landscape. And, while most news organizations maintain accounts on social media and other platforms, they are competing for attention with corporate brands, celebrities, influencers, and personal connections. Of those teens who get their news from YouTube, for example, six in 10 say they are more likely to get it from celebrities, influencers, and personalities than from news organizations using the platform.

What’s noteworthy is that, even with so many relying on alternative sources for the majority of their news, teens are more confident in the news they get directly from news organizations. Of teens who get news of current events from news organizations, 65% say it helps them better understand what is going on. In contrast, just 53% of teens who get news from social media say it helps them better understand what is going on, while 19% say it has made them more confused about current events.

Amid ongoing concerns about the impact of information disseminated through social media on elections, older teens’ news habits may have political implications. Of the teens age 16 and 17 who say they’ll be eligible to vote in the 2020 election, 85% say they are likely to cast a ballot, including 61% who say they’re “very likely.”

“These findings raise concerns about what kind of news the next generation is using to shape their decisions,” said James Steyer, CEO of Common Sense. “There are few standards for what constitutes news and how accurately it’s portrayed on the platforms teens use. With the 2020 election coming up, we need to make sure teens are getting their news from reliable sources, thinking critically, and making informed decisions.”

This latest survey is part of a Common Sense partnership with SurveyMonkey to examine media and technology trends affecting kids and their parents and to share actionable data and insights with families.

“While it’s notable that teens rely heavily on platforms such as Facebook and YouTube to stay informed, their reliance on news from celebrities and influencers rather than journalists may have pernicious implications,” said Jon Cohen, chief research officer at SurveyMonkey. “It’s a bit of a paradox: Overwhelmingly teens say they are interested in keeping up with the news, but they’re not seeking out either traditional or new media to do so.”

Selected key findings

  1. A large majority of teens age 13 to 17 in the U.S. (78%) say it’s important to them to follow current events.
  2. Teens get their news more frequently from social media sites (e.g., Facebook and Twitter) or from YouTube than directly from news organizations. More than half of teens (54%) get news from social media, and 50% get news from YouTube at least a few times a week. Fewer than half, 41%, get news reported by news organizations in print or online at least a few times a week, and only 37% get news on TV at least a few times a week.
  3. YouTube recommendations drive news consumption. Among all teens who get their news from YouTube—regardless of how often—exactly half (50%) say they most often find news on YouTube because it was recommended by YouTube itself (i.e., as a “watch next” video or in the sidebar). About half as many (27%) say they follow or subscribe to a specific channel for news on YouTube, and fewer say they find their news on YouTube through search (10%) or because it was shared by someone they know in real life (7%).
  4. Sixty percent of teens who get news from YouTube say they are more likely to get it from celebrities, influencers, and personalities than from news organizations (39%). The difference is even more apparent among daily YouTube news consumers (71% vs. 28%).
  5. Nearly two in three teens (65%) who get news directly from news organizations say doing so has helped them better understand current events, compared with 59% of teens who get their news from YouTube and 53% who get their news from social media sites. Nearly two in 10 teens (19%) say that getting news from social media has made them more confused about current events.
  6. Teens clearly prefer a visual medium for learning about the news. A majority (64%) say that “seeing pictures and video showing what happened” gives them the best understanding of major news events, while just 36% say they’d prefer to read or hear the facts about what happened.
  7. Politically, teens are more likely to be moderate and identify as Democrats, but they are open to ideas from sources whose opinions differ from their own. Just under half of teens (45%) say they get news from sources that have views different from their own once a week or more, and only 14% say they never get news from sources with different views. Slightly fewer (35%) say they discuss political issues with people who have different views once a week or more, and 19% say they never discuss politics with people who have opposing views.

The study comes on the heels of the release of Common Sense’s revamped Digital Citizenship Curriculum, which gives teachers lessons to help students develop skills to be critical consumers of news at a time when they are navigating a fast-changing digital terrain fraught with fake media, hate speech, cyberbullying, and constant digital distraction.

Methodology: This SurveyMonkey Audience survey was conducted June 14 to 25, 2019, among 1,005 teenagers age 13 to 17 in the United States. Respondents for these surveys were selected from more than 2 million people who take surveys on the SurveyMonkey platform each day. The modeled error estimate for the full sample is +/-4.0 percentage points. Data has been weighted for age and sex using the Census Bureau’s American Community Survey to reflect the demographic composition of people in the United States age 13 to 17.
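As a rough sanity check on those figures, the sketch below (in Python, using assumed textbook values; the release does not describe SurveyMonkey’s actual “modeled error estimate” methodology) computes the conventional 95% margin of error for a simple random sample of 1,005 respondents, which comes out smaller than the reported +/-4.0 points:

```python
import math

# Rough sanity check, not SurveyMonkey's modeled-error methodology:
# textbook 95% margin of error for a simple random sample.
n = 1005   # respondents, as stated in the methodology
p = 0.5    # worst-case proportion (maximizes the margin of error)
z = 1.96   # z-score for a 95% confidence level (assumed)

moe = z * math.sqrt(p * (1 - p) / n)
print(f"Naive 95% margin of error: +/-{moe * 100:.1f} percentage points")
# Prints roughly +/-3.1 points; the reported +/-4.0 points is a modeled
# estimate, which is typically larger because it also accounts for
# weighting and the non-probability nature of the online panel.
```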

The Rise of the Deepfake and the Threat to Democracy

Deepfakes posted on the internet in the past two years have alarmed many observers, who believe the technology could be used to disgrace politicians and even swing elections. Democracies appear to be gravely threatened by the speed at which disinformation can be created and spread via social media, where the incentive to share the most sensationalist content outweighs the incentive to perform the tiresome work of verification.

Last month, a digitally altered video showing Nancy Pelosi, the speaker of the US House of Representatives, appearing to slur drunkenly through a speech was widely shared on Facebook and YouTube. Trump then posted the clip on Twitter with the caption: “PELOSI STAMMERS THROUGH NEWS CONFERENCE”. The video was quickly debunked, but not before it had been viewed millions of times; the president did not delete his tweet, which at the time of writing has received nearly 98,000 likes. Facebook declined to take down the clip, qualifying its decision with the statement: “Once the video was fact-checked as false, we dramatically reduced its distribution.”

In response, a team including the artists Bill Posters and Daniel Howe posted a deepfake video on Instagram two weeks ago, in which Facebook founder Mark Zuckerberg appears to boast that he has “total control of billions of people’s stolen data, all their secrets, their lives, their futures”.

In May 2018, a Flemish socialist party called sp.a posted a deepfake video to its Twitter and Facebook pages showing Trump appearing to taunt Belgium for remaining in the Paris climate agreement. The video, which remains on the party’s social media, is a poor forgery: Trump’s hair is curiously soft-focus, while his mouth moves with a Muppet-like elasticity. Indeed, the video concludes with Trump saying: “We all know that climate change is fake, just like this video,” although this sentence alone is not subtitled in Flemish Dutch. (The party declined to comment, but a spokesperson previously told the site Politico that it commissioned the video to “draw attention to the necessity to act on climate change”.)

But James [founder of the YouTube channel derpfakes, which publishes deepfake videos] believes forgeries may have gone undetected. “The idea that deepfakes have already been used politically isn’t so farfetched,” he says. “It could be the case that deepfakes have already been widely used for propaganda.”

Children ‘at risk of robot influence’

Forget peer pressure, future generations are more likely to be influenced by robots, a study suggests.

The research, conducted at the University of Plymouth, found that while adults were not swayed by robots, children were.

The fact that children tended to trust robots without question raised ethical issues as the machines became more pervasive, said researchers.

They called for the robotics community to build in safeguards for children.

Those taking part in the study completed a simple test, known as the Asch paradigm, which involved finding two lines that matched in length.

Known as the conformity experiment, the test has historically found that people tend to agree with their peers even if, individually, they would have given a different answer.

In this case, the peers were robots. When children aged seven to nine were alone in the room, they scored an average of 87% on the test. But when the robots joined them, their scores dropped to 75% on average. Of the wrong answers, 74% matched those of the robots.

“If robots can convince children (but not adults) that false information is true, the implication for the planned commercial exploitation of robots for childminding and teaching is problematic.”

As Google Maps Renames Neighbourhoods, Residents Fume

For decades, the district south of downtown San Francisco and alongside the bay was known as Rincon Hill, South Beach or South of Market. This spring, it was suddenly rebranded on Google Maps with a name few had heard: the East Cut. The peculiar moniker immediately spread digitally, from hotel sites to dating apps to Uber, which all use Google’s map data. The name soon spilled over into the physical world, too. Real-estate listings beckoned prospective tenants to the East Cut. And news organizations referred to the vicinity by that term.

“It’s degrading to the reputation of our area,” said Tad Bogdan, who has lived in the neighborhood for 14 years. In a survey of 271 neighbors that he organized recently, he said, 90 percent disliked the name. The swift rebranding of the roughly 170-year-old district is just one example of how Google Maps has now become the primary arbiter of place names. With decisions made by a few Google cartographers, the identity of a city, town or neighborhood can be reshaped, illustrating the outsize influence that Silicon Valley increasingly has in the real world.
