Resources

Facebook Threatens To Cut Off Australians From Sharing News

The threat escalates an antitrust battle between Facebook and the Australian government, which wants the social-media giant and Alphabet’s Google to compensate publishers for the value their content provides to the platforms. The legislation still needs to be approved by Australia’s parliament. Under the proposal, an arbitration panel would decide how much the technology companies must pay publishers if the two sides can’t agree. Facebook said in a blog post Monday that the proposal is unfair and would allow publishers to charge any price they want. If the legislation becomes law, the company says it will take the unprecedented step of preventing Australians from sharing news on Facebook and Instagram.

YouTube’s Algorithm Made Fake CNN Reports Go Viral

“YouTube channels posing as American news outlets racked up millions of views on false and inflammatory videos over several months this year,” reports CNN.

“All with the help of YouTube’s recommendation engine.”

Many of the accounts, which mostly used footage from CNN, but also employed some video from Fox News, exploited a YouTube feature that automatically creates channels on certain topics. Those topic channels are then automatically populated by videos related to the topic — including, in this case, blatant misinformation.

YouTube has now shut down many of the accounts.

YouTube’s own algorithms also recommended videos from the channels to American users who watched videos about U.S. politics. That the channels could achieve such virality — one channel was viewed more than two million times over one weekend in October — raises questions about YouTube’s preparedness for tackling misinformation on its platform just weeks before the Iowa caucuses and points to the continuing challenge platforms face as people try to game their systems….

Responding to the findings on Thursday, a CNN spokesperson said YouTube needs to take responsibility.

“When accounts were deleted or banned, they were able to spin up new accounts within hours,” added Plasticity, a natural language processing and AI startup that analyzed the data and identified at least 25 different accounts, which YouTube then shut down.

“The tactics they used to game the YouTube algorithm were executed perfectly. They knew what they were doing.”

Google personalizes search results even when you’re logged out

According to a new study conducted by Google competitor DuckDuckGo, it does not seem possible to avoid personalization when using Google search, even by logging out of your Google account and using the private “incognito” browsing mode.

DuckDuckGo conducted the study in June of this year, at the height of the US midterm election season. It did so with the ostensible goal of testing whether Google’s search results exacerbate ideological bubbles by feeding you only information you’ve signaled you want to consume via past behavior and the data collected about you. It’s not clear whether that question can be reliably answered with these findings, and it’s also obvious DuckDuckGo is a biased source with something to gain by pointing out how flawed Google’s approach may be. But the study’s findings are nonetheless interesting because they highlight just how much variance there is in Google search results, even when controlling for factors like location.
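
To make “variance” concrete: one standard way to put a number on how much two users’ result lists differ is set overlap (the Jaccard index). The sketch below is purely illustrative and is not DuckDuckGo’s actual methodology; the URLs are invented.

```python
# Illustrative sketch: quantifying how much two users' search results
# differ for the same query. The result lists are invented, not real data.

def jaccard(a: list[str], b: list[str]) -> float:
    """Overlap between two result sets: 1.0 = identical, 0.0 = disjoint."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)

# Hypothetical top-5 results from two logged-out, "incognito" sessions
# in the same location.
user_a = ["cnn.com/a", "nytimes.com/b", "foxnews.com/c",
          "wikipedia.org/d", "reuters.com/e"]
user_b = ["foxnews.com/c", "breitbart.com/f", "cnn.com/a",
          "wikipedia.org/d", "youtube.com/g"]

print(f"Result overlap: {jaccard(user_a, user_b):.2f}")  # prints 0.43
```

Overlap scores well below 1.0 across many supposedly identical signed-out sessions are the kind of variance the study reports.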

YouTube, the Great Radicalizer

At one point during the 2016 presidential election campaign, I watched a bunch of videos of Donald Trump rallies on YouTube. I was writing an article about his appeal to his voter base and wanted to confirm a few quotations.

Soon I noticed something peculiar. YouTube started to recommend and “autoplay” videos for me that featured white supremacist rants, Holocaust denials and other disturbing content.

Since I was not in the habit of watching extreme right-wing fare on YouTube, I was curious whether this was an exclusively right-wing phenomenon. So I created another YouTube account and started watching videos of Hillary Clinton and Bernie Sanders, letting YouTube’s recommender algorithm take me wherever it would.

Before long, I was being directed to videos of a leftish conspiratorial cast, including arguments about the existence of secret government agencies and allegations that the United States government was behind the attacks of Sept. 11. As with the Trump videos, YouTube was recommending content that was more and more extreme than the mainstream political fare I had started with.

Intrigued, I experimented with nonpolitical topics. The same basic pattern emerged. Videos about vegetarianism led to videos about veganism. Videos about jogging led to videos about running ultramarathons.

It seems as if you are never “hard core” enough for YouTube’s recommendation algorithm. It promotes, recommends and disseminates videos in a manner that appears to constantly up the stakes. Given its billion or so users, YouTube may be one of the most powerful radicalizing instruments of the 21st century.

This is not because a cabal of YouTube engineers is plotting to drive the world off a cliff. A more likely explanation has to do with the nexus of artificial intelligence and Google’s business model. (YouTube is owned by Google.) For all its lofty rhetoric, Google is an advertising broker, selling our attention to companies that will pay for it. The longer people stay on YouTube, the more money Google makes.

What keeps people glued to YouTube? Its algorithm seems to have concluded that people are drawn to content that is more extreme than what they started with — or to incendiary content in general.
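
To make the incentive explicit, here is a minimal hypothetical sketch, not YouTube’s actual system; the videos and predicted watch times are invented. The point is that when predicted watch time is the sole objective, the objective itself cannot distinguish compelling from incendiary.

```python
# Minimal sketch of a watch-time-maximizing recommender (hypothetical,
# not YouTube's real system; all data below is invented).

from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_watch_minutes: float  # a model's estimate for this user

def recommend(candidates: list[Video]) -> Video:
    # Watch time is the only objective, so whatever keeps the viewer
    # watching longest wins, sensational or not.
    return max(candidates, key=lambda v: v.predicted_watch_minutes)

candidates = [
    Video("Mainstream campaign rally coverage", 4.2),
    Video("The SHOCKING truth they are hiding from you", 11.7),
]
print(recommend(candidates).title)  # the more sensational video wins
```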

Is this suspicion correct? Good data is hard to come by; Google is loath to share information with independent researchers. But we now have the first inklings of confirmation, thanks in part to a former Google engineer named Guillaume Chaslot.

Mr. Chaslot worked on the recommender algorithm while at YouTube. He grew alarmed at the tactics used to increase the time people spent on the site. Google fired him in 2013, citing his job performance. He maintains the real reason was that he pushed too hard for changes in how the company handles such issues.

The Wall Street Journal conducted an investigation of YouTube content with the help of Mr. Chaslot. It found that YouTube often “fed far-right or far-left videos to users who watched relatively mainstream news sources,” and that such extremist tendencies were evident with a wide variety of material. If you searched for information on the flu vaccine, you were recommended anti-vaccination conspiracy videos.

It is also possible that YouTube’s recommender algorithm has a bias toward inflammatory content. In the run-up to the 2016 election, Mr. Chaslot created a program to keep track of YouTube’s most recommended videos as well as its patterns of recommendations. He discovered that whether you started with a pro-Clinton or pro-Trump video on YouTube, you were many times more likely to end up with a pro-Trump video recommended.

Combine this finding with other research showing that during the 2016 campaign, fake news, which tends toward the outrageous, included much more pro-Trump than pro-Clinton content, and YouTube’s tendency toward the incendiary seems evident.

YouTube has recently come under fire for recommending videos promoting the conspiracy theory that the outspoken survivors of the school shooting in Parkland, Fla., are “crisis actors” masquerading as victims. Jonathan Albright, a researcher at Columbia, recently “seeded” a YouTube account with a search for “crisis actor” and found that following the “up next” recommendations led to a network of some 9,000 videos promoting that and related conspiracy theories, including the claim that the 2012 school shooting in Newtown, Conn., was a hoax.

What we are witnessing is the computational exploitation of a natural human desire: to look “behind the curtain,” to dig deeper into something that engages us. As we click and click, we are carried along by the exciting sensation of uncovering more secrets and deeper truths. YouTube leads viewers down a rabbit hole of extremism, while Google racks up the ad sales.

Human beings have many natural tendencies that need to be vigilantly monitored in the context of modern life. For example, our craving for fat, salt and sugar, which served us well when food was scarce, can lead us astray in an environment in which fat, salt and sugar are all too plentiful and heavily marketed to us. So too our natural curiosity about the unknown can lead us astray on a website that leads us too much in the direction of lies, hoaxes and misinformation.

In effect, YouTube has created a restaurant that serves us increasingly sugary, fatty foods, loading up our plates as soon as we are finished with the last meal. Over time, our tastes adjust, and we seek even more sugary, fatty foods, which the restaurant dutifully provides. When confronted about this by the health department and concerned citizens, the restaurant managers reply that they are merely serving us what we want.

This situation is especially dangerous given how many people — especially young people — turn to YouTube for information. Google’s cheap and sturdy Chromebook laptops, which now make up more than 50 percent of the pre-college laptop education market in the United States, typically come loaded with ready access to YouTube.

This state of affairs is unacceptable but not inevitable. There is no reason to let a company make so much money while potentially helping to radicalize billions of people, reaping the financial benefits while asking society to bear so many of the costs.

Facebook should be ‘regulated like cigarette industry’, says tech CEO

Facebook should be regulated like a cigarette company, because of the addictive and harmful properties of social media, according to Salesforce chief executive Marc Benioff.

Last week, venture capitalist Roger McNamee – an early investor in Facebook – wrote a Guardian column warning that the company would have to “address the harm the platform has caused through addiction and exploitation by bad actors”.

“I was once Mark Zuckerberg’s mentor, but I have not been able to speak to him about this. Unfortunately, all the internet platforms are deflecting criticism and leaving their users in peril,” McNamee wrote.

Earlier, Sean Parker, Facebook’s first president, had described the business practice of social media firms as “a social-validation feedback loop … exactly the kind of thing that a hacker like myself would come up with, because you’re exploiting a vulnerability in human psychology”. Parker now describes himself as “something of a conscientious objector” to social media.

As part of its attempt to win back control of the narrative, Facebook has announced it will begin taking into account how trusted a publisher is as part of its News Feed algorithm. The company’s metric for determining trust, however, is a simple two-question survey, leading some to question its effectiveness.

How Facebook’s Political Unit Enables the Dark Art of Digital Propaganda

Under fire for Facebook Inc.’s role as a platform for political propaganda, co-founder Mark Zuckerberg has punched back, saying his mission is above partisanship. “We hope to give all people a voice and create a platform for all ideas,” Zuckerberg wrote in September after President Donald Trump accused Facebook of bias. Zuckerberg’s social network is a politically agnostic tool for its more than 2 billion users, he has said. But Facebook, it turns out, is no bystander in global politics. What he hasn’t said is that his company actively works with political parties and leaders, including those who use the platform to stifle opposition — sometimes with the aid of “troll armies” that spread misinformation and extremist ideologies.

The initiative is run by a little-known Facebook global government and politics team that’s neutral in that it works with nearly anyone seeking or securing power. The unit is led from Washington by Katie Harbath, a former Republican digital strategist who worked on former New York Mayor Rudy Giuliani’s 2008 presidential campaign. Since Facebook hired Harbath three years later, her team has traveled the globe helping political clients use the company’s powerful digital tools. In some of the world’s biggest democracies — from India and Brazil to Germany and the U.K. — the unit’s employees have become de facto campaign workers. And once a candidate is elected, the company in some instances goes on to train government employees or provide technical assistance for live streams at official state events.

How Silicon Valley divided society and made everyone raging mad

“Silicon Valley’s utopians genuinely but mistakenly believe that more information and connection makes us more analytical and informed. But when faced with quinzigabytes of data, the human tendency is to simplify things. Information overload forces us to rely on simple algorithms to make sense of the overwhelming noise. This is why, just like the advertising industry that increasingly drives it, the internet is fundamentally an emotional medium that plays to our base instinct to reduce problems and take sides, whether like or don’t like, my guy/not my guy, or simply good versus evil. It is no longer enough to disagree with someone, they must also be evil or stupid…

Nothing holds a tribe together like a dangerous enemy. That is the essence of identity politics gone bad: a universe of unbridgeable opinion between opposing tribes, whose differences are always highlighted, exaggerated, retweeted and shared. In the end, this leads us to ever more distinct and fragmented identities, all of us armed with solid data, righteous anger and a digital network of likeminded people. This is not total connectivity; it is total division.”

More than 80% of US adults now get news on their phones

Mobile devices have rapidly become one of the most common ways for Americans to get news, and the sharpest growth in the past year has been among Americans ages 50 and older, according to a Pew Research Center survey conducted in March.

More than eight-in-ten U.S. adults now get news on a mobile device (85%), compared with 72% just a year ago and slightly more than half in 2013 (54%). And the recent surge has come from older people: Roughly two-thirds of Americans ages 65 and older now get news on a mobile device (67%), a 24-percentage-point increase over the past year and about three times the share of four years ago, when less than a quarter of those 65 and older got news on mobile (22%).

The strong growth carries through to those in the next-highest age bracket. Among 50- to 64-year-olds, 79% now get news on mobile, nearly double the share in 2013. The growth rate was much less steep – or nonexistent – for those younger than 50.

How algorithms (secretly) run the world

“When you browse online for a new pair of shoes, pick a movie to stream on Netflix or apply for a car loan, an algorithm likely has a say in the outcome.

The complex mathematical formulas are playing a growing role in all walks of life: from detecting skin cancers to suggesting new Facebook friends, deciding who gets a job, how police resources are deployed, who gets insurance at what cost, or who is on a “no fly” list.

Algorithms are being used—experimentally—to write news articles from raw data, while Donald Trump’s presidential campaign was helped by behavioral marketers who used an algorithm to locate the highest concentrations of “persuadable voters.”

But while such automated tools can inject a measure of objectivity into erstwhile subjective decisions, fears are rising over the lack of transparency algorithms can entail, with pressure growing to apply standards of ethics or “accountability.”

Data scientist Cathy O’Neil cautions about “blindly trusting” formulas to determine a fair outcome.

“Algorithms are not inherently fair, because the person who builds the model defines success,” she said.

O’Neil argues that while some algorithms may be helpful, others can be nefarious. In her 2016 book, “Weapons of Math Destruction,” she cites some troubling examples in the United States:

  • Public schools in Washington DC in 2010 fired more than 200 teachers—including several well-respected instructors—based on scores in an algorithmic formula which evaluated performance.
  • A man diagnosed with bipolar disorder was rejected for employment at seven major retailers after a third-party “personality” test deemed him a high risk based on its algorithmic classification.
  • Many jurisdictions are using “predictive policing” to shift resources to likely “hot spots.” O’Neil says that depending on how data is fed into the system, this could lead to discovery of more minor crimes and a “feedback loop” which stigmatizes poor communities (a toy simulation of this loop follows the list).
  • Some courts rely on computer-ranked formulas to determine jail sentences and parole, which may discriminate against minorities by taking into account “risk” factors such as their neighborhoods and friend or family links to crime.
  • In the world of finance, brokers “scrape” data from online and other sources in new ways to make decisions on credit or insurance. This too often amplifies prejudice against the disadvantaged, O’Neil argues.
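
To see O’Neil’s feedback loop mechanically, here is a toy simulation with invented numbers, not drawn from her book or any real deployment. Two districts have identical true crime rates, but patrols follow recorded crime, and crime is only recorded where patrols are present.

```python
# Toy simulation of the predictive-policing feedback loop (all numbers
# invented). Equal true crime, but patrols are allocated by *recorded*
# crime, and crime is only recorded where officers are present.

true_crime = {"district_A": 100, "district_B": 100}    # identical rates
patrol_share = {"district_A": 0.6, "district_B": 0.4}  # initial skew

for period in range(5):
    # Recorded crime scales with presence: more patrols, more records.
    recorded = {d: true_crime[d] * patrol_share[d] for d in true_crime}
    total = sum(recorded.values())
    # Next period's patrols are allocated by this period's records.
    patrol_share = {d: recorded[d] / total for d in recorded}
    print(period, {d: round(s, 2) for d, s in patrol_share.items()})

# The 60/40 split never self-corrects: district A's statistics forever
# show 50% more crime despite identical underlying rates. Add "minor
# crimes" discovered only by heavier patrolling and the skew would grow.
```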

Her findings were echoed in a White House report last year warning that algorithmic systems “are not infallible—they rely on the imperfect inputs, logic, probability, and people who design them.”

“Social Media” has destroyed discourse

Hossein Derakhshan, an Iranian-Canadian author, media analyst, and performance artist, writes in MIT Technology Review:

“Like TV, social media now increasingly entertains us, and even more so than television it amplifies our existing beliefs and habits. It makes us feel more than think, and it comforts more than challenges. The result is a deeply fragmented society, driven by emotions, and radicalized by lack of contact and challenge from outside. This is why Oxford Dictionaries designated “post-truth” as the word of 2016: an adjective “relating to circumstances in which objective facts are less influential in shaping public opinion than emotional appeals.”

[…]

Traditional television still entails some degree of surprise. What you see on television news is still picked by human curators, and even though it must be entertaining to qualify as worthy of expensive production, it is still likely to challenge some of our opinions (emotions, that is).

Social media, in contrast, uses algorithms to encourage comfort and complaisance, since its entire business model is built upon maximizing the time users spend inside of it. Who would like to hang around in a place where everyone seems to be negative, mean, and disapproving? The outcome is a proliferation of emotions, a radicalization of those emotions, and a fragmented society. This is way more dangerous for the idea of democracy founded on the notion of informed participation.

This means we should write and read more, link more often, and watch less television and fewer videos — and spend less time on Facebook, Instagram, and YouTube.

Our habits and our emotions are killing us and our planet. Let’s resist their lethal appeal.”

Social media and the anti-fact age

Adam Turner at The Age writes:

“When you look at how social media works, it was inevitable that it would turn into one of the world’s most powerful propaganda tools. It’s often painted as a force for good, letting people bypass the traditional gatekeepers in order to quickly disseminate information, but there’s no guarantee that this information is actually true.

Facebook has usurped the role of the mainstream media in disseminating news, but hasn’t taken on the fourth estate’s corresponding responsibility for keeping the bastards honest. The mainstream media has no-one to blame but itself, having engaged in a tabloid race to the bottom which devalued truth to the point that blatant liars are considered more honest.

The fragmentation of news is already creating a filter bubble, in that most people don’t tend to read the newspaper from front to back, or sit through entire news bulletins; they just pick and choose what interests them. The trouble with Facebook is that it also reinforces bias: the more extreme your political views, the less likely you are to see anything with an opposing viewpoint which might help you develop a more well-rounded view of the world.”

Brooke Binkowski, the managing editor of the fact-checking site Snopes.com, says, “Honestly, most of the fake news is incredibly easy to debunk because it’s such obvious bullshit…”

The problem, Binkowski believes, is that the public has lost faith in the media broadly — therefore no media outlet is considered credible any longer. The reasons are familiar: as the business of news has grown tougher, many outlets have been stripped of the resources they need for journalists to do their jobs correctly. “When you’re on your fifth story of the day and there’s no editor because the editor’s been fired and there’s no fact checker so you have to Google it yourself and you don’t have access to any academic journals or anything like that, you will screw stories up,” she says.

UPDATE 1/12/2016 — Most students can’t spot fake news

“If you thought fake online news was a problem for impressionable adults, it’s even worse for the younger crowd. A Stanford study of 7,804 middle school, high school and college students has found that most of them couldn’t identify fake news on their own. Their susceptibility varied with age, but even a large number of the older students fell prey to bogus reports. Over two thirds of middle school kids didn’t see why they shouldn’t trust a bank executive’s post claiming that young adults need financial help, while nearly 40 percent of high schoolers didn’t question the link between an unsourced photo and the claims attached to it.

Why did many of the students misjudge the authenticity of a story? They were fixated on the appearance of legitimacy, rather than the quality of information. A large photo or a lot of detail was enough to make a Twitter post seem credible, even if the actual content was incomplete or wrong. There are plenty of adults who respond this way, we’d add, but students are more vulnerable than most.

As the Wall Street Journal explains, part of the solution is simply better education: teach students to verify sources, question motivations and otherwise think critically.”

(Emphasis added)

An alarming number of people rely *solely* on a social media network for news

Note the stats from the Pew Research Center for Journalism and Media: 64% of users surveyed rely on just one social media source for news content (i.e. Facebook, Twitter, YouTube, etc.), while 26% check only two sources, and 10% three or more. This is a staggeringly concerning trend, given the rampant personalisation of these screen environments and what we know about the functioning and reinforcement of The Filter Bubble. It is a centralisation of power and a lack of diversity that the “old media” perhaps could only dream of…

From The Huffington Post:

“It’s easy to believe you’re getting diverse perspectives when you see stories on Facebook. You’re connected not just to many of your friends, but also to friends of friends, interesting celebrities and publications you “like.”

But Facebook shows you what it thinks you’ll be interested in. The social network pays attention to what you interact with, what your friends share and comment on, and overall reactions to a piece of content, lumping all of these factors into an algorithm that serves you items you’re likely to engage with. It’s a simple matter of business: Facebook wants you coming back, so it wants to show you things you’ll enjoy.”
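
The passage above describes a weighted blend of engagement signals. As a purely hypothetical sketch (the signal names, weights, and numbers are invented; Facebook’s actual News Feed model is far more complex and not public), engagement-based ranking might look like this:

```python
# Hypothetical engagement scoring for feed ranking. Signals and weights
# are invented for illustration; the real News Feed model is not public.

FEATURE_WEIGHTS = {
    "past_interactions_with_author": 3.0,
    "friend_shares": 2.0,
    "friend_comments": 1.5,
    "overall_reactions": 1.0,
}

def engagement_score(signals: dict[str, float]) -> float:
    """Blend per-story signals into a single ranking score."""
    return sum(FEATURE_WEIGHTS[k] * v for k, v in signals.items())

stories = {
    "local investigative report": {"past_interactions_with_author": 0.1,
                                   "friend_shares": 0.2,
                                   "friend_comments": 0.1,
                                   "overall_reactions": 0.4},
    "outrage-bait post":          {"past_interactions_with_author": 0.6,
                                   "friend_shares": 0.9,
                                   "friend_comments": 0.8,
                                   "overall_reactions": 0.9},
}

# Whatever the model predicts you will engage with floats to the top.
for name in sorted(stories, key=lambda s: engagement_score(stories[s]),
                   reverse=True):
    print(name, round(engagement_score(stories[name]), 2))
```

Nothing in such a score measures accuracy or diversity of viewpoint; it optimizes only for the likelihood that you keep scrolling and come back.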

BBC also reported earlier this year that social media networks outstripped television as the news source for young people (emphasis added):

“Of the 18-to-24-year-olds surveyed, 28% cited social media as their main news source, compared with 24% for TV.

The Reuters Institute for the Study of Journalism research also suggests 51% of people with online access use social media as a news source. Facebook and other social media outlets have moved beyond being “places of news discovery” to become the place people consume their news, it suggests.

The study found Facebook was the most common source—used by 44% of all those surveyed—to watch, share and comment on news. Next came YouTube on 19%, with Twitter on 10%. Apple News accounted for 4% in the US and 3% in the UK, while messaging app Snapchat was used by just 1% or less in most countries.

According to the survey, consumers are happy to have their news selected by algorithms, with 36% saying they would like news chosen based on what they had read before and 22% happy for their news agenda to be based on what their friends had read. But 30% still wanted the human oversight of editors and other journalists in picking the news agenda and many had fears about algorithms creating news “bubbles” where people only see news from like-minded viewpoints.

Most of those surveyed said they used a smartphone to access news, with the highest levels in Sweden (69%), Korea (66%) and Switzerland (61%), and they were more likely to use social media rather than going directly to a news website or app.

The report also suggests users are noticing the original news brand behind social media content less than half of the time, something that is likely to worry traditional media outlets.”

And to exemplify the issue, these words from Slashdot: “Over the past few months, we have seen how Facebook’s Trending Topics feature is often biased, and moreover, how sometimes fake news slips through its filter.”

“The Washington Post monitored the website for over three weeks and found that Facebook is still struggling to get its algorithm right. In the six weeks since Facebook revamped its Trending system, the site has repeatedly promoted “news” stories that are actually works of fiction. As part of a larger audit of Facebook’s Trending topics, the Intersect logged every news story that trended across four accounts during the workdays from Aug. 31 to Sept. 22. During that time, we uncovered five trending stories that were indisputably fake and three that were profoundly inaccurate. On top of that, we found that news releases, blog posts from sites such as Medium and links to online stores such as iTunes regularly trended.”

UPDATE 9/11/16 — US President Barack Obama criticises Facebook for spreading fake stories: “The way campaigns have unfolded, we just start accepting crazy stuff as normal,” Obama said. “As long as it’s on Facebook, and people can see it, as long as it’s on social media, people start believing it, and it creates this dust cloud of nonsense.”

How technology disrupted the truth

Coinciding with a continued rise in public cynicism and a legitimate mistrust of a mainstream media beholden to discredited systems of power, it seems most people now turn to social media networks for their news. But this seemingly doesn’t fix the problem. Rather than a “democratisation” of the media and/or a mass reclamation of investigative journalism (as technology pundits continually purport), there’s arguably been the opposite.

Now, with the convergence of closed social media networks beholden to nefarious algorithms, The Filter Bubble and the personalisation of information, as an article in the Guardian explains, “Social media has swallowed the news – threatening the funding of public-interest reporting and ushering in an era when everyone has their own facts. But the consequences go far beyond journalism.”

“Twenty-five years after the first website went online, it is clear that we are living through a period of dizzying transition. For 500 years after Gutenberg, the dominant form of information was the printed page: knowledge was primarily delivered in a fixed format, one that encouraged readers to believe in stable and settled truths.

Now, we are caught in a series of confusing battles between opposing forces: between truth and falsehood, fact and rumour, kindness and cruelty; between the few and the many, the connected and the alienated; between the open platform of the web as its architects envisioned it and the gated enclosures of Facebook and other social networks; between an informed public and a misguided mob.

What is common to these struggles – and what makes their resolution an urgent matter – is that they all involve the diminishing status of truth. This does not mean that there are no truths. It simply means, as this year has made very clear, that we cannot agree on what those truths are, and when there is no consensus about the truth and no way to achieve it, chaos soon follows.

Increasingly, what counts as a fact is merely a view that someone feels to be true – and technology has made it very easy for these “facts” to circulate with a speed and reach that was unimaginable in the Gutenberg era (or even a decade ago).

Too much of the press often exhibited a bias towards the status quo and a deference to authority, and it was prohibitively difficult for ordinary people to challenge the power of the press. Now, people distrust much of what is presented as fact – particularly if the facts in question are uncomfortable, or out of sync with their own views – and while some of that distrust is misplaced, some of it is not.

In the digital age, it is easier than ever to publish false information, which is quickly shared and taken to be true – as we often see in emergency situations, when news is breaking in real time.”

It’s like the well-oiled tactics of the tobacco industry, which have since permeated pretty much every industry: confuse the hell out of people so they don’t know what’s true anymore. It’s a popular PR tactic, honed over decades, for social control and the manipulation of democracy, and it’s that element that is especially reinforced online (particularly in real time), in the giant echo chamber of corporate social media networks, where the user is constantly subjected to streams and streams of information about current events — most devoid of context, analysis, or even significant depth, given the time and space of a tweet.

The grounding that gives rise to physical reality and epistemological truths goes missing when we’re tied to screens that simply reflect our projections.

In the words of Sherry Turkle, the issues facing our planet right now cannot be solved in the time-space of texting/tweeting. So if the way we understand, perceive and relate to the world through the prism of media (mainstream media and social media alike) is in decline, it should tell us volumes about the state of democracy…

Global Voices adds: “The need for fact-checking hasn’t gone away. As new technologies have spawned new forms of media which lend themselves to the spread of various kinds of disinformation, this need has in fact grown. Much of the information that’s spread online, even by news outlets, is not checked, as outlets simply copy-paste — or in some instances, plagiarise — “click-worthy” content generated by others. Politicians, especially populists prone to manipulative tactics, have embraced this new media environment by making alliances with tabloid tycoons or by becoming media owners themselves.”

UPDATE 29/7 — An example, of sorts: “#SaveMarinaJoyce conspiracy theories about British YouTuber go viral.” News outlets reporting social media rumours, facts from the source igniting disbelief and cynicism, confirmation bias at work, etc.

“Facebook decides which killings we’re allowed to see”

Minutes after a police officer shot Philando Castile in Minnesota, United States, a live video of the aftermath was published on Facebook. The scene was captured in harrowing detail and streamed to Facebook by Castile’s girlfriend Diamond Reynolds, using the live video tool on her smartphone. She narrates the footage with a contrasting mix of eerie calm and anguish. But the video was removed from Facebook due to what the company calls a “technical glitch.” The video has since been restored, but with a “Warning — Graphic Video” disclaimer.

Now an article has come out commenting on how Facebook has become the “de-facto platform” for such “controversial” videos, and that there’s a pattern in these so-called glitches, as they very often happen right after “questionable content” is streamed.

It has long been obvious to anyone paying attention that Facebook operates various nefarious controls over all aspects of how information is displayed and disseminated on their network, not just with advertising and the filter bubble:

“As Facebook continues to build out its Live video platform, the world’s most popular social network has become the de-facto choice for important, breaking, and controversial videos. Several times, Facebook has blocked political or newsworthy content only to later say that the removal was a “technical glitch” or an “error.” Nearly two-thirds of Americans get their news from social media, and two thirds of Facebook users say they use the site to get news. If Facebook is going to become the middleman that delivers the world’s most popular news events to the masses, technical glitches and erroneous content removals could be devastating to information dissemination efforts. More importantly, Facebook has become the self-appointed gatekeeper for what is acceptable content to show the public, which is an incredibly important and powerful position to be in. By censoring anything, Facebook has created the expectation that there are rules for using its platform (most would agree that some rules are necessary). But because the public relies on the website so much, Facebook’s rules and judgments have an outsized impact on public debate.”
