Resources

Facebook Will Pay Users To Log Off Before 2020 Election

Facebook is offering users money to refrain from using the site and Instagram in the weeks leading up to the bitterly contested November elections. The New York Post reports:
To assess the impact of social media on voting, the company will pay selected members up to $120 to deactivate their accounts beginning at the end of September. “Anyone who chooses to opt-in — whether it’s completing surveys or deactivating FB or IG for a period of time — will be compensated,” Facebook spokesperson Liz Bourgeois tweeted last week. “This is fairly standard for this type of academic research.” The Silicon Valley giant said it expects 200,000 to 400,000 people to take part.

“Representative, scientific samples of people in the US will be selected and invited to participate in the study. Some potential participants will see a notice in Facebook or Instagram inviting them to take part in the study,” Facebook said. “Study samples will be designed to ensure that participants mirror the diversity of the US adult population, as well as users of Facebook and Instagram.” The results of the study are expected to be released sometime next year.

Facebook Accused of Watching Instagram Users Through Cameras

Facebook is again being sued for allegedly spying on Instagram users, this time through the unauthorized use of their mobile phone cameras. Bloomberg reports:
The lawsuit springs from media reports in July that the photo-sharing app appeared to be accessing iPhone cameras even when they weren’t actively being used. Facebook denied the reports and blamed a bug, which it said it was correcting, for triggering what it described as false notifications that Instagram was accessing iPhone cameras.

In the complaint filed Thursday in federal court in San Francisco, New Jersey Instagram user Brittany Conditi contends the app’s use of the camera is intentional and done for the purpose of collecting “lucrative and valuable data on its users that it would not otherwise have access to.” By “obtaining extremely private and intimate personal data on their users, including in the privacy of their own homes,” Instagram and Facebook are able to collect “valuable insights and market research,” according to the complaint.

A Whistleblower Says Facebook Ignored Global Political Manipulation

Facebook ignored or was slow to act on evidence that fake accounts on its platform have been undermining elections and political affairs around the world, according to an explosive memo sent by a recently fired Facebook employee and obtained by BuzzFeed News. From the report:
The 6,600-word memo, written by former Facebook data scientist Sophie Zhang, is filled with concrete examples of heads of government and political parties in Azerbaijan and Honduras using fake accounts or misrepresenting themselves to sway public opinion. In countries including India, Ukraine, Spain, Bolivia, and Ecuador she found evidence of coordinated campaigns of varying sizes to boost or hinder political candidates or outcomes, though she did not always conclude who was behind them. “In the three years I’ve spent at Facebook, I’ve found multiple blatant attempts by foreign national governments to abuse our platform on vast scales to mislead their own citizenry, and caused international news on multiple occasions,” wrote Zhang, who declined to talk to BuzzFeed News. Her Linkedin profile said she “worked as the data scientist for the Facebook Site Integrity fake engagement team” and dealt with “bots influencing elections and the like.”

“I have personally made decisions that affected national presidents without oversight, and taken action to enforce against so many prominent politicians globally that I’ve lost count,” she wrote. The memo is a damning account of Facebook’s failures. It’s the story of Facebook abdicating responsibility for malign activities on its platform that could affect the political fate of nations outside the United States or Western Europe. It’s also the story of a junior employee wielding extraordinary moderation powers that affected millions of people without any real institutional support, and the personal torment that followed.

Hate Speech on Facebook Is Pushing Ethiopia Dangerously Close To a Genocide

Ethnic violence set off by the assassination of a popular singer has been supercharged by hate speech and incitements shared widely on the platform. From a report:
Throughout his life, Ethiopian singer Hachalu Hundessa sang about love, unity, and raising the marginalized voices of his Oromo ethnic group. He had always tried to keep his work and politics separate, saying, “Art should not be subject to political pressure.” But it became increasingly difficult for him to keep these two worlds apart, thanks to a politically-motivated disinformation campaign orchestrated on Facebook through a network of newly created pages and designed to demonize Hundessa. The incendiary campaign claimed Hundessa abandoned his Oromo roots in siding with Prime Minister Ahmed Abiy. Abiy, Ethiopia’s first Oromo leader, has been heavily criticized by hard-line Oromo nationalists who believe he has abandoned his heritage by appeasing other ethnic groups. The impact was devastating.

Hundessa was assassinated on June 29 while driving through the capital Addis Ababa. The man police charged with Hundessa’s killing told prosecutors that he was working as an assassin for the Oromo Liberation Front, an armed nationalist group linked to numerous violent attacks — and who told the shooter that Oromia would benefit from the death of one of its most famous singers. Hundessa’s death at age 34 set off a wave of violence in the capital and his home region of Oromia. Hundreds of people were killed, with minorities like Christian Amharas, Christian Oromos, and Gurage people suffering the biggest losses. This bloodshed was supercharged by the almost-instant and widespread sharing of hate speech and incitement to violence on Facebook, which whipped up people’s anger. Mobs destroyed and burned property. They lynched, beheaded, and dismembered their victims. The calls for violence against a variety of ethnic and religious groups happened despite the government shutting down the internet within hours of Hundessa’s murder. Soon, the same people who’d been calling for genocide and attacks against specific religious or ethnic groups were openly posting photographs of burned-out cars, buildings, schools and houses, the Network Against Hate Speech, a volunteer group tracking hate speech in Ethiopia, told VICE News.

These attacks reflect the volatile nature of ethnic politics in Ethiopia. Abiy’s rise to power in 2018 led to a brief period of hope that Ethiopia could be unified under the first Oromo to lead the country. But that quickly evaporated, and the country has since been wracked by violence, coinciding with a rapid increase in access to the internet, where Facebook dominates. And rather than helping to unify the country, Facebook has simply amplified existing tensions on a massive scale.

US Teens Are Being Paid to Spread Disinformation on Social Media

The Washington Post covered “a sprawling yet secretive campaign that experts say evades the guardrails put in place by social media companies to limit online disinformation of the sort used by Russia” during America’s last presidential campaign in 2016.

According to four people with knowledge of the effort, “Teenagers, some of them minors, are being paid to pump out the messages…”
The campaign draws on the spam-like behavior of bots and trolls, with the same or similar language posted repeatedly across social media. But it is carried out, at least in part, by humans paid to use their own accounts, though nowhere disclosing their relationship with Turning Point Action or the digital firm brought in to oversee the day-to-day activity. One user included a link to Turning Point USA’s website in his Twitter profile until The Washington Post began asking questions about the activity. In response to questions from The Post, Twitter on Tuesday suspended at least 20 accounts involved in the activity for “platform manipulation and spam.” Facebook also removed a number of accounts as part of what the company said is an ongoing investigation…

The months-long effort by the tax-exempt nonprofit is among the most ambitious domestic influence campaigns uncovered this election cycle, said experts tracking the evolution of deceptive online tactics. “In 2016, there were Macedonian teenagers interfering in the election by running a troll farm and writing salacious articles for money,” said Graham Brookie, director of the Atlantic Council’s Digital Forensic Research Lab. “In this election, the troll farm is in Phoenix….”

The messages — some of them false and some simply partisan — were parceled out in precise increments as directed by the effort’s leaders, according to the people with knowledge of the highly coordinated activity, most of whom spoke on the condition of anonymity to protect the privacy of minors carrying out the work… The messages have appeared mainly as replies to news articles about politics and public health posted on social media. They seek to cast doubt on the integrity of the electoral process, asserting that Democrats are using mail balloting to steal the election — “thwarting the will of the American people,” they alleged. The posts also play down the threat from covid-19, which claimed the life of Turning Point’s co-founder Bill Montgomery in July…

By seeking to rebut mainstream news articles, the operation illustrates the extent to which some online political activism is designed to discredit the media. While Facebook and Twitter have pledged to crack down on what they have labeled coordinated inauthentic behavior, in Facebook’s case, and platform manipulation and spam, as Twitter defines its rules, their efforts falter in the face of organizations willing to pay users to post on their own accounts, maintaining the appearance of independence and authenticity.

One parent even said their two teenagers had been posting the messages since June as “independent contractors” — while being paid less than minimum wage.

Facebook Threatens To Cut Off Australians From Sharing News

The threat escalates an antitrust battle between Facebook and the Australian government, which wants the social-media giant and Alphabet’s Google to compensate publishers for the value they provide to their platforms. The legislation still needs to be approved by Australia’s parliament. Under the proposal, an arbitration panel would decide how much the technology companies must pay publishers if the two sides can’t agree. Facebook said in a blog posting Monday that the proposal is unfair and would allow publishers to charge any price they want. If the legislation becomes law, the company says it will take the unprecedented step of preventing Australians from sharing news on Facebook and Instagram.

Cory Doctorow’s New Book Explains ‘How to Destroy Surveillance Capitalism’

If we’re going to break Big Tech’s death grip on our digital lives, we’re going to have to fight monopolies. That may sound pretty mundane and old-fashioned, something out of the New Deal era, while ending the use of automated behavioral modification feels like the plotline of a really cool cyberpunk novel… But trustbusters once strode the nation, brandishing law books, terrorizing robber barons, and shattering the illusion of monopolies’ all-powerful grip on our society. The trustbusting era could not begin until we found the political will — until the people convinced politicians they’d have their backs when they went up against the richest, most powerful men in the world. Could we find that political will again…?

That’s the good news: With a little bit of work and a little bit of coalition building, we have more than enough political will to break up Big Tech and every other concentrated industry besides. First we take Facebook, then we take AT&T/WarnerMedia. But here’s the bad news: Much of what we’re doing to tame Big Tech instead of breaking up the big companies also forecloses on the possibility of breaking them up later… Allowing the platforms to grow to their present size has given them a dominance that is nearly insurmountable — deputizing them with public duties to redress the pathologies created by their size makes it virtually impossible to reduce that size. Lather, rinse, repeat: If the platforms don’t get smaller, they will get larger, and as they get larger, they will create more problems, which will give rise to more public duties for the companies, which will make them bigger still.

We can work to fix the internet by breaking up Big Tech and depriving them of monopoly profits, or we can work to fix Big Tech by making them spend their monopoly profits on governance. But we can’t do both. We have to choose between a vibrant, open internet or a dominated, monopolized internet commanded by Big Tech giants that we struggle with constantly to get them to behave themselves…

Big Tech wired together a planetary, species-wide nervous system that, with the proper reforms and course corrections, is capable of seeing us through the existential challenge of our species and planet. Now it’s up to us to seize the means of computation, putting that electronic nervous system under democratic, accountable control.

With “free, fair, and open tech” we could then tackle our other urgent problems “from climate change to social change” — all with collective action, Doctorow argues. And “The internet is how we will recruit people to fight those fights, and how we will coordinate their labor.”

“Tech is not a substitute for democratic accountability, the rule of law, fairness, or stability — but it’s a means to achieve these things.”

Facebook and Google Serve As Vectors For Misinformation While Hobbling Local Journalism and Collecting Taxpayer Subsidies, Group Says

Facebook and Google are hollowing out local communities by serving as vectors for misinformation while hobbling local journalism and collecting taxpayer subsidies, a new paper from progressive think tank the American Economic Liberties Project charges. Both companies cite benefits their platforms offer small businesses as a key defense against critiques of their size and power. The paper, dated Aug. 30, is sure to presage further scrutiny of the impact they’ve had on local communities.

The brief, by Pat Garofalo, the group’s director of state and local policy, argues that:

- Google doesn’t do enough to protect against fraud, allowing scammers to get their own numbers and websites listed on Google to the detriment of legitimate businesses.
- Facebook, by design, boosts shoddy and sensationalist content, crowding out legitimate local news and information, all as it and Google have come to dominate the local advertising market that was long the lifeblood of community journalism.
- Both have sucked up potentially billions in local taxpayer dollars via tax breaks as well as subsidies and discounts on utilities they’ve gotten in exchange for building data centers.

Garofalo recommends remedies including more antitrust enforcement at the federal and state levels and an end to preferential treatment by states and localities, either voluntarily or under force of law.

Instagram Wasn’t Removing Photos and Direct Messages From Its Servers

A security researcher was awarded a $6,000 bug bounty payout after he found Instagram retained photos and private direct messages on its servers long after he deleted them.

Independent security researcher Saugat Pokharel found that when he downloaded his data from Instagram, a feature it launched in 2018 to comply with new European data rules, his downloaded data contained photos and private messages with other users that he had previously deleted. It’s not uncommon for companies to store freshly deleted data for a time until it can be properly scrubbed from its networks, systems and caches. Instagram said it takes about 90 days for deleted data to be fully removed from its systems. But Pokharel found that his ostensibly deleted data from more than a year ago was still stored on Instagram’s servers, and could be downloaded using the company’s data download tool. Pokharel reported the bug in October 2019 through Instagram’s bug bounty program. The bug was fixed earlier this month, he said.

Study: US Adults Who Mostly Rely On Social Media For News Are Less Informed, Exposed To More Conspiracy Theories

According to a new report from Pew Research, U.S. adults who get their news largely from social media platforms tend to follow the news less closely and end up less informed on several key subjects when compared to those who use other sources, like TV, radio, and news publications.

The firm first asked people how they most commonly get their news. About one-in-five (18%) said they mostly use social media to stay current. That’s close to the percentages of those who say they use local TV (16%) or cable TV (16%) news, but fewer than those who say they go directly to a news website or app (25%). Another 13% said they use network TV and only 3% said they read a newspaper. To be clear, any study that asks users to self-report how they do something isn’t going to be as useful as those that collect hard data on what the consumers actually do. In other words, people who think they’re getting most of their news from TV may be, in reality, undercounting the time they spend on social media, or vice versa.

That said, among this group of “primarily” social media news consumers, only 8% said they were following the key news story of the 2020 U.S. election “very closely,” compared with 37% of cable TV viewers who said the same, or the 33% of print users who also said this. The social media group, on this topic, was closer to the local TV group (11%). On the topic of the Coronavirus outbreak, only around a quarter (23%) of the primarily social media news consumers said they were following news of COVID-19 “very closely.” All other groups again reported a higher percentage, including those who primarily used cable TV (50%), national network TV (50%), news websites and apps (44%), and local TV (32%) for news.

Related to this finding, the survey respondents were also asked 29 different fact-based questions about news topics from recent days, including those on Trump’s impeachment, the COVID-19 outbreak, and others. Those who scored the lowest on these topics were the consumers who said they primarily used social media to get their news. Across 9 questions related to foundational political knowledge, only 17% of primarily social media news consumers scored “high political knowledge,” meaning they got 8 to 9 of the questions right. 27% scored “middle political knowledge” (6-7 right) and 57% scored “low political knowledge” (5 or fewer right). The only group that did worse was those who primarily relied on local TV. Meanwhile, 45% of those who got their news primarily from news websites and apps had “high political knowledge,” compared with 42% for radio, 41% for print, 35% for cable TV, and 29% for network TV. The social media group of news consumers was also more exposed to fringe conspiracy theories, like the idea that the pandemic was intentionally planned.

To Keep Trump From Violating Its Rules…Facebook Rewrote the Rules

After Trump’s infamous “the shooting starts” post, Facebook deputies contacted the White House “with an urgent plea to tweak the language of the post or simply delete it,” the article reveals, after which Trump himself called Mark Zuckerberg. (The article later notes that historically Facebook makes a “newsworthiness exception” for some posts which it refuses to remove, “determined on a case-by-case basis, with the most controversial calls made by Zuckerberg.”) And in the end, Facebook also decided not to delete that post — and says now that even Friday’s newly-announced policy changes still would not have disqualified the post:
The frenzied push-pull was just the latest incident in a five-year struggle by Facebook to accommodate the boundary-busting ways of Trump. The president has not changed his rhetoric since he was a candidate, but the company has continually altered its policies and its products in ways certain to outlast his presidency. Facebook has constrained its efforts against false and misleading news, adopted a policy explicitly allowing politicians to lie, and even altered its news feed algorithm to neutralize claims that it was biased against conservative publishers, according to more than a dozen former and current employees and previously unreported documents obtained by The Washington Post. One of the documents shows it began as far back as 2015…

The concessions to Trump have led to a transformation of the world’s information battlefield. They paved the way for a growing list of digitally savvy politicians to repeatedly push out misinformation and incendiary political language to billions of people. It has complicated the public understanding of major events such as the pandemic and the protest movement, as well as contributed to polarization. And as Trump grew in power, the fear of his wrath pushed Facebook into more deferential behavior toward its growing number of right-leaning users, tilting the balance of news people see on the network, according to the current and former employees…

Facebook is also facing a slow-burning crisis of morale, with more than 5,000 employees denouncing the company’s decision to leave Trump’s post that said, “when the looting starts, the shooting starts,” up… The political speech carveout ended up setting the stage for how the company would handle not only Trump, but populist leaders around the world who have posted content that test these boundaries, such as Rodrigo Duterte in the Philippines, Jair Bolsonaro in Brazil and Narendra Modi in India…

“The value of being in favor with people in power outweighs almost every other concern for Facebook,” said David Thiel, a Facebook security engineer who resigned in March after his colleagues refused to remove a post he believed constituted “dehumanizing speech” by Brazil’s president.

Facebook Knows It Encourages Division

A Facebook team had a blunt message for senior executives. The company’s algorithms weren’t bringing people together. They were driving people apart. “Our algorithms exploit the human brain’s attraction to divisiveness,” read a slide from a 2018 presentation. “If left unchecked,” it warned, Facebook would feed users “more and more divisive content in an effort to gain user attention & increase time on the platform.” That presentation went to the heart of a question dogging Facebook almost since its founding: Does its platform aggravate polarization and tribal behavior? The answer it found, in some cases, was yes.

Facebook had kicked off an internal effort to understand how its platform shaped user behavior and how the company might address potential harms. Chief Executive Mark Zuckerberg had in public and private expressed concern about “sensationalism and polarization.” But in the end, Facebook’s interest was fleeting. Mr. Zuckerberg and other senior executives largely shelved the basic research, according to previously unreported internal documents and people familiar with the effort, and weakened or blocked efforts to apply its conclusions to Facebook products. Facebook policy chief Joel Kaplan, who played a central role in vetting proposed changes, argued at the time that efforts to make conversations on the platform more civil were “paternalistic,” said people familiar with his comments.

Why Don’t We Just Ban Targeted Advertising?

Google and Facebook, including their subsidiaries like Instagram and YouTube, make about 83 percent and 99 percent of their respective revenue from one thing: selling ads. It’s the same story with Twitter and other free sites and apps. More to the point, these companies are in the business of what’s called behavioral advertising, which allows companies to aim their marketing based on everything from users’ sexual orientations to their moods and menstrual cycles, as revealed by everything they do on their devices and every place they take them. It follows that most of the unsavory things the platforms do—boost inflammatory content, track our whereabouts, enable election manipulation, crush the news industry—stem from the goal of boosting ad revenues. Instead of trying to clean up all these messes one by one, the logic goes, why not just remove the underlying financial incentive? Targeting ads based on individual user data didn’t even really exist until the past decade. (Indeed, Google still makes many billions of dollars from ads tied to search terms, which aren’t user-specific.) What if companies simply weren’t allowed to do it anymore?

Let’s pretend it really happened. Imagine Congress passed a law tomorrow morning that banned companies from doing any ad microtargeting whatsoever. Close your eyes and picture what life would be like if the leading business model of the internet were banished from existence. How would things be different?

Many of the changes would be subtle. You could buy a pair of shoes on Amazon without Reebok ads following you for months. Perhaps you’d see some listings that you didn’t see before, for jobs or real estate. That’s especially likely if you’re African-American, or a woman, or a member of another disadvantaged group. You might come to understand that microtargeting had supercharged advertisers’ ability to discriminate, even when they weren’t trying to.

Facebook, Twitter Shutter Pro-Trump Network That Used AI To Create Fake People and Push Conspiracies

On Friday, Facebook and Twitter shut down a network of fake accounts that pushed pro-Trump messages all while “masquerading” as Americans with AI-generated faces as profile photos.

In a blog post, Facebook said that it connected the accounts to a US-based media company called The BL that, it claims, has ties to Epoch Media Group. In August, NBC News first reported that Epoch Media Group was pushing messages in support of President Donald Trump across social media platforms like Facebook and Twitter. Epoch has extensive connections to Falun Gong, an eccentric Chinese spiritual community that has faced significant persecution from the country’s central government. In a statement provided to The Verge, the Epoch Times denied any connection to The BL.

Facebook noted that many of the fake accounts used in the latest campaign employed false profile photos that appeared to have been generated by artificial intelligence. Those accounts would post BL content in other Facebook groups while pretending to be Americans. Pro-Trump messages were often posted “at very high frequencies” and linked to off-platform sites belonging to the BL and The Epoch Times. The accounts and pages were managed by individuals in the US and Vietnam. Facebook said that it removed 610 accounts, 89 Facebook pages, 156 groups, and 72 Instagram accounts that were connected to the organization. Around 55 million accounts followed one of these Facebook pages and 92,000 followed at least one of the Instagram accounts. The organization spent nearly $9.5 million in advertisements, according to Facebook.

Facebook, Google Donate Heavily To Privacy Advocacy Groups

Few companies have more riding on proposed privacy legislation than Alphabet’s Google and Facebook. To try to steer the bill their way, the giant advertising technology companies spend millions of dollars to lobby each year, a fact confirmed by government filings. Not so well-documented is spending to support highly influential think tanks and public interest groups that are helping shape the privacy debate, ostensibly as independent observers. Bloomberg Law examined seven prominent nonprofit think tanks that work on privacy issues that received a total of $1.5 million over an 18-month period ending Dec. 31, 2018. The groups included such organizations as the Center for Democracy and Technology, the Future of Privacy Forum and the Brookings Institution. The actual total is undoubtedly much higher — exact totals for contributions were difficult to pin down. The tech giants have “funded scores of nonprofits, including consumer and privacy groups, and academics,” said Jeffrey Chester, executive director at the Center for Digital Democracy, a public interest group that does not accept donations from Google or Facebook. Further, he says, their influence is strong. The companies have “opposed federal privacy laws and worked to weaken existing safeguards,” Chester said. Accepting donations from these “privacy-killing companies” enables them to influence decisions by nonprofits, “even subtly,” he said.

Facebook Says Government Demands For User Data Are at a Record High

Government demands for user data increased by 16% to 128,617 demands during the first half of this year compared to the second half of last year. That’s the highest number of government demands the company has received in any reporting period since it published its first transparency report in 2013. The U.S. government led the way with the most requests: 50,741 demands for user data, resulting in some account or user data being given to authorities in 88% of cases. Facebook said two-thirds of all of the U.S. government’s requests came with a gag order, preventing the company from telling the user about the request for their data. But Facebook said it was able to release details of 11 so-called national security letters (NSLs) for the first time after their gag provisions were lifted during the period. National security letters can compel companies to turn over non-content data at the request of the FBI. These letters are not approved by a judge, and often come with a gag order preventing their disclosure. But since the Freedom Act passed in 2015, companies have been allowed to request the lifting of those gag orders.

Facebook Bug Has Camera Activated While People Are Using the App

When you’re scrolling through Facebook’s app, the social network could be watching you back, in more ways than just collecting your data, concerned users have found. Multiple people have found and reported that their iPhone cameras were turned on in the background while they were looking at their feed. The issue came to light with several posts on Twitter from users showing that their cameras were activated behind Facebook’s app as they were watching videos or looking at photos on the social network. After clicking on a video to view it full screen, returning it to normal would trigger a bug where Facebook’s mobile layout was slightly shifted to the right. With the open space on the left, you could now see the phone’s camera activated in the background. This was documented in multiple cases, with the earliest incident reported on November 2.

Leaked Documents Show Facebook Leveraged User Data To Help Friends and Fight Rivals

A cache of leaked Facebook documents shows how the company’s CEO, Mark Zuckerberg, oversaw plans to consolidate the social network’s power and control competitors by treating its users’ data as a bargaining chip. This trove comprises approximately 7,000 pages in total, of which about 4,000 are internal Facebook communications such as emails, web chats, notes, presentations and spreadsheets, primarily from 2011 to 2015. About 1,200 pages are marked as “highly confidential.” Taken together, they show how Zuckerberg, along with his board and management team, found ways to tap Facebook users’ data — including information about friends, relationships and photos — as leverage over the companies it partnered with. In some cases, Facebook would reward partners by giving them preferential access to certain types of user data while denying the same access to rival companies.

For example, Facebook gave Amazon special access to user data because it was spending money on Facebook advertising. In another case the messaging app MessageMe was cut off from access to data because it had grown too popular and could compete with Facebook. All the while, Facebook planned to publicly frame these moves as a way to protect user privacy, the documents show. State and federal authorities are now closely scrutinizing Facebook’s business practices. In October, New York Attorney General Letitia James announced that 47 attorneys general from states and U.S. territories plan to take part in a New York-led antitrust probe into Facebook. Over the summer, the House Judiciary Committee held hearings over antitrust concerns in Silicon Valley while the Federal Trade Commission also continues to examine the firm’s practices.

I worked on political ads at Facebook. They profit by manipulating us.

Yael Eisenstat, a visiting fellow at Cornell Tech in the Digital Life Initiative and a former elections integrity head at Facebook, CIA officer, and White House adviser, writes for the Washington Post:

As the company continues to struggle with how to handle political content and as another presidential election approaches, it’s clear that tinkering around the margins of advertising policies won’t fix the most serious issues. The real problem is that Facebook profits partly by amplifying lies and selling dangerous targeting tools that allow political operatives to engage in a new level of information warfare. Its business model exploits our data to let advertisers custom-target people, show us each a different version of the truth and manipulate us with hyper-customized ads — ads that, as of two weeks ago, can contain blatantly false and debunked information if they’re run by a political campaign. As long as Facebook prioritizes profit over healthy discourse, they can’t avoid damaging democracies.

Early in my time there, I dug into the question of misinformation in political advertising. Posting in a “tribe” (Facebook’s internal collaboration platform), I asked our teams working on political advertising whether we should incorporate the same tools for political ads that other integrity teams at Facebook were developing to address misinformation in pages and organic posts. It was unclear to me why the company was applying different, siloed policies and tools across the platform. Most users do not differentiate organic content from ads — as I clearly saw on a trip to India, where we were testing our ads-integrity products — so why were we expecting users to understand that we applied different standards to different forms of content that all appeared the same in their news feeds?

The fact that we were taking money for political ads, and allowing campaigns and other political organizations to target users based on the vast amounts of data we had gathered, meant political ads should have been held to an even higher bar for integrity than organic content. We verified advertisers before they could run political ads, giving them a check mark and a “paid for by” label, and I questioned whether that gave the false impression that we were vouching for the validity of the content, boosting its perceived credibility even though we weren’t checking any facts or ensuring that ads weren’t spreading false information. Most of my colleagues agreed; people wanted to get this right. But above me, there was no appetite for my pushing, and I was accused of “creating confusion.”

Facebook CEO Mark Zuckerberg Says He Fears ‘Erosion of Truth’ But Defends Allowing Politicians To Lie in Ads

Facebook chief executive Mark Zuckerberg said in an interview he worries “about an erosion of truth” online but defended the policy that allows politicians to peddle ads containing misrepresentations and lies on his social network, a stance that has sparked an outcry during the 2020 presidential campaign. From a report:

“People worry, and I worry deeply, too, about an erosion of truth,” Zuckerberg told The Washington Post ahead of a speech Thursday at Georgetown University. “At the same time, I don’t think people want to live in a world where you can only say things that tech companies decide are 100 percent true. And I think that those tensions are something we have to live with.” Zuckerberg’s approach to political speech has come under fire in recent weeks. Democrats have taken particular issue with Facebook’s decision to allow an ad from President Trump’s 2020 campaign that included falsehoods about former vice president Joe Biden and his son, Hunter. Sen. Elizabeth Warren responded to Facebook’s decision by running her own campaign ad, satirically stating that Zuckerberg supports Trump for re-election.

Zuckerberg framed the issue as part of a broader debate over free expression, warning about the dangers of social networks, including Facebook, “potentially cracking down too much.” He called on the U.S. to set an example for tailored regulation in contrast to other countries, including China, that censor political speech online. And Zuckerberg stressed Facebook must stand strong against governments that seek to “pull back” on free speech in the face of heightened social and political tensions. Zuckerberg’s appearance in Washington marks his most forceful attempt to articulate his vision for how governments and tech giants should approach the Web’s most intractable problems. The scale of Facebook and its affiliated apps, Instagram and WhatsApp, which make up a virtual community of billions of users, poses challenges for Zuckerberg and regulators around the world as they struggle to contain hate speech, falsehoods, violent imagery and terrorist propaganda on social media.