I worked on political ads at Facebook. They profit by manipulating us.

Yael Eisenstat, a visiting fellow in the Digital Life Initiative at Cornell Tech and a former elections integrity head at Facebook, CIA officer, and White House adviser, writes for the Washington Post:

As the company continues to struggle with how to handle political content and as another presidential election approaches, it’s clear that tinkering around the margins of advertising policies won’t fix the most serious issues. The real problem is that Facebook profits partly by amplifying lies and selling dangerous targeting tools that allow political operatives to engage in a new level of information warfare. Its business model exploits our data to let advertisers custom-target people, show us each a different version of the truth and manipulate us with hyper-customized ads — ads that, as of two weeks ago, can contain blatantly false and debunked information if they’re run by a political campaign. As long as Facebook prioritizes profit over healthy discourse, it can’t avoid damaging democracies.

Early in my time there, I dug into the question of misinformation in political advertising. Posting in a “tribe” (Facebook’s internal collaboration platform), I asked our teams working on political advertising whether we should incorporate the same tools for political ads that other integrity teams at Facebook were developing to address misinformation in pages and organic posts. It was unclear to me why the company was applying different, siloed policies and tools across the platform. Most users do not differentiate organic content from ads — as I clearly saw on a trip to India, where we were testing our ads-integrity products — so why were we expecting users to understand that we applied different standards to different forms of content that all just appear in their news feeds?

The fact that we were taking money for political ads and allowing campaigns and other political organizations to target users based on the vast amounts of data we had gathered meant political ads should have an even higher bar for integrity than what people were posting in organic content. We verified advertisers to run political ads, giving them a check mark and a “paid for by” label, and I questioned whether that gave the false impression that we were vouching for the validity of the content, boosting its perceived credibility even though we weren’t checking any facts or ensuring that ads weren’t spreading false information. Most of my colleagues agreed. People wanted to get this right. But above me, there was no appetite for my pushing, and I was accused of “creating confusion.”
