Resources

Facebook Will Pay Users To Log Off Before 2020 Election

Facebook is offering users money to refrain from using the site and Instagram in the weeks leading up to the bitterly contested November elections. The New York Post reports:
To assess the impact of social media on voting, the company will pay selected members up to $120 to deactivate their accounts beginning at the end of September. “Anyone who chooses to opt-in — whether it’s completing surveys or deactivating FB or IG for a period of time — will be compensated,” Facebook spokesperson Liz Bourgeois tweeted last week. “This is fairly standard for this type of academic research.” The Silicon Valley giant said it expects 200,000 to 400,000 people to take part.

“Representative, scientific samples of people in the US will be selected and invited to participate in the study. Some potential participants will see a notice in Facebook or Instagram inviting them to take part in the study,” Facebook said. “Study samples will be designed to ensure that participants mirror the diversity of the US adult population, as well as users of Facebook and Instagram.” The results of the study are expected to be released sometime next year.

A Whistleblower Says Facebook Ignored Global Political Manipulation

Facebook ignored or was slow to act on evidence that fake accounts on its platform have been undermining elections and political affairs around the world, according to an explosive memo sent by a recently fired Facebook employee and obtained by BuzzFeed News. From the report:
The 6,600-word memo, written by former Facebook data scientist Sophie Zhang, is filled with concrete examples of heads of government and political parties in Azerbaijan and Honduras using fake accounts or misrepresenting themselves to sway public opinion. In countries including India, Ukraine, Spain, Bolivia, and Ecuador she found evidence of coordinated campaigns of varying sizes to boost or hinder political candidates or outcomes, though she did not always conclude who was behind them. “In the three years I’ve spent at Facebook, I’ve found multiple blatant attempts by foreign national governments to abuse our platform on vast scales to mislead their own citizenry, and caused international news on multiple occasions,” wrote Zhang, who declined to talk to BuzzFeed News. Her Linkedin profile said she “worked as the data scientist for the Facebook Site Integrity fake engagement team” and dealt with “bots influencing elections and the like.”

“I have personally made decisions that affected national presidents without oversight, and taken action to enforce against so many prominent politicians globally that I’ve lost count,” she wrote. The memo is a damning account of Facebook’s failures. It’s the story of Facebook abdicating responsibility for malign activities on its platform that could affect the political fate of nations outside the United States or Western Europe. It’s also the story of a junior employee wielding extraordinary moderation powers that affected millions of people without any real institutional support, and the personal torment that followed.

Long Before Cambridge Analytica, Simulmatics Linked Data and Politics

NPR reporter Shannon Bond reports of a little-known — and now nearly entirely forgotten — company called Simulmatics, which had technology that used vast amounts of data to profile voters and ultimately help John F. Kennedy win the 1960 election. From the report:
The […] company was called Simulmatics, the subject of Harvard historian and New Yorker writer Jill Lepore’s timely new book, If Then: How the Simulmatics Corporation Invented the Future. Before Cambridge Analytica, before Facebook, before the Internet, there was Simulmatics’ “People Machine,” in Lepore’s telling: “A computer program designed to predict and manipulate human behavior, all sorts of human behavior, from buying a dishwasher to countering an insurgency to casting a vote.”

Lepore unearths Simulmatics’ story and makes the argument that, amid a broader proliferation of behavioral science research across academia and government in the 1960s, the company paved the way for our 21st-century obsession with data and prediction. Simulmatics, she argues, is “a missing link in the history of technology,” the antecedent to Facebook, Google and Amazon and to algorithms that attempt to forecast who will commit crimes or get good grades. “It lurks behind the screen of every device,” she writes.

If Then presents Simulmatics as both ahead of its time and, more often than not, overpromising and under-delivering. The company was the brainchild of Ed Greenfield, an advertising executive straight out of Mad Men, who believed computers could help Democrats recapture the White House. He wanted to create a model of the voting population that could tell you how voters would respond to whatever a candidate did or said. The name Simulmatics was a contraction of “simulation” and “automation.” As Greenfield explained it to investors, Lepore writes: “The Company proposes to engage principally in estimating probable human behavior by the use of computer technology.” The People Machine was originally built to analyze huge amounts of data ahead of the 1960 election, in what Lepore describes as, at the time, “the largest political science research project in American history.”

From Climate Change to the Dangers of Smoking: How Powerful Interests Made Us Doubt Everything

BBC News reports:
In 1991, the trade body that represents electrical companies in the U.S., the Edison Electric Institute, created a campaign called the Information Council for the Environment, which aimed to “Reposition global warming as theory (not fact)”. Some details of the campaign were leaked to the New York Times. “They ran advertising campaigns designed to undermine public support, cherry picking the data to say, ‘Well if the world is warming up, why is Kentucky getting colder?’ They asked rhetorical questions designed to create confusion, to create doubt,” argued Naomi Oreskes, professor of the history of science at Harvard University and co-author of Merchants of Doubt. But back in the 1990s there were many campaigns like this…

Most of the organisations opposing or denying climate change science were right-wing think tanks, who tended to be passionately anti-regulation. These groups made convenient allies for the oil industry, as they would argue against action on climate change on ideological grounds. Jerry Taylor spent 23 years with the Cato Institute — one of those right-wing think tanks — latterly as vice president. Before he left in 2014, he would regularly appear on TV and radio, insisting that the science of climate change was uncertain and there was no need to act.

Now, he realises his arguments were based on a misinterpretation of the science, and he regrets the impact he’s had on the debate.

Harvard historian Naomi Oreskes discovered leading climate-change skeptics had also been prominent skeptics on the dangers of cigarette smoking. “That was a Eureka moment,” Oreskes tells BBC News. “We realised this was not a scientific debate.”

Decades before the energy industry tried to undermine the case for climate change, tobacco companies had used the same techniques to challenge the emerging links between smoking and lung cancer in the 1950s… As a later document by tobacco company Brown and Williamson summarised the approach: “Doubt is our product, since it is the best means of competing with the ‘body of fact’ that exists in the minds of the general public.” Naomi Oreskes says this understanding of the power of doubt is vital. “They realise they can’t win this battle by making a false claim that sooner or later would be exposed. But if they can create doubt, that would be sufficient — because if people are confused about the issue, there’s a good chance they’ll just keep smoking….”

Academics like David Michaels, author of The Triumph of Doubt, fear the use of uncertainty in the past to confuse the public and undermine science has contributed to a dangerous erosion of trust in facts and experts across the globe today, far beyond climate science or the dangers of tobacco. He cites public attitudes to modern issues like the safety of 5G, vaccinations — and coronavirus.

“By cynically manipulating and distorting scientific evidence, the manufacturers of doubt have seeded in much of the public a cynicism about science, making it far more difficult to convince people that science provides useful — in some cases, vitally important — information.”

Google Search and Dark Patterns

Previously, the search engine had marked paid results with the word “Ad” in a green box, tucked beneath the headline next to a matching green display URL. Now, all of a sudden, the “Ad” and the URL shifted above the headline, and both were rendered in discreet black; the box disappeared. The organic search results underwent a similar makeover, only with a new favicon next to the URL instead of the word “Ad.” The result was a general smoothing: Ads looked like not-ads. Not-ads looked like ads. This was not Google’s first time fiddling with the search results interface. In fact, it had done so quite regularly over the last 13 years, as handily laid out in a timeline from the news site Search Engine Land. Each iteration whittled away the distinction between paid and unpaid content that much more. Most changes went relatively unnoticed, internet residents accepting the creep like the apocryphal frog in a slowly boiling pot.

But in January, amid rising antitrust drumbeats and general exhaustion with Big Tech, people noticed. Interface designers, marketers, and Google users alike decried the change, saying it made paid results practically indistinguishable from those that Google’s search algorithm served up organically. The phrase that came up most often: “dark pattern,” a blanket term coined by UX specialist Harry Brignull to describe manipulative design elements that benefit companies over their users. That a small design tweak could inspire so much backlash speaks to the profound influence Google and other ubiquitous platforms have — and the responsibility that status confers to them. “Google and Facebook shape realities,” says Kat Zhou, a product designer who has created a framework and toolkit to help promote ethical design. “Students and professors turn to Google for their research. Folks turn to Facebook for political news. Communities turn to Google for Covid-19 updates. In some sense, Google and Facebook have become arbiters of the truth. That’s particularly scary when you factor in their business models, which often incentivize blurring the line between news and advertisements.”

Google’s not the only search engine to blur this line. If anything, Bing is even more opaque, sneaking the “Ad” disclosure under the header, with only a faint outline to draw attention. […] But Google has around 92 percent of global search market share. It effectively is online search. Dark patterns are all too common online in general, and January wasn’t the first time people accused Google of deploying them. In June of 2018, a blistering report from the Norwegian Consumer Council found that Google and Facebook both used specific interface choices to strip away user privacy at almost every turn. The study details how both platforms implemented the least privacy-friendly options by default, consistently “nudged” users toward giving away more of their data, and more. It paints a portrait of a system designed to befuddle users into complacency. […] That confusion reached its apex a few months later, when an Associated Press investigation found that disabling Location History on your smartphone did not, in fact, stop Google from collecting your location in all instances.

Chinese Newspaper Touts Videogame Where Players ‘Hunt Down Traitors’ in Hong Kong

An online game calling on players to hunt down traitors who seek to separate Hong Kong from China and fuel street violence has reportedly begun to attract players across Chinese mainland social media platforms. The game, “fight the traitors together,” is set against the backdrop of the social unrest that has persisted in Hong Kong. The script asks the player to find eight secessionists hidden in the crowd participating in Hong Kong protests.

Players can knock them down with slaps or rotten eggs until they are captured. Online gamers claim the game allows them to vent their anger at the separatist behavior of secessionists during the recent Hong Kong riots. The eight traitors in the game, caricatured based on real people, include Jimmy Lai Chee-ying, Martin Lee Chu-ming and Joshua Wong Chi-fung, prominent opposition figures who have played a major role in inciting unrest in Hong Kong. There are also traitor figures in ancient China…

In the game, amid a crowd of black-clad rioters wearing yellow hats and face masks, Anson Chan Fang On-sang, another leading opposition figure, carries a bag with a U.S. flag, clutches a stack of U.S. dollars and holds a loudspeaker to incite violence in the streets.

Study of Over 11,000 Online Stores Finds ‘Dark Patterns’ on 1,254 Sites

A large-scale academic study that analyzed more than 53,000 product pages on more than 11,000 online stores found widespread use of user interface “dark patterns” — practices meant to mislead customers into making purchases based on false or misleading information.

The study — presented last week at the ACM CSCW 2019 conference — found 1,818 instances of dark patterns present on 1,254 of the ~11K shopping websites (~11.1%) researchers scanned. “Shopping websites that were more popular, according to Alexa rankings, were more likely to feature dark patterns,” researchers said. But while the vast majority of UI dark patterns were meant to trick users into subscribing to newsletters or allowing broad data collection, some dark patterns were downright foul, trying to mislead users into making additional purchases, either by sneaking products into shopping carts or tricking users into believing products were about to sell out. Of these, the research team found 234 instances, deployed across 183 websites.
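Many of the patterns the study describes (fake urgency, fake scarcity) are text-based and can be detected by scanning product pages for characteristic phrases. The sketch below is a toy illustration of that idea only — the phrase lists and function name are invented for this example and are not the researchers’ actual classifier:

```python
import re

# Illustrative phrase patterns for two dark-pattern categories the study
# describes: fake urgency and fake scarcity. These regexes are made up
# for this sketch, not taken from the paper's crawler.
PATTERNS = {
    "urgency": [r"offer ends in \d+", r"hurry", r"limited time"],
    "scarcity": [r"only \d+ left", r"selling fast", r"almost sold out"],
}

def flag_dark_patterns(page_text: str) -> dict:
    """Return the categories whose phrases appear in the page text."""
    text = page_text.lower()
    hits = {}
    for category, regexes in PATTERNS.items():
        matched = [rx for rx in regexes if re.search(rx, text)]
        if matched:
            hits[category] = matched
    return hits

print(flag_dark_patterns("Hurry! Only 2 left in stock."))
```

A real crawler, as the study notes, also has to render pages and simulate checkout flows, since patterns like sneaking items into carts are behavioral rather than textual.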

Leaked Documents Show Facebook Leveraged User Data To Help Friends and Fight Rivals

A cache of leaked Facebook documents shows how the company’s CEO, Mark Zuckerberg, oversaw plans to consolidate the social network’s power and control competitors by treating its users’ data as a bargaining chip. This trove comprises approximately 7,000 pages in total, of which about 4,000 are internal Facebook communications such as emails, web chats, notes, presentations and spreadsheets, primarily from 2011 to 2015. About 1,200 pages are marked as “highly confidential.” Taken together, they show how Zuckerberg, along with his board and management team, found ways to tap Facebook users’ data — including information about friends, relationships and photos — as leverage over the companies it partnered with. In some cases, Facebook would reward partners by giving them preferential access to certain types of user data while denying the same access to rival companies.

For example, Facebook gave Amazon special access to user data because it was spending money on Facebook advertising. In another case the messaging app MessageMe was cut off from access to data because it had grown too popular and could compete with Facebook. All the while, Facebook planned to publicly frame these moves as a way to protect user privacy, the documents show. State and federal authorities are now closely scrutinizing Facebook’s business practices. In October, New York Attorney General Letitia James announced that 47 attorneys general from states and U.S. territories plan to take part in a New York-led antitrust probe into Facebook. Over the summer, the House Judiciary Committee held hearings over antitrust concerns in Silicon Valley while the Federal Trade Commission also continues to examine the firm’s practices.

Mozilla Hits Google, Facebook For ‘Microtargeting’ Political Ads

Microtargeting, a method which uses consumer data and demographics to narrowly segment audiences, is used by political campaigns to specialize ads for different voting groups. The practice’s critics include Federal Election Commission Chairwoman Ellen Weintraub, who wrote in a Washington Post op-ed that microtargeting makes it “easy to single out susceptible groups and direct political misinformation to them with little accountability, because the public at large never sees the ad.” Mozilla’s call follows reports that Facebook has considered restricting politicians’ access to microtargeting.

I worked on political ads at Facebook. They profit by manipulating us.

Yael Eisenstat, a visiting fellow at Cornell Tech in the Digital Life Initiative and a former elections integrity head at Facebook, CIA officer, and White House adviser, writes for the Washington Post:

As the company continues to struggle with how to handle political content and as another presidential election approaches, it’s clear that tinkering around the margins of advertising policies won’t fix the most serious issues. The real problem is that Facebook profits partly by amplifying lies and selling dangerous targeting tools that allow political operatives to engage in a new level of information warfare. Its business model exploits our data to let advertisers custom-target people, show us each a different version of the truth and manipulate us with hyper-customized ads — ads that, as of two weeks ago, can contain blatantly false and debunked information if they’re run by a political campaign. As long as Facebook prioritizes profit over healthy discourse, they can’t avoid damaging democracies.

Early in my time there, I dug into the question of misinformation in political advertising. Posting in a “tribe” (Facebook’s internal collaboration platform), I asked our teams working on political advertising whether we should incorporate the same tools for political ads that other integrity teams at Facebook were developing to address misinformation in pages and organic posts. It was unclear to me why the company was applying different, siloed policies and tools across the platform. Most users do not differentiate organic content from ads — as I clearly saw on a trip to India, where we were testing our ads-integrity products — so why were we expecting users to understand that we applied different standards to different forms of content that all just appear in their news feeds?

The fact that we were taking money for political ads and allowing campaigns and other political organizations to target users based on the vast amounts of data we had gathered meant political ads should have an even higher bar for integrity than what people were posting in organic content. We verified advertisers to run political ads, giving them a check mark and a “paid for by” label, and I questioned if that gave the false impression that we were vouching for the validity of the content, boosting its perceived credibility even though we weren’t checking any facts or ensuring that ads weren’t spreading false information. Most of my colleagues agreed. People wanted to get this right. But above me, there was no appetite for my pushing, and I was accused of “creating confusion.”

Would You Trust Amazon To Run Free and Fair Elections?

While it does not handle voting on election day, AWS — along with a broad network of partners — now runs state and county election websites, stores voter registration rolls and ballot data, facilitates overseas voting by military personnel and helps provide live election-night results, according to company documents and interviews… Amazon pitches itself as a low-cost provider of secure election technology at a time when local officials and political campaigns are under intense pressure to prevent a repeat of the 2016 presidential election, which saw cyber-attacks on voting systems and election infrastructure.

Most security experts Reuters spoke to said that while Amazon’s cloud is likely much harder to hack than systems it is replacing, putting data from many jurisdictions on a single system raises the prospect that a single major breach could prove damaging. “It makes Amazon a bigger target” for hackers, “and also increases the challenge of dealing with an insider attack,” said Chris Vickery, director of cyber risk research at cybersecurity startup Upguard. A recent hack into Capital One Financial Corp’s data stored on Amazon’s cloud service was perpetrated by a former Amazon employee. The breach affected more than 100 million customers, underscoring how rogue employees or untrained workers can create security risks even if the underlying systems are secure…

Vickery uncovered at least three instances where voter data on Amazon’s cloud servers was exposed to the internet, which have been reported previously. For example, in 2017, he found a Republican contractor’s database for nearly every registered American voter hosted on AWS exposed on the internet for 12 days. In 2016, he found Mexico’s entire voter database on AWS servers was leaked. Amazon said the breaches were caused by customer errors, adding that while AWS secures the cloud infrastructure, customers are responsible for security of what goes in the cloud.

Cambridge Analytica Whistleblower: US Heading In ‘Same Direction As China’ With Online Privacy

“The United States is walking in the same direction as China, we’re just allowing private companies to monetize left, right and center,” Cambridge Analytica whistleblower Christopher Wylie told CNBC on Wednesday. “Just because it’s not the state doesn’t mean that there isn’t harmful impacts that could come if you have one or two large companies monitoring or tracking everything you do,” he said. CNBC reports:

Wylie, whose memoir came out this week, has become outspoken about the influence of social media companies due to the large amounts of data they collect. In March 2018, he exposed the Cambridge Analytica scandal that brought down his former employer and resulted in the Federal Trade Commission fining Facebook $5 billion, 15 months later, for mishandling user data. While Cambridge Analytica has since shut down, Wylie said the tactics it used could be deployed elsewhere, and that is why data privacy regulation needs to be dramatically enhanced.

“Even if the company has dissolved, the capabilities of the company haven’t,” he said. “My real concern is what happens if China becomes the next Cambridge Analytica, what happens if North Korea becomes the next Cambridge Analytica?” Wylie also said he believes that social media companies should, at a minimum, face regulation similar to water utilities or electrical companies — “certain industries that have become so important because of their vital importance to business and people’s lives and the nature of their scale.” In those cases, “we put in place rules that put consumers first,” he added. “You can still make a profit. You can still make money. But you have to consider the rights and safety of people.”

Twitter Executive Is Also A British Army ‘Psyops’ Soldier

“The senior Twitter executive with editorial responsibility for the Middle East is also a part-time officer in the British Army’s psychological warfare unit,” reports Middle East Eye:
The 77th Brigade uses social media platforms such as Twitter, Instagram and Facebook, as well as podcasts, data analysis and audience research to wage what the head of the UK military, General Nick Carter, describes as “information warfare”. Carter says the 77th Brigade is giving the British military “the capability to compete in the war of narratives at the tactical level”; to shape perceptions of conflict. Some soldiers who have served with the unit say they have been engaged in operations intended to change the behaviour of target audiences.

What exactly MacMillan is doing with the unit is difficult to determine, however: he has declined to answer any questions about his role, as have Twitter and the UK’s Ministry of Defence (MoD).

Responding to the article, the British army told Newsweek their brigade had no relationship with Twitter, “other than using Twitter as one of many social media platforms for engagement and communication.” And Twitter reminded Newsweek that “We proactively publish all tweets and accounts relating to state-backed foreign information operations on the service — regardless of the source. We built this industry-leading archive to promote better public understanding of these threats.”
Despite the assertions of Twitter and the British military, academics and researchers have said the division between the two is not so clear. David Miller — a professor of political sociology in the School for Policy Studies at England’s University of Bristol, who studies propaganda and public relations efforts concerning the British government — is one such academic. He told Newsweek he believes a link is there, adding that it was a “threat to our democracy.”

“I would say I know a good amount about army propaganda and ‘psyops’ operations, as they’re called, but what is interesting is how little information we have about the 77th Brigade,” he said. “I suppose it means that all their work is covert, but what I would like to know is: what exactly are they doing? Are they just tracking accounts or are they trying to influence people’s views? What we do know is that their account itself is hidden by Twitter, so we assume they are trying to influence people under the operatives’ own names. And because we know so little about exactly what they’re doing, we have to look elsewhere for clues, for example.”

The professor also argues that Twitter is deceiving us by “not acting as transparently as it could. If they are working with army personnel in this way, it is extremely damaging to our democracy. Given Twitter’s closure of accounts alleged to be used by foreign governments, it’s a very hypocritical stance of Twitter to take.”

Researchers Easily Breached Voting Machines For the 2020 Election

The voting machines that the U.S. will use in the 2020 election are still vulnerable to hacks. A group of ethical hackers tested a number of those voting machines and election systems (most of which they bought on eBay) and were able to crack into every machine, The Washington Post reports. Their tests took place this summer at the Def Con cybersecurity conference, but the group visited Washington to share their findings yesterday. A number of flaws allowed the hackers to access the machines, including weak default passwords and shoddy encryption. The group says the machines could be hacked by anyone with access to them, and if poll workers make mistakes or take shortcuts, the machines could be infiltrated by remote hackers.

New Deepfake Algorithm Allows You To Text-Edit the Words of a Speaker In a Video

It is now possible to take a talking-head style video, and add, delete or edit the speaker’s words as simply as you’d edit text in a word processor. A new deepfake algorithm can process the audio and video into a new file in which the speaker says more or less whatever you want them to. New Atlas reports:

It’s the work of a collaborative team from Stanford University, Max Planck Institute for Informatics, Princeton University and Adobe Research, who say that in a perfect world the technology would be used to cut down on expensive re-shoots when an actor gets something wrong, or a script needs to be changed. In order to learn the face movements of a speaker, the algorithm requires about 40 minutes of training video, and a transcript of what’s being said, so it’s not something that can be thrown onto a short video snippet and run if you want good results. That 40 minutes of video gives the algorithm the chance to work out exactly what face shapes the subject is making for each phonetic syllable in the original script.

From there, once you edit the script, the algorithm can then create a 3D model of the face making the new shapes required. And from there, a machine learning technique called Neural Rendering can paint the 3D model over with photo-realistic textures to make it look basically indistinguishable from the real thing. Other software such as VoCo can be used if you wish to generate the speaker’s audio as well as video, and it takes the same approach, by breaking down a heap of training audio into phonemes and then using that dataset to generate new words in a familiar voice.
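The core trick the article describes — map each phoneme of the edited script to a learned face shape, then hand that shape sequence to the renderer — can be caricatured in a few lines. Everything below (the viseme table, the function name) is a made-up illustration of the idea, not the researchers’ code; in the real system the table is a model learned from ~40 minutes of video:

```python
# Toy phoneme-to-viseme lookup. A hand-made table stands in for the
# face-shape model the algorithm learns from training footage.
VISEME_TABLE = {
    "p": "lips_closed", "b": "lips_closed", "m": "lips_closed",
    "f": "teeth_on_lip", "v": "teeth_on_lip",
    "a": "mouth_open", "o": "mouth_rounded",
}

def phonemes_to_visemes(phonemes):
    """Map a phoneme sequence from the edited script to face shapes.
    Unknown phonemes fall back to a neutral shape."""
    return [VISEME_TABLE.get(p, "neutral") for p in phonemes]

# Editing the script just yields a new phoneme sequence, which in turn
# drives a new sequence of face shapes for the neural renderer to paint.
print(phonemes_to_visemes(["m", "a", "p"]))
# → ['lips_closed', 'mouth_open', 'lips_closed']
```

The hard part, which this sketch omits entirely, is the rendering stage: fitting a 3D face model to the new shapes and texturing it photo-realistically so the edit is indistinguishable from the source footage.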

Mark Zuckerberg Leveraged Facebook User Data To Fight Rivals and Help Friends, Leaked Documents Show

Facebook CEO Mark Zuckerberg once considered making deals with third-party developers just to help him find out how much users’ data is worth, NBC News reported on Tuesday. The report, which cites 4,000 leaked pages of internal documents, shines a light on the way senior company executives viewed attaching a dollar sign to sensitive user data, despite Facebook’s public commitment to protect such information. From the report:

In the same week, Zuckerberg floated the idea of pursuing 100 deals with developers “as a path to figuring out the real market value” of Facebook user data and then “setting a public rate” for developers. “The goal here wouldn’t be the deals themselves, but that through the process of negotiating with them we’d learn what developers would actually pay (which might be different from what they’d say if we just asked them about the value), and then we’d be better informed on our path to set a public rate,” Zuckerberg wrote in a chat. Facebook told NBC News that it was exploring ways to build a sustainable business, but ultimately decided not to go forward with these plans.

Zuckerberg was unfazed by the potential privacy risks associated with Facebook’s data-sharing arrangements. “I’m generally skeptical that there is as much data leak strategic risk as you think,” he wrote in the email to Lessin. “I think we leak info to developers but I just can’t think of any instances where that data has leaked from developer to developer and caused a real issue for us.”

The report also outlines how PR people at Facebook try to spin things. An excerpt: In a March 2014 email discussing Zuckerberg’s keynote speech at the event, where he was due to announce the removal of developers’ access to friends’ data, Jonny Thaw, a director of communications, wrote that it “may be a tough message for some developers as it may inhibit their growth.” “So one idea that came up today was potentially talking in the keynote about some of the trust changes we’re making on Facebook itself. So the message would be: ‘trust is really important to us — on Facebook, we’re doing A, B and C to help people control and understand what they’re sharing — and with platform apps we’re doing D, E and F.'” If that doesn’t work, he added, “we could announce some of Facebook’s trust initiatives in the run up to F8” to make the changes for developers “seem more natural.”

“Influencers” Are Being Paid Big Sums To Pitch Products and Thrash Rivals on Instagram and YouTube

“Influencers” are being paid big sums to pitch products on Instagram and YouTube. If you’re trying to grow a product on social media, you either fork over cash or pay in another way. This is the murky world of influencing, reports Wired. Brands will pay influencers to position products on their desks, behind them, or anywhere else they can subtly appear on screen. Payouts increase if an influencer tags a brand in a post or includes a link, but silent endorsements are often preferred.

Marketers of literature, wellness, fashion, entertainment, and other wares are all hooked on influencers. As brands have warmed to social-media advertising, influencer marketing has grown into a multibillion-dollar industry. Unlike traditional television or print ads, influencer posts reach dedicated niche followings who take the influencer’s word as gospel.

There’s another plus: Many users don’t view influencers as paid endorsers or salespeople—even though a significant percentage are—but as trusted experts, friends, and “real” people. This perceived authenticity is part of why brands shell out so much cash in exchange for a brief appearance in your Instagram feed.

Blockchain-based elections would be a disaster for democracy

If you talk to experts on election security, they’ll tell you that we’re nowhere close to being ready for online voting. “Mobile voting is a horrific idea,” said election security expert Joe Hall when I asked him about a West Virginia experiment with blockchain-based mobile voting back in August.

But on Tuesday, The New York Times published an opinion piece claiming the opposite.

“Building a workable, scalable, and inclusive online voting system is now possible, thanks to blockchain technologies,” writes Alex Tapscott, whom the Times describes as co-founder of the Blockchain Research Institute.

Tapscott is wrong—and dangerously so. Online voting would be a huge threat to the integrity of our elections—and to public faith in election outcomes.

Tapscott focuses on the idea that blockchain technology would allow people to vote anonymously while still being able to verify that their vote was included in the final total. Even assuming this is mathematically possible—and I think it probably is—this idea ignores the many, many ways that foreign governments could compromise an online vote without breaking the core cryptographic algorithms.

For example, foreign governments could hack into the computer systems that governments use to generate and distribute cryptographic credentials to voters. They could bribe election officials to supply them with copies of voters’ credentials. They could hack into the PCs or smartphones voters use to cast their votes. They could send voters phishing emails to trick them into revealing their voting credentials—or simply trick them into thinking they’ve cast a vote when they haven’t.
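The verification idea Tapscott is pointing at is real enough on its own terms, and it is worth seeing exactly what it buys. Below is a minimal sketch (my illustration, not any deployed voting system): publish a Merkle root over the recorded ballots, and hand each voter the sibling hashes needed to re-derive that root from their own ballot. This proves a ballot is included in the tally; it proves nothing about whether a compromised phone submitted the ballot the voter intended, which is precisely the gap the attacks above exploit.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Hash the leaves, then pair-and-hash upward; returns every
    level of the tree, leaves first, root last."""
    level = [h(leaf) for leaf in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                 # odd count: duplicate the last node
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def inclusion_proof(levels, index):
    """The sibling hashes a voter needs to re-derive the published root."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sibling = index ^ 1
        proof.append((sibling < index, level[sibling]))  # (sibling on the left?, hash)
        index //= 2
    return proof

def verify(leaf, proof, root):
    """Recompute the root from a single leaf and its proof."""
    node = h(leaf)
    for sibling_is_left, sib in proof:
        node = h(sib + node) if sibling_is_left else h(node + sib)
    return node == root
```

A voter holding only their own ballot and a short proof can check inclusion without seeing anyone else’s vote, which is the property Tapscott’s argument leans on.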

Most Americans say they can’t tell the difference between a social media bot and a human

A new study from Pew Research Center found that most Americans can’t tell social media bots from real humans, and most are convinced bots are bad. “Only 47 percent of Americans are somewhat confident they can identify social media bots from real humans,” reports The Verge. “In contrast, most Americans surveyed in a study about fake news were confident they could identify false stories.”

The Pew study is an uncommon look at what the average person thinks about these automated accounts that plague social media platforms. After surveying over 4,500 adults in the U.S., Pew found that most people actually don’t know much about bots. Two-thirds of Americans have at least heard of social media bots, but only 16 percent say they’ve heard a lot about them, while 34 percent say they’ve never heard of them at all. The knowledgeable tend to be younger, and men are more likely than women (by 22 percentage points) to say they’ve heard of bots. Since the survey results are self-reported, there’s a chance people are overstating or understating their knowledge of bots. Of those who have heard of bots, 80 percent say the accounts are used for bad purposes.

Regardless of whether a person is a Republican or Democrat or young or old, most think that bots are bad. And the more that a person knows about social media bots, the less supportive they are of bots being used for various purposes, like activists drawing attention to topics or a political party using bots to promote candidates.

Facebook Is Giving Advertisers Access To Your Shadow Contact Information

Kashmir Hill, reporting for Gizmodo:

Last week, I ran an ad on Facebook targeted at a computer science professor named Alan Mislove. Mislove studies how privacy works on social networks and had a theory that Facebook is letting advertisers reach users with contact information collected in surprising ways. I was helping him test the theory by targeting him in a way Facebook had previously told me wouldn’t work. I directed the ad to display to a Facebook account connected to the landline number for Alan Mislove’s office, a number Mislove has never provided to Facebook. He saw the ad within hours.

One of the many ways that ads get in front of your eyeballs on Facebook and Instagram is that the social networking giant lets an advertiser upload a list of phone numbers or email addresses it has on file; it will then put an ad in front of accounts associated with that contact information. A clothing retailer can put an ad for a dress in the Instagram feeds of women who have purchased from them before, a politician can place Facebook ads in front of anyone on his mailing list, or a casino can offer deals to the email addresses of people suspected of having a gambling addiction. Facebook calls this a “custom audience.” You might assume that you could go to your Facebook profile and look at your “contact and basic info” page to see what email addresses and phone numbers are associated with your account, and thus what advertisers can use to target you. But as is so often the case with this highly efficient data-miner posing as a way to keep in contact with your friends, it’s going about it in a less transparent and more invasive way.
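The matching mechanism described above can be sketched in a few lines. Facebook has publicly documented that custom-audience uploads are matched via normalized, SHA-256-hashed identifiers; everything else here (the account data, the names) is invented for illustration. The point the researchers make is visible in the code: the match runs against every contact point the platform has linked to an account, not just the ones the user knowingly shared.

```python
import hashlib

def normalize(value: str) -> str:
    """Lowercase and trim, so the same email hashes identically on both sides."""
    return value.strip().lower()

def hash_id(value: str) -> str:
    return hashlib.sha256(normalize(value).encode()).hexdigest()

# Advertiser side: hash the mailing list before upload.
mailing_list = ["Alice@Example.com ", "bob@example.com"]
uploaded = {hash_id(e) for e in mailing_list}

# Platform side (hypothetical data): match against hashes of *every*
# contact point on file for an account, however it was collected.
accounts = {
    "alice": ["alice@example.com", "+1-555-0100"],
    "carol": ["carol@example.com"],
}
audience = [user for user, contacts in accounts.items()
            if any(hash_id(c) in uploaded for c in contacts)]
# "alice" lands in the audience; "carol" does not.
```

Hashing before upload is what lets the platform say the advertiser never sees raw contact data, but it does nothing to limit which contact points on the platform’s side are eligible for matching.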

… Giridhari Venkatadri, Piotr Sapiezynski, and Alan Mislove of Northeastern University, along with Elena Lucherini of Princeton University, did a series of tests that involved handing contact information over to Facebook for a group of test accounts in different ways and then seeing whether that information could be used by an advertiser. They came up with a novel way to detect whether that information became available to advertisers by looking at the stats provided by Facebook about the size of an audience after contact information is uploaded. They go into this at greater length and in more technical detail in their paper [PDF]. They found that when a user gives Facebook a phone number for two-factor authentication or in order to receive alerts about new log-ins to a user’s account, that phone number became targetable by an advertiser within a couple of weeks.
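The researchers’ detection trick is simple enough to re-create in miniature. The toy below is entirely my own construction (no real API names): feed the “platform” a phone number through a security feature, then watch whether an advertiser-facing audience-size estimate for that number grows. A growing estimate is the tell that the security-supplied number became ad-targetable.

```python
class ToyPlatform:
    """A stand-in for the ad platform; all behavior here is hypothetical."""

    def __init__(self):
        self.targetable = set()          # contact points ads can match against

    def enable_two_factor(self, phone):
        # The leak under test: a number given for security
        # quietly joins the ad-matching pool.
        self.targetable.add(phone)

    def audience_size(self, uploaded):
        # The only signal an advertiser sees: an estimated match count.
        return len(self.targetable & set(uploaded))

platform = ToyPlatform()
probe = ["+1-555-0100"]                  # invented number used as the probe
before = platform.audience_size(probe)
platform.enable_two_factor("+1-555-0100")
after = platform.audience_size(probe)
leaked = after > before                  # True: the 2FA number became targetable
```

No access to Facebook’s internals is needed; the before/after comparison on a public estimate is the whole measurement.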

Officially, Facebook denies the existence of shadow profiles. In a hearing with the House Energy & Commerce Committee earlier this year, when New Mexico Representative Ben Lujan asked Facebook CEO Mark Zuckerberg if he was aware of the so-called practice of building “shadow profiles,” Zuckerberg denied knowledge of it.