Resources

“Influencers” Are Being Paid Big Sums To Pitch Products and Trash Rivals on Instagram and YouTube

“Influencers” are being paid big sums to pitch products on Instagram and YouTube. If you’re trying to grow a product on social media, you either fork over cash or pay in another way. This is the murky world of influencing, reports Wired. Brands will pay influencers to position products on their desks, behind them, or anywhere else they can subtly appear on screen. Payouts increase if an influencer tags a brand in a post or includes a link, but silent endorsements are often preferred.

Marketers of literature, wellness, fashion, entertainment, and other wares are all hooked on influencers. As brands have warmed to social-media advertising, influencer marketing has grown into a multibillion-dollar industry. Unlike traditional television or print ads, influencer endorsements reach dedicated niche followings who take the influencer’s word as gospel.

There’s another plus: Many users don’t view influencers as paid endorsers or salespeople—even though a significant percentage are—but as trusted experts, friends, and “real” people. This perceived authenticity is part of why brands shell out so much cash in exchange for a brief appearance in your Instagram feed.


Blockchain-based elections would be a disaster for democracy

If you talk to experts on election security, they’ll tell you that we’re nowhere close to being ready for online voting. “Mobile voting is a horrific idea,” said election security expert Joe Hall when I asked him about a West Virginia experiment with blockchain-based mobile voting back in August.

But on Tuesday, The New York Times published an opinion piece claiming the opposite.

“Building a workable, scalable, and inclusive online voting system is now possible, thanks to blockchain technologies,” writes Alex Tapscott, whom the Times describes as co-founder of the Blockchain Research Institute.

Tapscott is wrong—and dangerously so. Online voting would be a huge threat to the integrity of our elections—and to public faith in election outcomes.

Tapscott focuses on the idea that blockchain technology would allow people to vote anonymously while still being able to verify that their vote was included in the final total. Even assuming this is mathematically possible—and I think it probably is—this idea ignores the many, many ways that foreign governments could compromise an online vote without breaking the core cryptographic algorithms.

For example, foreign governments could hack into the computer systems that governments use to generate and distribute cryptographic credentials to voters. They could bribe election officials to supply them with copies of voters’ credentials. They could hack into the PCs or smartphones voters use to cast their votes. They could send voters phishing emails to trick them into revealing their voting credentials—or simply trick them into thinking they’ve cast a vote when they haven’t.
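The verification property Tapscott leans on is usually framed as a cryptographic inclusion proof: the authority publishes a commitment (for instance a Merkle root) over all recorded ballots, and each voter checks that a hash of their ballot is included under that root. The sketch below is a generic illustration of that idea, not any real voting system's design, and it shows why the guarantee is so narrow: it proves only that a recorded ballot made it into the tally, not that the ballot cast on a compromised phone reflects what the voter intended.

```python
# Generic sketch of a Merkle inclusion proof (illustration only, not a real
# voting protocol): the authority publishes the root; a voter who holds their
# ballot and a proof can check that the ballot is included under that root.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root_and_proof(leaves: list, index: int):
    """Build a Merkle tree over the leaves; return (root, proof) for one leaf.
    The proof is a list of (sibling_hash, sibling_is_right) pairs."""
    level = [h(leaf) for leaf in leaves]
    proof, idx = [], index
    while len(level) > 1:
        if len(level) % 2 == 1:                 # duplicate the last node on odd levels
            level.append(level[-1])
        sibling = idx ^ 1                        # the node paired with ours
        proof.append((level[sibling], sibling > idx))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return level[0], proof

def verify_inclusion(ballot: bytes, proof, root: bytes) -> bool:
    node = h(ballot)
    for sibling, sibling_is_right in proof:
        node = h(node + sibling) if sibling_is_right else h(sibling + node)
    return node == root

ballots = [b"ballot:0001:choiceA", b"ballot:0002:choiceB", b"ballot:0003:choiceA"]
root, proof = merkle_root_and_proof(ballots, 1)                # authority's side
print(verify_inclusion(b"ballot:0002:choiceB", proof, root))   # voter's check: True
```

Note that the check says nothing about whether "choiceB" is what the voter actually selected on a hacked device, or whether the credential used to cast it was stolen, which is the substance of the objections above.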


Most Americans say they can’t tell the difference between a social media bot and a human

A new study from Pew Research Center found that most Americans can’t tell social media bots from real humans, and most are convinced bots are bad. “Only 47 percent of Americans are somewhat confident they can identify social media bots from real humans,” reports The Verge. “In contrast, most Americans surveyed in a study about fake news were confident they could identify false stories.”

The Pew study is an uncommon look at what the average person thinks about these automated accounts that plague social media platforms. After surveying over 4,500 adults in the U.S., Pew found that most people actually don’t know much about bots. Two-thirds of Americans have at least heard of social media bots, but only 16 percent say they’ve heard a lot about them, while 34 percent say they’ve never heard of them at all. The knowledgeable tend to be younger, and men are more likely than women (by 22 percentage points) to say they’ve heard of bots. Since the survey results are self-reported, there’s a chance people are overstating or understating their knowledge of bots. Of those who have heard of bots, 80 percent say the accounts are used for bad purposes.

Whether Republican or Democrat, young or old, most respondents think that bots are bad. And the more a person knows about social media bots, the less supportive they are of bots being used for various purposes, such as activists drawing attention to a topic or a political party promoting its candidates.


Facebook Is Giving Advertisers Access To Your Shadow Contact Information

Kashmir Hill, reporting for Gizmodo:

Last week, I ran an ad on Facebook targeted at a computer science professor named Alan Mislove. Mislove studies how privacy works on social networks and had a theory that Facebook is letting advertisers reach users with contact information collected in surprising ways. I was helping him test the theory by targeting him in a way Facebook had previously told me wouldn’t work. I directed the ad to display to a Facebook account connected to the landline number for Alan Mislove’s office, a number Mislove has never provided to Facebook. He saw the ad within hours.

One of the many ways that ads get in front of your eyeballs on Facebook and Instagram is that the social networking giant lets an advertiser upload a list of phone numbers or email addresses it has on file; it will then put an ad in front of accounts associated with that contact information. A clothing retailer can put an ad for a dress in the Instagram feeds of women who have purchased from them before, a politician can place Facebook ads in front of anyone on his mailing list, or a casino can offer deals to the email addresses of people suspected of having a gambling addiction. Facebook calls this a “custom audience.” You might assume that you could go to your Facebook profile and look at your “contact and basic info” page to see what email addresses and phone numbers are associated with your account, and thus what advertisers can use to target you. But as is so often the case with this highly efficient data-miner posing as a way to keep in contact with your friends, it’s going about it in a less transparent and more invasive way.

… Giridhari Venkatadri, Piotr Sapiezynski, and Alan Mislove of Northeastern University, along with Elena Lucherini of Princeton University, did a series of tests that involved handing contact information over to Facebook for a group of test accounts in different ways and then seeing whether that information could be used by an advertiser. They came up with a novel way to detect whether that information became available to advertisers by looking at the stats provided by Facebook about the size of an audience after contact information is uploaded. They go into this in greater length and technical detail in their paper [PDF]. They found that when a user gives Facebook a phone number for two-factor authentication or in order to receive alerts about new log-ins to a user’s account, that phone number became targetable by an advertiser within a couple of weeks.
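The core of their detection trick can be shown schematically. The snippet below is a simplification with invented numbers, not the authors' statistical procedure and not a real ad-platform API: upload a control list of contacts whose match status is already known, upload the same list plus one probe record (such as the office landline), and compare the audience-size estimates the platform reports back.

```python
# Schematic only: infer whether a probe contact is targetable by comparing the
# platform's reported audience-size estimates for two uploaded lists.
def probe_is_targetable(size_with_probe: int, size_baseline: int) -> bool:
    # If adding the single probe record grows the reported audience, the
    # platform evidently matched that record to an account.
    return size_with_probe > size_baseline

baseline_estimate = 1200      # hypothetical estimate for the control list alone
with_probe_estimate = 1201    # hypothetical estimate for control list + probe contact
print(probe_is_targetable(with_probe_estimate, baseline_estimate))  # True
```

In practice the reported sizes are approximate rather than exact, which is why the paper's actual method is considerably more careful than this single comparison.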

Officially, Facebook denies the existence of shadow profiles. In a hearing with the House Energy & Commerce Committee earlier this year, when New Mexico Representative Ben Lujan asked Facebook CEO Mark Zuckerberg if he was aware of the so-called practice of building “shadow profiles”, Zuckerberg denied knowledge of it.


Social Media Manipulation Rising Globally, New Oxford Report Warns

A new report from Oxford University found that manipulation of public opinion over social media platforms is growing on a large scale, despite efforts to combat it. “Around the world, government agencies and political parties are exploiting social media platforms to spread junk news and disinformation, exercise censorship and control, and undermine trust in media, public institutions and science,” the report warns.

“The number of countries where formally organized social media manipulation occurs has greatly increased, from 28 to 48 countries globally,” says Samantha Bradshaw, co-author of the report. “The majority of growth comes from political parties who spread disinformation and junk news around election periods. There are more political parties learning from the strategies deployed during Brexit and the U.S. 2016 Presidential election: more campaigns are using bots, junk news, and disinformation to polarize and manipulate voters.”

This is despite governments in many democracies having introduced new legislation designed to combat fake news on the internet. “The problem with this is that these ‘task forces’ to combat fake news are being used as a new tool to legitimize censorship in authoritarian regimes,” says Professor Phil Howard, co-author and lead researcher on the OII’s Computational Propaganda project. “At best, these types of task forces are creating counter-narratives and building tools for citizen awareness and fact-checking.” Another challenge is the evolution of the mediums individuals use to share news and information. “There is evidence that disinformation campaigns are moving on to chat applications and alternative platforms,” says Bradshaw. “This is becoming increasingly common in the Global South, where large public groups on chat applications are more popular.”


Facebook is not alone in making everyone’s data available for whatever purpose

Most companies that trade in the sale and manipulation of personal information are private and beholden to few rules other than the bare minimum of those they establish themselves, to avoid scrutiny and be able to say “we told you so” if an angry individual ever comes calling. Even if a consumer is aware their data is being passed around, their ability to control it once it’s out there is virtually nil: if they request it be deleted from one data broker, it can simply be bought back from one of several gigantic firms that have been storing it, too.

It is an open question what the actual effect of Cambridge Analytica’s work on the presidential election was, and what the outcome might have been without its influence (most references to its “psychographic” profiling in The New York Times’ story are appropriately skeptical). It would be hard to say without a lot more cooperation from the company and Facebook itself. But the leak by one of its researchers is an incredibly rare glimpse into a fairly routine process in an industry that is so staggeringly enormous and influential, not just in politics but in our personal, day-to-day existence, that it’s difficult to believe that it is anything but a mistake. But it isn’t, and wasn’t, a mistake. It is how things happened and are still happening every day.


Facebook, Google, and Microsoft Use Design to Trick You Into Handing Over Your Data, New Report Warns

A study from the Norwegian Consumer Council dug into the underhanded tactics used by Microsoft, Facebook, and Google to collect user data. “The findings include privacy intrusive default settings, misleading wording, giving users an illusion of control, hiding away privacy-friendly choices, take-it-or-leave-it choices, and choice architectures where choosing the privacy friendly option requires more effort for the users,” states the report, which includes images and examples of confusing design choices and strangely worded statements involving the collection and use of personal data.

Google makes opting out of personalized ads more of a chore than it needs to be and uses multiple pages of text, unclear design language, and, as described by the report, “hidden defaults” to push users toward the company’s desired action. “If the user tried to turn the setting off, a popup window appeared explaining what happens if Ads Personalization is turned off, and asked users to reaffirm their choice,” the report explained. “There was no explanation about the possible benefits of turning off Ads Personalization, or negative sides of leaving it turned on.” Those who wish to completely avoid personalized ads must traverse multiple menus, making that “I agree” option seem like the lesser of two evils.

In Windows 10, if a user wants to opt out of “tailored experiences with diagnostic data,” they have to click a dimmed lightbulb, while the symbol for opting in is a brightly shining bulb, says the report.

Another example has to do with Facebook. The social media site makes the “Agree and continue” option much more appealing and less intimidating than the grey “Manage Data Settings” option. The report says the company-suggested option is the easiest to use. “This ‘easy road’ consisted of four clicks to get through the process, which entailed accepting personalized ads from third parties and the use of face recognition. In contrast, users who wanted to limit data collection and use had to go through 13 clicks.”


How the “Math Men” Overthrew the “Mad Men”

Once, Mad Men ruled advertising. They’ve now been eclipsed by Math Men — the engineers and data scientists whose province is machines, algorithms, pureed data, and artificial intelligence. Yet Math Men are beleaguered, as Mark Zuckerberg demonstrated when he humbled himself before Congress, in April. Math Men’s adoration of data — coupled with their truculence and an arrogant conviction that their ‘science’ is nearly flawless — has aroused government anger, much as Microsoft did two decades ago.

The power of Math Men is awesome. Google and Facebook each have a market value exceeding the combined value of the six largest advertising and marketing holding companies. Together, they claim six out of every ten dollars spent on digital advertising, and nine out of ten new digital ad dollars. They have become increasingly dominant in a global advertising and marketing business estimated to be worth as much as two trillion dollars a year. Facebook alone generates more ad dollars than all of America’s newspapers, and Google has twice the ad revenues of Facebook.


Why the Facebook ‘scandal’ impacts you more than you think

It’s not just the data you choose to share.

By now we all know the story: Facebook allowed apps on its social media platform which enabled a shady outfit called Cambridge Analytica to scrape the profiles of 87 million users, in order to serve up targeted ads to benefit the Trump election campaign in 2016.  More than 300,000 Australian users of Facebook were caught up in the data harvesting.

But serving up ads in a foreign election campaign is not the whole story.  Facebook, and other companies involved in data mining, are invading our privacy and harming us economically and socially, in ways that are only just starting to become clear.

And it’s not just the data you choose to share. The information you post is only the tip of the iceberg of data that Facebook has collected about you.

Every time you go online you leave a trail of digital breadcrumbs.  Facebook has been busily sweeping up those breadcrumbs, and using them to categorise and profile you.  Facebook obviously knows when you click on a Facebook ‘like’ button; but also, unless a web developer has gone out of their way to find tools to block them (as we have done for our Salinger Privacy blog), Facebook knows every time you simply look at a website that has a Facebook ‘like’ button somewhere on it.
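Mechanically, this works because the button itself is served from Facebook’s infrastructure: every page that embeds it causes the visitor’s browser to send Facebook a request carrying Facebook’s cookie (a stable identifier) along with the address of the page being viewed. The sketch below shows the server-side view of that arrangement; it is a conceptual illustration, not Facebook’s actual logging code.

```python
# Conceptual illustration: what the host of an embedded widget can record each
# time a page containing the widget is loaded, using only the cookie ID and
# the Referer header sent automatically by the browser.
from collections import defaultdict

browsing_profiles = defaultdict(list)   # cookie ID -> pages seen carrying the widget

def log_widget_request(cookie_user_id: str, referer_url: str) -> None:
    browsing_profiles[cookie_user_id].append(referer_url)

# Simulated widget requests from one browser across unrelated sites:
log_widget_request("cookie-123", "https://example-news.site/managing-anxiety")
log_widget_request("cookie-123", "https://example-shop.site/shoes")
print(browsing_profiles["cookie-123"])
```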

So if you only post or ‘like’ stories about inspirational mountain climbers and funny cat videos, but also do things online that you don’t share with your family, friends or work colleagues (like looking at stories about abortion or dealing with infidelity, Googling how to manage anxiety or erectile dysfunction, whingeing about your employer in a chatroom, or spending hours reviewing dating profiles, gambling or shopping obsessively for shoes)  — Facebook has you pegged anyway.

Plus, Facebook obtains data from other sources which know about your offline purchases, to build an even richer picture of who you really are.  And of course, Facebook may have access to your address book, your location history, the contents of your private messages, and depending on your brand of phone, possibly even a history of your phone calls and text messages.

All that information is used to draw inferences and assumptions about your preferences, and predict your likely behaviour.  The results are then used to categorise, profile and ultimately target you, in a process usually described as ‘online behavioural advertising’.

It’s not ‘just ads’

The objective of online behavioural advertising is to predict your purchasing interests and drive a purchase decision.  So far, the same as any other advertising.  But online, the implications for us as individuals are much greater.

Facebook’s promise to advertisers is that it can show their ad to exactly who the advertiser wants, and exclude everybody else.

However, by allowing exclusion, the platform also allows discrimination.  Facebook has been caught allowing advertisers to target — and exclude — people on the basis of their ‘ethnic affinity’, amongst other social, demographic, racial and religious characteristics.  So a landlord with an ad for rental housing could prevent people profiled as ‘single mothers’ from ever seeing their ad.  An employer could prevent people identifying as Jewish from seeing a job ad.  A bank could prevent people categorised as African Americans from seeing an ad for a home loan.

Existing patterns of social exclusion, economic inequality and discrimination are further entrenched by micro-targeted advertising, which is hidden from public view and regulatory scrutiny.


Predictive analytics can narrow or alter your life choices

Once we move beyond straight-up advertising and into predictive analytics, the impact on individual autonomy becomes more acute.  Big Data feeds machine learning, which finds patterns in the data, from which new rules (algorithms) are designed.  Algorithms predict how a person will behave, and suggest how they should be treated.
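In code, that pipeline is mundane. The toy below uses invented features and labels rather than any real profiling model, but it shows the shape of the process: behavioural signals go in, a trained model produces a score, and the score quietly determines how the person is treated.

```python
# Toy behavioural-prediction sketch (invented data, not a real profiling model).
from sklearn.linear_model import LogisticRegression

# Each row: [late-night browsing hours, gambling-site visits, payday-loan searches]
X = [[0.5, 0, 0], [4.0, 9, 3], [1.0, 1, 0], [5.0, 12, 5], [0.2, 0, 0], [3.5, 7, 2]]
y = [0, 1, 0, 1, 0, 1]             # label the model learns to predict, e.g. "high risk"

model = LogisticRegression().fit(X, y)

new_user = [[3.0, 6, 1]]           # someone who never sees, or agrees to, this scoring
risk = model.predict_proba(new_user)[0][1]
print(f"predicted risk score: {risk:.2f}")
# The score then drives which offers, prices or ads this person is shown.
```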

Algorithms can lead to price discrimination, like surge pricing based on Uber knowing how much phone battery life you have left.  Or market exclusion, like Woolworths only offering car insurance to customers it has decided are low risk, based on an assessment of the groceries they buy.

Banks have been predicting the risk of a borrower defaulting on a loan for decades, but now algorithms are also used to determine who to hire, predict when a customer is pregnant, and deliver targeted search results to influence how you vote.

Algorithms are also being used to predict which students are at risk of failure, which prisoners are at risk of re-offending, and who is at risk of suicide, and then to launch interventions accordingly.  However, even leaving aside the accuracy of those predictions, interventions are not necessarily well-intentioned.  It was revealed last year that Australian Facebook executives were touting to advertisers their ability to target psychologically vulnerable teenagers.

Automated decision-making diminishes our autonomy, by narrowing or altering our market and life choices, in ways that are not clear to us.  People already in a position of economic or social disadvantage face the additional challenge of trying to disprove or beat an invisible algorithm.

In a predictive and pre-emptive world, empathy, forgiveness, rehabilitation, redemption, individual dignity, autonomy and free will are programmed out of our society.

Fiddling with users’ privacy settings on Facebook won’t fix anything.  If we want our lives to be ruled by human values and individual dignity, instead of by machines fed on questionable data, we need robust, enforced and globally effective privacy laws.

A new European privacy law commences later this month.  Among its obligations, businesses and governments must offer understandable explanations of how their algorithms work, and allow people to seek human review of automated decision-making.  This is a step in the right direction, which Australia, the US and the rest of the world should follow.


YouTube, the Great Radicalizer

At one point during the 2016 presidential election campaign, I watched a bunch of videos of Donald Trump rallies on YouTube. I was writing an article about his appeal to his voter base and wanted to confirm a few quotations.

Soon I noticed something peculiar. YouTube started to recommend and “autoplay” videos for me that featured white supremacist rants, Holocaust denials and other disturbing content.

Since I was not in the habit of watching extreme right-wing fare on YouTube, I was curious whether this was an exclusively right-wing phenomenon. So I created another YouTube account and started watching videos of Hillary Clinton and Bernie Sanders, letting YouTube’s recommender algorithm take me wherever it would.

Before long, I was being directed to videos of a leftish conspiratorial cast, including arguments about the existence of secret government agencies and allegations that the United States government was behind the attacks of Sept. 11. As with the Trump videos, YouTube was recommending content that was more and more extreme than the mainstream political fare I had started with.

Intrigued, I experimented with nonpolitical topics. The same basic pattern emerged. Videos about vegetarianism led to videos about veganism. Videos about jogging led to videos about running ultramarathons.

It seems as if you are never “hard core” enough for YouTube’s recommendation algorithm. It promotes, recommends and disseminates videos in a manner that appears to constantly up the stakes. Given its billion or so users, YouTube may be one of the most powerful radicalizing instruments of the 21st century.

This is not because a cabal of YouTube engineers is plotting to drive the world off a cliff. A more likely explanation has to do with the nexus of artificial intelligence and Google’s business model. (YouTube is owned by Google.) For all its lofty rhetoric, Google is an advertising broker, selling our attention to companies that will pay for it. The longer people stay on YouTube, the more money Google makes.

What keeps people glued to YouTube? Its algorithm seems to have concluded that people are drawn to content that is more extreme than what they started with — or to incendiary content in general.
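That hypothesis is easy to simulate. The toy below uses an invented catalogue and made-up watch times, and has no connection to YouTube's real system; it simply shows that a recommender which greedily maximises expected watch time will ratchet a viewer toward ever more "intense" content whenever such content holds attention even slightly longer.

```python
# Toy simulation: greedy watch-time maximisation drifts toward extremes.
import random

random.seed(0)

# Hypothetical catalogue: (intensity level, expected minutes watched). In this
# toy world, higher-intensity items hold attention a little longer.
catalogue = [(i, 5 + 2 * i + random.random()) for i in range(10)]

def recommend(candidates):
    return max(candidates, key=lambda item: item[1])   # maximise expected watch time

session, current_intensity = [], 0
for _ in range(5):
    # Candidates sit within one notch of the viewer's current position.
    candidates = [c for c in catalogue if abs(c[0] - current_intensity) <= 1]
    chosen = recommend(candidates)
    session.append(chosen[0])
    current_intensity = chosen[0]

print(session)   # the intensity ratchets upward one step at a time
```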

Is this suspicion correct? Good data is hard to come by; Google is loath to share information with independent researchers. But we now have the first inklings of confirmation, thanks in part to a former Google engineer named Guillaume Chaslot.

Mr. Chaslot worked on the recommender algorithm while at YouTube. He grew alarmed at the tactics used to increase the time people spent on the site. Google fired him in 2013, citing his job performance. He maintains the real reason was that he pushed too hard for changes in how the company handles such issues.

The Wall Street Journal conducted an investigation of YouTube content with the help of Mr. Chaslot. It found that YouTube often “fed far-right or far-left videos to users who watched relatively mainstream news sources,” and that such extremist tendencies were evident with a wide variety of material. If you searched for information on the flu vaccine, you were recommended anti-vaccination conspiracy videos.

It is also possible that YouTube’s recommender algorithm has a bias toward inflammatory content. In the run-up to the 2016 election, Mr. Chaslot created a program to keep track of YouTube’s most recommended videos as well as its patterns of recommendations. He discovered that whether you started with a pro-Clinton or pro-Trump video on YouTube, you were many times more likely to end up with a pro-Trump video recommended.

Combine this finding with other research showing that during the 2016 campaign, fake news, which tends toward the outrageous, included much more pro-Trump than pro-Clinton content, and YouTube’s tendency toward the incendiary seems evident.

YouTube has recently come under fire for recommending videos promoting the conspiracy theory that the outspoken survivors of the school shooting in Parkland, Fla., are “crisis actors” masquerading as victims. Jonathan Albright, a researcher at Columbia, recently “seeded” a YouTube account with a search for “crisis actor” and found that following the “up next” recommendations led to a network of some 9,000 videos promoting that and related conspiracy theories, including the claim that the 2012 school shooting in Newtown, Conn., was a hoax.
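Both Mr. Chaslot's tracking program and Mr. Albright's "seeding" experiment come down to the same measurement idea: start from seed videos, repeatedly follow the recommendations, and count where you end up. The sketch below illustrates that kind of crawl with a hypothetical get_up_next() stub and a made-up toy graph; it is not either researcher's actual code.

```python
# Illustrative recommendation-graph crawl (toy data, hypothetical stub).
from collections import Counter, deque

def get_up_next(video_id: str) -> list:
    """Stand-in for scraping the 'up next' recommendations shown beside a video."""
    toy_graph = {
        "seed-news-a":   ["debate-clip", "conspiracy-1"],
        "seed-news-b":   ["debate-clip", "conspiracy-1"],
        "debate-clip":   ["conspiracy-1", "conspiracy-2"],
        "conspiracy-1":  ["conspiracy-2"],
        "conspiracy-2":  ["conspiracy-1"],
    }
    return toy_graph.get(video_id, [])

def crawl(seeds, max_depth=3):
    counts = Counter()
    queue, seen = deque((s, 0) for s in seeds), set(seeds)
    while queue:
        video, depth = queue.popleft()
        counts[video] += 1
        if depth < max_depth:
            for nxt in get_up_next(video):
                counts[nxt] += 1                    # count every recommendation shown
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, depth + 1))
    return counts

print(crawl(["seed-news-a", "seed-news-b"]).most_common(3))
# In this toy graph, "conspiracy-1" ends up as the most-recommended video.
```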

What we are witnessing is the computational exploitation of a natural human desire: to look “behind the curtain,” to dig deeper into something that engages us. As we click and click, we are carried along by the exciting sensation of uncovering more secrets and deeper truths. YouTube leads viewers down a rabbit hole of extremism, while Google racks up the ad sales.

Human beings have many natural tendencies that need to be vigilantly monitored in the context of modern life. For example, our craving for fat, salt and sugar, which served us well when food was scarce, can lead us astray in an environment in which fat, salt and sugar are all too plentiful and heavily marketed to us. So too our natural curiosity about the unknown can lead us astray on a website that leads us too much in the direction of lies, hoaxes and misinformation.

In effect, YouTube has created a restaurant that serves us increasingly sugary, fatty foods, loading up our plates as soon as we are finished with the last meal. Over time, our tastes adjust, and we seek even more sugary, fatty foods, which the restaurant dutifully provides. When confronted about this by the health department and concerned citizens, the restaurant managers reply that they are merely serving us what we want.

This situation is especially dangerous given how many people — especially young people — turn to YouTube for information. Google’s cheap and sturdy Chromebook laptops, which now make up more than 50 percent of the pre-college laptop education market in the United States, typically come loaded with ready access to YouTube.

This state of affairs is unacceptable but not inevitable. There is no reason to let a company make so much money while potentially helping to radicalize billions of people, reaping the financial benefits while asking society to bear so many of the costs.


Efforts grow to help students evaluate what they see online

Alarmed by the proliferation of false content online, state lawmakers [in the United States] are pushing schools to put more emphasis on teaching students how to tell fact from fiction.

Lawmakers in several states have introduced or passed bills calling on public school systems to do more to teach media literacy skills that they say are critical to democracy. The effort has been bipartisan but has received little attention despite successful legislation in Washington state, Connecticut, Rhode Island and New Mexico.

Advocates say the K-12 curriculum has not kept pace with rapid changes in technology. Studies show many children spend hours every day online but struggle to comprehend the content that comes at them.

For years, they have pushed schools to incorporate media literacy — including the ability to evaluate and analyze sources of information — into lesson plans in civics, language arts, science and other subjects.


YouTube, YouTubers and You


Facebook Really Wants You to Come Back

The social network is getting aggressive with people who don’t log in often, working to keep up its engagement numbers.

It’s been about a year since Rishi Gorantala deleted the Facebook app from his phone, and the company has only gotten more aggressive in its emails to win him back. The social network started out by alerting him every few days about friends that had posted photos or made comments—each time inviting him to click a link and view the activity on Facebook. He rarely did.

Then, about once a week in September, he started to get prompts from a Facebook security customer-service address. “It looks like you’re having trouble logging into Facebook,” the emails would say. “Just click the button below and we’ll log you in. If you weren’t trying to log in, let us know.” He wasn’t trying. But he doesn’t think anybody else was, either.

“The content of mail they send is essentially trying to trick you,” said Gorantala, 35, who lives in Chile. “Like someone tried to access my account so I should go and log in.”


Facebook should be ‘regulated like cigarette industry’, says tech CEO

Facebook should be regulated like a cigarette company, because of the addictive and harmful properties of social media, according to Salesforce chief executive Marc Benioff.

Last week, venture capitalist Roger McNamee – an early investor in Facebook – wrote a Guardian column warning that the company would have to “address the harm the platform has caused through addiction and exploitation by bad actors”.

“I was once Mark Zuckerberg’s mentor, but I have not been able to speak to him about this. Unfortunately, all the internet platforms are deflecting criticism and leaving their users in peril,” McNamee wrote.

Earlier, Sean Parker, Facebook’s first President, had described the business practice of social media firms as “a social-validation feedback loop … exactly the kind of thing that a hacker like myself would come up with, because you’re exploiting a vulnerability in human psychology”. Parker now describes himself as “something of a conscientious objector” to social media.

As part of its attempt to win back control of the narrative, Facebook has announced it will begin taking into account how trusted a publisher is as part of its News Feed algorithm. The company’s metric for determining trust, however, is a simple two-question survey, prompting some to question how effective it will be.


An AI-Powered App Has Resulted in an Explosion of Convincing Face-Swap Porn

In December, Motherboard discovered a Redditor named ‘deepfakes’ quietly enjoying his hobby: swapping celebrities’ faces onto porn performers’ bodies. He made several convincing porn videos of celebrities — including Gal Gadot, Maisie Williams, and Taylor Swift — using a machine learning algorithm, his home computer, publicly available videos, and some spare time. Since we first wrote about deepfakes, the practice of producing AI-assisted fake porn has exploded. More people are creating fake celebrity porn using machine learning, and the results have become increasingly convincing. A Redditor even created an app specifically designed to allow users without a computer science background to create AI-assisted fake porn. All the tools one needs to make these videos are free, readily available, and accompanied by instructions that walk novices through the process.
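The technique most commonly described behind these videos, and this is an assumption about the general approach rather than the Redditor's actual code, is an autoencoder with a shared encoder and one decoder per identity: the encoder learns a generic face representation from footage of both people, and the swap is produced by decoding one person's frames with the other person's decoder. A minimal PyTorch sketch of that structure:

```python
# Minimal sketch of the shared-encoder / two-decoder face-swap idea
# (toy tensors instead of real face crops; an assumption about the approach).
import torch
import torch.nn as nn

def mlp(in_dim: int, out_dim: int) -> nn.Module:
    return nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU(), nn.Linear(512, out_dim))

image_dim, latent_dim = 64 * 64 * 3, 128     # flattened toy face crops

encoder   = mlp(image_dim, latent_dim)       # shared between both identities
decoder_a = mlp(latent_dim, image_dim)       # learns to reconstruct identity A
decoder_b = mlp(latent_dim, image_dim)       # learns to reconstruct identity B

params = list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters())
opt, loss_fn = torch.optim.Adam(params, lr=1e-4), nn.MSELoss()

faces_a = torch.rand(8, image_dim)           # stand-ins for aligned face crops of A
faces_b = torch.rand(8, image_dim)           # stand-ins for aligned face crops of B

for _ in range(10):                          # real training runs far longer
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) \
         + loss_fn(decoder_b(encoder(faces_b)), faces_b)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The "swap": encode a frame of identity A, then decode it as identity B.
fake_b = decoder_b(encoder(faces_a[:1]))
```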

An incredibly easy-to-use application for DIY fake videos—of sex and revenge porn, but also political speeches and whatever else you want—that moves and improves at this pace could have society-changing impacts in the ways we consume media. The combination of powerful, open-source neural network research, our rapidly eroding ability to discern truth from fake news, and the way we spread news through social media has set us up for serious consequences.


How Facebook’s Political Unit Enables the Dark Art of Digital Propaganda

Under fire for Facebook Inc.’s role as a platform for political propaganda, co-founder Mark Zuckerberg has punched back, saying his mission is above partisanship. “We hope to give all people a voice and create a platform for all ideas,” Zuckerberg wrote in September after President Donald Trump accused Facebook of bias. Zuckerberg’s social network is a politically agnostic tool for its more than 2 billion users, he has said. But Facebook, it turns out, is no bystander in global politics. What he hasn’t said is that his company actively works with political parties and leaders, including those who use the platform to stifle opposition — sometimes with the aid of “troll armies” that spread misinformation and extremist ideologies.

The initiative is run by a little-known Facebook global government and politics team that’s neutral in that it works with nearly anyone seeking or securing power. The unit is led from Washington by Katie Harbath, a former Republican digital strategist who worked on former New York Mayor Rudy Giuliani’s 2008 presidential campaign. Since Facebook hired Harbath three years later, her team has traveled the globe helping political clients use the company’s powerful digital tools. In some of the world’s biggest democracies — from India and Brazil to Germany and the U.K. — the unit’s employees have become de facto campaign workers. And once a candidate is elected, the company in some instances goes on to train government employees or provide technical assistance for live streams at official state events.


“Are you happy now? The uncertain future of emotion analytics”

Elise Thomas writes at Hopes & Fears:

“Right now, in a handful of computing labs scattered across the world, new software is being developed which has the potential to completely change our relationship with technology. Affective computing is about creating technology which recognizes and responds to your emotions. Using webcams, microphones or biometric sensors, the software uses a person’s physical reactions to analyze their emotional state, generating data which can then be used to monitor, mimic or manipulate that person’s emotions.”

Corporations spend billions each year trying to build “authentic” emotional connections to their target audiences. Marketing research is one of the most prolific research fields around, conducting thousands of studies on how to more effectively manipulate consumers’ decision-making. Advertisers are extremely interested in affective computing and particularly in a branch known as emotion analytics, which offers unprecedented real-time access to consumers’ emotional reactions and the ability to program alternative responses depending on how the content is being received.

For example, if two people watch an advertisement with a joke and only one person laughs, the software can be programmed to show more of the same kind of advertising to the person who laughs while trying different sorts of advertising on the person who did not laugh to see if it’s more effective. In essence, affective computing could enable advertisers to create individually-tailored advertising en masse.
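At its core, the adaptive loop described here is just feedback-driven content selection. The sketch below uses an invented detect_reaction() stand-in rather than any vendor's emotion-analytics API; it shows how quickly such a loop converges on whatever provokes a reaction from a particular viewer.

```python
# Toy feedback loop: keep showing whatever the detected reaction rewards.
import random

ad_variants = ["humorous", "sentimental", "fear-of-missing-out", "celebrity"]
scores = {variant: 0 for variant in ad_variants}

def detect_reaction(variant: str) -> bool:
    """Stand-in for a webcam-based emotion classifier; simulates a viewer who
    responds to humour and to little else."""
    return variant == "humorous" and random.random() < 0.8

for _ in range(20):
    # Mostly repeat whatever has worked best so far; occasionally try something new.
    if random.random() < 0.2 or max(scores.values()) == 0:
        shown = random.choice(ad_variants)
    else:
        shown = max(scores, key=scores.get)
    if detect_reaction(shown):
        scores[shown] += 1

print(scores)   # the loop converges on the variant this viewer reacts to
```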

“Say 15 years from now a particular brand of weight loss supplements obtains a particular girl’s information and locks on. When she scrolls through her Facebook, she sees pictures of rail-thin celebrities, carefully calibrated to capture her attention. When she turns on the TV, it automatically starts on an episode of “The Biggest Loser,” tracking her facial expressions to find the optimal moment for a supplement commercial. When she sets her music on shuffle, it “randomly” plays through a selection of the songs which make her sad. This goes on for weeks.

“Now let’s add another layer. This girl is 14, and struggling with depression. She’s being bullied in school. Having become the target of a deliberate and persistent campaign by her technology to undermine her body image and sense of self-worth, she’s at risk of making some drastic choices.”


Adobe is working on ‘Photoshop for audio’ that will let you add words someone never said to recordings

Adobe is working on a new piece of software that would act like a Photoshop for audio, according to Adobe developer Zeyu Jin, who spoke at the Adobe MAX conference in San Diego, California today. The software is codenamed Project VoCo, and it’s not clear at this time when it will materialize as a commercial product.

Like Photoshop, Project VoCo is designed to be a state-of-the-art audio editing application. The standout feature, however, is the ability to add words not originally found in the audio file. Beyond your standard speech editing and noise cancellation features, Project VoCo can apparently generate new words using a speaker’s recorded voice. Essentially, the software can understand the makeup of a person’s voice and replicate it, so long as there’s about 20 minutes of recorded speech.

In Jin’s demo, the developer showcased how Project VoCo let him add a word to a sentence in a near-perfect replication of the speaker, according to Creative Bloq. Just as Photoshop ushered in a new era of image editing and creation, this tool could transform how audio engineers work with sound, polish clips, and clean up recordings and podcasts.

“When recording voiceovers, dialog, and narration, people would often like to change or insert a word or a few words due to either a mistake they made or simply because they would like to change part of the narrative,” reads an official Adobe statement. “We have developed a technology called Project VoCo in which you can simply type in the word or words that you would like to change or insert into the voiceover. The algorithm does the rest and makes it sound like the original speaker said those words.”

Imagine this technology coupled with video manipulation techniques that already exist as working proofs of concept. One really could produce convincing, entirely fabricated audio and video of a person’s likeness…
