Resources

‘Deepfakes’ of Celebrities Have Begun Appearing in Ads, With or Without Their Permission

Celebrity deepfakes are coming to advertising. Among the recent entries: Last year, Russian telecommunications company MegaFon released a commercial in which a simulacrum of Hollywood legend Bruce Willis helps defuse a bomb. Just last week, Elon Musk seemed to star in a marketing video from real-estate investment startup reAlpha Tech. And last month a promotional video for machine-learning firm Paperspace showed talking semblances of the actors Tom Cruise and Leonardo DiCaprio. None of these celebrities spent a moment filming these campaigns, and in the cases of Messrs. Musk, Cruise and DiCaprio, they never even agreed to endorse the companies in question. All of the videos were created with so-called deepfake technology, which uses computer-generated renditions to make Hollywood and business notables say and do things they never actually said or did.

Some of the ads are broad parodies, and the meshing of the digital with the analog in the best of cases might not fool an alert viewer. Even so, the growing adoption of deepfake software could eventually reshape the industry in profound ways while raising new legal and ethical questions, experts said. Authorized deepfakes could allow marketers to feature huge stars in ads without requiring them to actually appear on set or before cameras, bringing down costs and opening new creative possibilities. But unauthorized, they create a legal gray area: Celebrities could struggle to contain a proliferation of digital reproductions of themselves and the manipulation of their brand and reputation, experts said.

Who owns the rights to your face?

When we create our social media accounts, we agree to grant those platforms a free license to use our content as they wish.

It was my face. Specifically, it was me in a sponsored Instagram Story ad, putting on a lip balm. In the video, I applied the balm and smiled at the camera, looking pleased with my newly moisturized lips. In real life, I was confused. I had never agreed to appear in a nationwide social campaign; otherwise my checking account would have a couple more zeroes to show for it. I worked in the media industry then, sourcing the right influencers to participate in sponsored articles. I’ve spent years casting talent, negotiating contracts to ensure fair compensation and modest usage rights for influencers, models, and real people. Based on my experience, it was clear that my image was being exploited by a multibillion-dollar brand.

… companies like HelloFresh and Canon are now prioritizing the niche audiences of micro- and nano-creators. Research shows that shoppers find smaller creators “more authentic” and brands have identified those creators as “less costly,” making regular people a win-win for boosting sales.

10-year-old boy lives life as a YouTube advertising sensation

There’s no one way to describe what Ryan Kaji, who is now 10 years old, has done across his multiple YouTube channels, cable television shows and live appearances: In one video, he is giving you a tour of the Legoland Hotel; in another, he splashes around in his pool to introduce a science video about tsunamis. But for years, what he has mostly done is play with toys: Thomas the Tank Engine, “Paw Patrol” figures, McDonald’s play kitchens. A new toy and a new video for almost every day of the week, adding up to an avalanche of content that can overwhelm your child’s brain, click after click. Kaji has been playing with toys on camera since Barack Obama was in the White House.

Here are a few of the companies that are now paying him handsomely for his services: Amazon, Walmart, Nickelodeon, Skechers. Ryan also has 10 separate YouTube channels, which together make up “Ryan’s World” [31.2M subscribers], a content behemoth whose branded merchandise took in more than $250 million last year. Even conservative estimates suggest that the Kaji family’s take exceeds $25 million annually.

AI influencers are taking over

YouTube Stars Were Offered Money to Spread Vaccine Misinformation

“A mysterious marketing agency secretly offered to pay social media stars to spread disinformation about Covid-19 vaccines,” reports the BBC.

“Their plan failed when the influencers went public about the attempt to recruit them.”
An influencer marketing agency called Fazze offered to pay [Mirko Drotschmann, a German YouTuber and journalist] to promote what it said was leaked information suggesting the death rate among people who had the Pfizer vaccine was almost three times that of the AstraZeneca jab. The information provided wasn’t true. It quickly became apparent to Mirko that he was being asked to spread disinformation to undermine public confidence in vaccines in the middle of a pandemic. “I was shocked,” says Mirko, “then I was curious: what’s behind all that?” In France, science YouTuber Léo Grasset received a similar offer. The agency offered him 2,000 euros if he would take part.

Fazze said it was acting for a client who wished to remain anonymous…

Both Léo and Mirko were appalled by the false claims. They pretended to be interested in order to find out more and were provided with detailed instructions about what to say in their videos. In stilted English, the brief instructed them to “Act like you have the passion and interest in this topic.” It told them not to mention that the video had a sponsor and instead to pretend they were spontaneously giving advice out of concern for their viewers… Since Léo and Mirko blew the whistle, at least four other influencers in France and Germany have gone public to reveal that they also rejected Fazze’s attempts to recruit them.

But German journalist Daniel Laufer has identified two influencers who may have taken up the offer.

But who’s behind the mysterious influencer marketing agency?
Fazze is part of AdNow, a digital marketing company registered in both Russia and the UK. The BBC made multiple attempts to contact AdNow by phone, email and even a letter couriered to its Moscow headquarters, but it has not responded. Eventually we managed to contact Ewan Tolladay, one of two directors of the British arm of AdNow, who lives in Durham. Mr. Tolladay said he had very little to do with Fazze, which he said was a joint venture between his fellow director, a Russian man called Stanislav Fesenko, and another person whose identity he didn’t know… Both the French and German authorities have launched investigations into Fazze’s approaches to influencers. But the identity of the agency’s mystery client remains unclear.

There has been speculation about the Russian connections to this scandal and the interests of the Russian state in promoting its own vaccine — Sputnik V.

French YouTuber Léo Grasset believes we’ll see more attempts to manipulate public opinion, especially among young people, apparently because it’s incredibly easy.

“Just spend the same money on TikTok creators, YouTube creators,” he tells the BBC. “The whole ecosystem is perfectly built for maximum efficiency of disinformation right now.”

TikTok sued for billions over use of children’s data

Lawyers will allege that TikTok takes children’s personal information, including phone numbers, videos, exact location and biometric data, without sufficient warning, transparency or the necessary consent required by law, and without children or parents knowing what is being done with that information. TikTok has more than 800 million users worldwide and parent firm ByteDance made billions in profits last year, with the vast majority of that coming via advertising revenue.

TikTok: Rampant product placement

In the world of TikTok, brands have the opportunity to get products out into the real world – or make stories of them already being out there. The platform turns placement into consumption as consumers participate – or play – with the products. Product placement on the platform could come from just giving products out to creators, or partnering with them, as is done on other platforms. However, it could also come from amplifying organic content or trends that are already happening with a brand’s products … Viewers are the stars. When it comes to distinguishing between viewers and audiences on TikTok, just as with content and ads, the lines are blurred. In fact, many TikTok users are also creators. For these creators, the feed is their stage, and this is where the opportunity for sponsorship and placement lies for brands.

Google Search and Dark Patterns

Previously, the search engine had marked paid results with the word “Ad” in a green box, tucked beneath the headline next to a matching green display URL. Now, all of a sudden, the “Ad” and the URL shifted above the headline, and both were rendered in discreet black; the box disappeared. The organic search results underwent a similar makeover, only with a new favicon next to the URL instead of the word “Ad.” The result was a general smoothing: Ads looked like not-ads. Not-ads looked like ads. This was not Google’s first time fiddling with the search results interface. In fact, it had done so quite regularly over the last 13 years, as handily laid out in a timeline from the news site Search Engine Land. Each iteration whittled away the distinction between paid and unpaid content that much more. Most changes went relatively unnoticed, internet residents accepting the creep like the apocryphal frog in a slowly boiling pot.

But in January, amid rising antitrust drumbeats and general exhaustion with Big Tech, people noticed. Interface designers, marketers, and Google users alike decried the change, saying it made paid results practically indistinguishable from those that Google’s search algorithm served up organically. The phrase that came up most often: “dark pattern,” a blanket term coined by UX specialist Harry Brignull to describe manipulative design elements that benefit companies over their users. That a small design tweak could inspire so much backlash speaks to the profound influence Google and other ubiquitous platforms have — and the responsibility that status confers on them. “Google and Facebook shape realities,” says Kat Zhou, a product designer who has created a framework and toolkit to help promote ethical design. “Students and professors turn to Google for their research. Folks turn to Facebook for political news. Communities turn to Google for Covid-19 updates. In some sense, Google and Facebook have become arbiters of the truth. That’s particularly scary when you factor in their business models, which often incentivize blurring the line between news and advertisements.”

Google’s not the only search engine to blur this line. If anything, Bing is even more opaque, sneaking the “Ad” disclosure under the header, with only a faint outline to draw attention. […] But Google has around 92 percent of global search market share. It effectively is online search. Dark patterns are all too common online in general, and January wasn’t the first time people accused Google of deploying them. In June of 2018, a blistering report from the Norwegian Consumer Council found that Google and Facebook both used specific interface choices to strip away user privacy at almost every turn. The study details how both platforms implemented the least privacy-friendly options by default, consistently “nudged” users toward giving away more of their data, and more. It paints a portrait of a system designed to befuddle users into complacency. […] That confusion reached its apex a few months later, when an Associated Press investigation found that disabling Location History on your smartphone did not, in fact, stop Google from collecting your location in all instances.

Ad Firms Are Exploring Deepfaked Commercials

“With the pandemic having shut down production, companies are asking ad agencies to create commercials made up of digitally altered footage,” reports the New York Times, citing a State Farm commercial, starring “SportsCenter” anchor Kenny Mayne, that aired during an ESPN documentary:

The producers made the commercial by layering video of Mr. Mayne’s 60-year-old mouth onto footage of his 38-year-old face. To many viewers, the stunt provided a welcome moment of levity in depressing times. Others were made uneasy by the smoothness of the patch, describing it as a type of deepfake. “We tried to make the joke clear enough so that we weren’t tricking anyone,” said Carrie Brzezinski-Hsu, the head of ESPN CreativeWorks, which created the commercial with the ad agencies Optimum Sports and Translation.

Ms. Brzezinski-Hsu said manipulated footage was likely to appear in future ESPN ads. And executives at several major advertising agencies said they had discussed making similar commercials with their clients in recent weeks. “We’re so restricted in how we can generate content,” said Kerry Hill, the production director for the ad agency FCB in North America. “Anything that can be computer generated is something we’re going to explore.”

Husani Oakley, the chief technology officer of the ad firm Deutsch, said digitally altered ads should somehow clue viewers into the fact that what they are seeing is not completely real. “The technology is here, and it’s only going to get better and better, and we have to get used to it,” he added. “We’re exploring ways to have fun with it.”

Why Don’t We Just Ban Targeted Advertising?

Google and Facebook, including their subsidiaries like Instagram and YouTube, make about 83 percent and 99 percent of their respective revenue from one thing: selling ads. It’s the same story with Twitter and other free sites and apps. More to the point, these companies are in the business of what’s called behavioral advertising, which allows companies to aim their marketing based on everything from users’ sexual orientations to their moods and menstrual cycles, as revealed by everything they do on their devices and every place they take them. It follows that most of the unsavory things the platforms do—boost inflammatory content, track our whereabouts, enable election manipulation, crush the news industry—stem from the goal of boosting ad revenues. Instead of trying to clean up all these messes one by one, the logic goes, why not just remove the underlying financial incentive? Targeting ads based on individual user data didn’t even really exist until the past decade. (Indeed, Google still makes many billions of dollars from ads tied to search terms, which aren’t user-specific.) What if companies simply weren’t allowed to do it anymore?

Let’s pretend it really happened. Imagine Congress passed a law tomorrow morning that banned companies from doing any ad microtargeting whatsoever. Close your eyes and picture what life would be like if the leading business model of the internet were banished from existence. How would things be different?

Many of the changes would be subtle. You could buy a pair of shoes on Amazon without Reebok ads following you for months. Perhaps you’d see some listings that you didn’t see before, for jobs or real estate. That’s especially likely if you’re African-American, or a woman, or a member of another disadvantaged group. You might come to understand that microtargeting had supercharged advertisers’ ability to discriminate, even when they weren’t trying to.

Leaked Documents Show Facebook Leveraged User Data To Help Friends and Fight Rivals

A cache of leaked Facebook documents shows how the company’s CEO, Mark Zuckerberg, oversaw plans to consolidate the social network’s power and control competitors by treating its users’ data as a bargaining chip. This trove comprises approximately 7,000 pages in total, of which about 4,000 are internal Facebook communications such as emails, web chats, notes, presentations and spreadsheets, primarily from 2011 to 2015. About 1,200 pages are marked as “highly confidential.” Taken together, they show how Zuckerberg, along with his board and management team, found ways to tap Facebook users’ data — including information about friends, relationships and photos — as leverage over the companies it partnered with. In some cases, Facebook would reward partners by giving them preferential access to certain types of user data while denying the same access to rival companies.

For example, Facebook gave Amazon special access to user data because it was spending money on Facebook advertising. In another case the messaging app MessageMe was cut off from access to data because it had grown too popular and could compete with Facebook. All the while, Facebook planned to publicly frame these moves as a way to protect user privacy, the documents show. State and federal authorities are now closely scrutinizing Facebook’s business practices. In October, New York Attorney General Letitia James announced that 47 attorneys general from states and U.S. territories plan to take part in a New York-led antitrust probe into Facebook. Over the summer, the House Judiciary Committee held hearings over antitrust concerns in Silicon Valley while the Federal Trade Commission also continues to examine the firm’s practices.

Mozilla Hits Google, Facebook For ‘Microtargeting’ Political Ads

Microtargeting, a method which uses consumer data and demographics to narrowly segment audiences, is used by political campaigns to specialize ads for different voting groups. The practice’s critics include Federal Election Commission Chairwoman Ellen Weintraub, who wrote in a Washington Post op-ed that microtargeting makes it “easy to single out susceptible groups and direct political misinformation to them with little accountability, because the public at large never sees the ad.” Mozilla’s call follows reports that Facebook has considered restricting politicians’ access to microtargeting.

I worked on political ads at Facebook. They profit by manipulating us.

Yael Eisenstat, a visiting fellow at Cornell Tech in the Digital Life Initiative and a former elections integrity head at Facebook, CIA officer, and White House adviser, writes for the Washington Post:

As the company continues to struggle with how to handle political content and as another presidential election approaches, it’s clear that tinkering around the margins of advertising policies won’t fix the most serious issues. The real problem is that Facebook profits partly by amplifying lies and selling dangerous targeting tools that allow political operatives to engage in a new level of information warfare. Its business model exploits our data to let advertisers custom-target people, show us each a different version of the truth and manipulate us with hyper-customized ads — ads that, as of two weeks ago, can contain blatantly false and debunked information if they’re run by a political campaign. As long as Facebook prioritizes profit over healthy discourse, they can’t avoid damaging democracies.

Early in my time there, I dug into the question of misinformation in political advertising. Posting in a “tribe” (Facebook’s internal collaboration platform), I asked our teams working on political advertising whether we should incorporate the same tools for political ads that other integrity teams at Facebook were developing to address misinformation in pages and organic posts. It was unclear to me why the company was applying different, siloed policies and tools across the platform. Most users do not differentiate organic content from ads — as I clearly saw on a trip to India, where we were testing our ads-integrity products — so why were we expecting users to understand that we applied different standards to different forms of content that all just appear in their news feeds?

The fact that we were taking money for political ads and allowing campaigns and other political organizations to target users based on the vast amounts of data we had gathered meant political ads should have an even higher bar for integrity than what people were posting in organic content. We verified advertisers to run political ads, giving them a check mark and a “paid for by” label, and I questioned whether that gave the false impression that we were vouching for the validity of the content, boosting its perceived credibility even though we weren’t checking any facts or ensuring that ads weren’t spreading false information. Most of my colleagues agreed. People wanted to get this right. But above me, there was no appetite for my pushing, and I was accused of “creating confusion.”

Children ‘Interested in’ Gambling and Alcohol, According To Facebook

The social network’s advertising tools reveal 740,000 children under the age of 18 are flagged as being interested in gambling, including 130,000 in the UK. Some 940,000 minors — 150,000 of whom are British — are flagged as being interested in alcoholic beverages. These “interests” are automatically generated by Facebook, based on what it has learned about a user by monitoring their activity on the social network. Advertisers can then use them to specifically target messages to subgroups who have been flagged as interested in the topic. In a statement, Facebook said: “We don’t allow ads that promote the sale of alcohol or gambling to minors on Facebook and we enforce against this activity when we find it. We also work closely with regulators to provide guidance for marketers to help them reach their audiences effectively and responsibly.” The company does allow advertisers to specifically target messages to children based on their interest in alcohol or gambling. A Facebook insider gave the example of an anti-gambling service that may want to reach out to children who potentially have a gambling problem and offer them help and support.

Twitter Took Phone Numbers for Security and Used Them for Advertising

When some users provided Twitter with their phone numbers to make their accounts more secure, the company used this information for advertising purposes, Twitter said today.

This isn’t the first time that a large social network has taken information explicitly meant for the purposes of security, and then quietly or accidentally used it for something else entirely. Facebook did something similar with phone numbers provided by users for two-factor authentication, the company confirmed last year. “We recently discovered that when you provided an email address or phone number for safety or security purposes (for example, two-factor authentication) this data may have inadvertently been used for advertising purposes, specifically in our Tailored Audiences and Partner Audiences advertising system,” Twitter’s announcement reads. In short, when an advertiser using Twitter uploaded their own marketing list of email addresses or phone numbers, Twitter may have matched the list to people on Twitter “based on the email or phone number the Twitter account holder provided for safety and security purposes,” the post adds.
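The list-matching mechanism described here (Twitter’s Tailored Audiences, like Facebook’s Custom Audiences) generally works by normalizing contact details on both sides, hashing them, and intersecting the hashes. A minimal Python sketch of that idea follows; the normalization rules, function names, and data layout are illustrative assumptions, not Twitter’s actual implementation:

```python
import hashlib

def normalize_email(email):
    # Illustrative normalization: trim whitespace, lowercase.
    return email.strip().lower()

def hash_contact(value):
    # Both sides hash with SHA-256 so raw contact details need not
    # be exchanged in the clear; only matching hashes line up.
    return hashlib.sha256(value.encode("utf-8")).hexdigest()

def match_audience(advertiser_list, platform_records):
    # platform_records maps hashed contact -> user id, built from
    # whatever contact info users gave the platform -- in the case
    # described above, including security phone numbers and emails.
    uploaded = {hash_contact(normalize_email(e)) for e in advertiser_list}
    return [uid for h, uid in platform_records.items() if h in uploaded]

platform = {
    hash_contact(normalize_email("alice@example.com")): "user_1",
    hash_contact(normalize_email("bob@example.com")): "user_2",
}
print(match_audience(["Alice@Example.com "], platform))  # prints ['user_1']
```

The privacy failure described above is not in the matching step itself but in the provenance of `platform_records`: contact details collected for two-factor authentication ended up in the pool being matched against.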

Can Google Ads Change Your Beliefs?

There are advertisers in the digital marketing industry who want to find out how effective this new form of social engineering is. One of those advertisers is Google. The Redirect Method was a Google-incubated project that used redirect ads to deradicalize would-be extremists. In the first eight weeks of 2016, some 320,000 people — all of whom were believed to harbor sympathy toward the Islamic State — clicked on ads designed to reflect an interest in extremist content. Instead of arriving at a page that supported their views, Islamic State sympathizers who clicked the ads found themselves directed to a playlist of videos debunking the terror group’s recruitment narratives. Most of the visitors stuck around. Together, they watched more than half a million minutes of video.

After the ISIS campaign ended, Google left behind a blueprint.

The blueprint shows, step by step, how you can create your own redirect ads to sway any belief or opinion — held by any Google user, anywhere in the world — of your choice. You don’t have to be a marketer with years of experience to do this. You just need to follow the instructions and put up a credit card (a few hundred bucks will suffice). Recently, I followed the blueprint and created a redirect campaign of my own… The goal of my first redirect campaign was to sway the ideology of suicidal people.

Nearly one in three apparently suicidal searchers who viewed the author’s ad then dialed his hotline number, which forwarded the calls to the National Suicide Prevention Lifeline. But he expressed surprise that Google “let me run the ads with no issue… I didn’t need qualifications to be a conduit of peoples’ fates.” He later tried creating another campaign for prospective school shooters, “but the conversion rates were low. These prospective shooters were reluctant to speak with someone.”

Yet one study found that more than 50% of Google users couldn’t distinguish between ads and “organic” search results.

Google Chrome ‘Has Become Surveillance Software. It’s Time to Switch’

“You open your browser to look at the Web. Do you know who is looking back at you?” warns Washington Post technology columnist Geoffrey A. Fowler.

Over a recent week of Web surfing, I peered under the hood of Google Chrome and found it brought along a few thousand friends. Shopping, news and even government sites quietly tagged my browser to let ad and data companies ride shotgun while I clicked around the Web. This was made possible by the Web’s biggest snoop of all: Google. Seen from the inside, its Chrome browser looks a lot like surveillance software…

My tests of Chrome vs. Firefox unearthed a personal data caper of absurd proportions. In a week of Web surfing on my desktop, I discovered 11,189 requests for tracker “cookies” that Chrome would have ushered right onto my computer but were automatically blocked by Firefox. These little files are the hooks that data firms, including Google itself, use to follow what websites you visit so they can build profiles of your interests, income and personality… And that’s not the half of it. Look in the upper right corner of your Chrome browser. See a picture or a name in the circle? If so, you’re logged in to the browser, and Google might be tapping into your Web activity to target ads. Don’t recall signing in? I didn’t, either. Chrome recently started doing that automatically when you use Gmail.

Chrome is even sneakier on your phone. If you use Android, Chrome sends Google your location every time you conduct a search. (If you turn off location sharing it still sends your coordinates out, just with less accuracy.)
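The cookie-based tracking the columnist describes rests on one property: every site that embeds a resource from the same tracker domain triggers a request carrying the same tracker cookie, so visits to unrelated sites collapse into one profile. Below is a toy, purely illustrative model of that mechanism (no real networking; the class and field names are assumptions for the sketch):

```python
class Tracker:
    """Toy third-party tracker: one cookie per browser, one profile per cookie."""
    def __init__(self):
        self.next_id = 0
        self.profiles = {}  # cookie id -> list of sites where it was seen

    def request(self, cookie, site):
        # Simulates the embedded tracker resource being fetched. If the
        # browser holds no cookie for this tracker yet, one is issued
        # (like a Set-Cookie response); then the visit is recorded.
        if cookie is None:
            cookie = self.next_id
            self.next_id += 1
        self.profiles.setdefault(cookie, []).append(site)
        return cookie

class Browser:
    def __init__(self, block_third_party=False):
        self.block_third_party = block_third_party
        self.cookie = None  # per-tracker cookie, simplified to one tracker

    def visit(self, site, tracker):
        if self.block_third_party:
            return  # the tracker never even sees the request
        self.cookie = tracker.request(self.cookie, site)

tracker = Tracker()
browser = Browser()
for site in ["news.example", "shop.example", "gov.example"]:
    browser.visit(site, tracker)
print(tracker.profiles)  # one cookie links all three visits
```

Firefox-style default blocking, mimicked here by the `block_third_party` flag, works not by deleting the profile but by never letting the tracker see the request in the first place.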

The columnist concludes that “having the world’s biggest advertising company make the most popular Web browser was about as smart as letting kids run a candy shop,” and argues that through its DoubleClick and other ad businesses, Google “is the No. 1 cookie maker — the Mrs. Fields of the web.”

Millions of Business Listings On Google Maps Are Fake — and Google Profits

Google’s ubiquitous internet platform shapes what’s real and what isn’t for more than two billion monthly users. Yet Google Maps is overrun with millions of false business addresses and fake names, according to advertisers, search experts and current and former Google employees. The ruse lures the unsuspecting to what appear to be Google-suggested local businesses, a costly and dangerous deception. Once considered a sleepy, low-margin business by the company and known mostly for giving travel directions, Google Maps in recent months has packed more ads onto its search queries. It is central to Google parent Alphabet’s hope to recharge a cresting digital-advertising operation.

Often, Google Maps yields mirages, visible in local business searches of U.S. cities, including Mountain View, Calif., Google’s hometown. Of a dozen addresses for personal-injury attorneys on Google Maps during a recent search, only one office was real. A Viennese patisserie was among the businesses at addresses purported to house lawyers. The fakes vanished after inquiries to Google from The Wall Street Journal. The false listings benefit businesses seeking more customer calls by sprinkling made-up branches in various corners of a city. In other cases, as Ms. Carter discovered, calls to listed phone numbers connect to unscrupulous competitors, a misdirection forbidden by Google rules but sporadically policed by the company. Hundreds of thousands of false listings sprout on Google Maps each month, according to experts. Google says it catches many others before they appear.

Facebook Ad Platform Could Be Inherently Discriminatory, Researchers Say

Researchers from Northeastern University, the University of Southern California, and tech accountability non-profit Upturn have released a paper that says Facebook’s ad delivery system itself can steer ads intended to be inclusive toward discrimination without explicit intent. “In a paper titled, ‘Discrimination through optimization: How Facebook’s ad delivery can lead to skewed outcomes,’ co-authors Muhammad Ali, Piotr Sapiezynski, Miranda Bogen, Aleksandra Korolova, Alan Mislove, and Aaron Rieke find that advertiser budgets and ad content affect ad delivery, skewing it along gender and racial lines even when neutral ad targeting settings are used,” reports The Register. From the report:
The researchers found that Facebook ads tend to be shown to men because women tend to click on ads more often, making them more expensive to reach through Facebook’s system. That divide becomes apparent when ad budgets are compared, because the ad budget affects ad distribution. As the paper explains, “the higher the daily budget, the smaller the fraction of men in the audience.” Such segregation may be appropriate and desirable for certain types of marketing pitches, but when applied to credit, employment and housing ads, the consequences can be problematic.

Ad content — text and images — also has a strong effect on whether ads get shown to men or women, even when the bidding strategy is the same and gender-agnostic targeting is used. In particular, the researchers found images had a surprisingly large effect on ad delivery. Ad URL destination has some effect — an ad pointing to a bodybuilding site and an ad pointing to a cosmetics site had a baseline delivery distribution of 48 percent men and 40 percent men respectively. The addition of a title and headline doesn’t change that much. But once the researchers added an image to the ad, the distribution pattern changed, with the bodybuilding site ad reaching an audience that was 75 percent male and the cosmetics ad reaching an audience that was 90 percent female. According to the researchers, their tests suggest, “Facebook has an automated image classification mechanism in place that is used to steer different ads towards different subsets of the user population.”

“In terms of credit, employment and housing ads, the problem with this system is that it discriminates where it shouldn’t: Five ads for lumber industry jobs were delivered to an audience that was more than 90 percent men and more than 70 percent white; five ads for janitorial work were delivered to an audience that was more than 65 percent women and 75 percent black,” the report adds. “Housing ads also showed a racial skew.”

The latest findings come after years of criticism of Facebook’s ad system. Last month, Facebook announced changes to the platform intended to prevent advertisers from deploying unfair credit, employment and housing ads. One week later, the U.S. Department of Housing and Urban Development sued Facebook for violating the Fair Housing Act.

Two-Thirds of Consumers Don’t Expect Google To Track Them the Way It Does

Last summer, an AP investigation found that Google’s location tracking remains on even if you turn it off in Google Maps, Search, and other apps. Research from Vanderbilt professor Douglas Schmidt found that Google engages in “passive” data collection, often without the user’s knowledge. His research also showed that Google utilizes data collected from other sources to de-anonymize existing user data.

Digital Content Next, the trade association of online publishers, surveyed a nationally representative sample to find out what people expect from Google — and, as with a similar study we conducted last year about Facebook, the results were unsettling.

Our findings show that many of Google’s data practices deviate from consumer expectations. We find it even more significant that consumers’ expectations are at an all-time low even after 2018, a year in which awareness around consumer privacy reached peak heights.

The results of the study are consistent with our Facebook study: People don’t want surveillance advertising. A majority of consumers indicated they don’t expect to be tracked across Google’s services, let alone be tracked across the web in order to make ads more targeted.

There was only one question where a small majority of respondents felt that Google was acting according to their expectations: whether Google merges data from search queries with other data it collects on its own services. By a similarly small majority, respondents also said they don’t expect Google to connect that data back to the user’s personal account. Google began doing both of these things in 2016, after previously promising it wouldn’t.
