Resources

Google Threatened Tech Influencers Unless They ‘Preferred’ Google Phones

The tech review world has been full of murky deals between companies and influencers for years, but it appears Google finally crossed a line with the Pixel 9. The company’s invite-only Team Pixel program — which seeds Pixel products to influencers before public availability — stipulated that participating influencers were not allowed to feature Pixel products alongside competitors, and those who showed a preference for competing phones risked being kicked out of the program. For those hoping to break into the world of tech reviews, the new terms meant having to choose between keeping access or keeping their integrity.

The Verge has independently confirmed screenshots of the clause in this year’s Team Pixel agreement for the new Pixel phones, which various influencers began posting on X and Threads last night. The agreement tells participants they’re “expected to feature the Google Pixel device in place of any competitor mobile devices.” It also notes that “if it appears other brands are being preferred over the Pixel, we will need to cease the relationship between the brand and the creator.” The link to the form appears to have since been shut down.

How the Pentagon Learned To Use Targeted Ads To Find Its Targets

In 2019, a government contractor and technologist named Mike Yeagley began making the rounds in Washington, DC. He had a blunt warning for anyone in the country’s national security establishment who would listen: The US government had a Grindr problem. A popular dating and hookup app, Grindr relied on the GPS capabilities of modern smartphones to connect potential partners in the same city, neighborhood, or even building. The app can show how far away a potential partner is in real time, down to the foot. But to Yeagley, Grindr was something else: one of the tens of thousands of carelessly designed mobile phone apps that leaked massive amounts of data into the opaque world of online advertisers. That data, Yeagley knew, was easily accessible by anyone with a little technical know-how. So Yeagley — a technology consultant then in his late forties who had worked in and around government projects nearly his entire career — made a PowerPoint presentation and went out to demonstrate precisely how that data was a serious national security risk.

As he would explain in a succession of bland government conference rooms, Yeagley was able to access the geolocation data on Grindr users through a hidden but ubiquitous entry point: the digital advertising exchanges that serve up the little digital banner ads along the top of Grindr and nearly every other ad-supported mobile app and website. This was possible because of the way online ad space is sold, through near-instantaneous auctions in a process called real-time bidding. Those auctions were rife with surveillance potential. You know that ad that seems to follow you around the internet? It’s tracking you in more ways than one. In some cases, it’s making your precise location available in near-real time to both advertisers and people like Mike Yeagley, who specialized in obtaining unique data sets for government agencies.
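
To make the mechanism concrete: every participant in a real-time bidding auction receives the bid request, including the device’s advertising ID and GPS coordinates, whether or not it ever buys the ad. Below is a minimal sketch of a harvesting bidder, using OpenRTB-style field names (device.geo, device.ifa, app.bundle); the handler, the log, and the sample data are hypothetical, not any real exchange’s API.

```python
import json
import time

LOCATION_LOG = []  # stand-in for a data broker's storage

def handle_bid_request(raw: bytes) -> dict:
    """Decline to bid, but record the device's advertising ID and location."""
    request = json.loads(raw)
    device = request.get("device", {})
    geo = device.get("geo", {})
    LOCATION_LOG.append({
        "ad_id": device.get("ifa"),                   # resettable advertising ID
        "app": request.get("app", {}).get("bundle"),  # which app exposed the ad slot
        "lat": geo.get("lat"),
        "lon": geo.get("lon"),
        "seen_at": time.time(),
    })
    return {}  # a "no bid" response costs nothing; the data is already captured

# Hypothetical bid request, shaped like an OpenRTB 2.x payload.
sample = json.dumps({
    "app": {"bundle": "com.example.dating"},
    "device": {"ifa": "3f2a-0000", "geo": {"lat": 38.87, "lon": -77.05}},
}).encode()
handle_bid_request(sample)
```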

Working with Grindr data, Yeagley began drawing geofences — creating virtual boundaries in geographical data sets — around buildings belonging to government agencies that do national security work. That allowed Yeagley to see what phones were in certain buildings at certain times, and where they went afterwards. He was looking for phones belonging to Grindr users who spent their daytime hours at government office buildings. If the device spent most workdays at the Pentagon, the FBI headquarters, or the National Geospatial-Intelligence Agency building at Fort Belvoir, for example, there was a good chance its owner worked for one of those agencies. Then he started looking at the movement of those phones through the Grindr data. When they weren’t at their offices, where did they go? A small number of them had lingered at highway rest stops in the DC area at the same time and in proximity to other Grindr users — sometimes during the workday and sometimes while in transit between government facilities. For other Grindr users, he could infer where they lived, see where they traveled, even guess at whom they were dating.
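
The geofencing step itself is simple given such data. Here is a minimal sketch, assuming a circular fence and pings shaped as (ad_id, lat, lon, timestamp); the coordinates roughly mark the Pentagon, and everything here is illustrative rather than Yeagley’s actual tooling.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6_371_000 * 2 * asin(sqrt(a))

# Illustrative fence: a 400 m circle around the Pentagon.
FENCE = {"lat": 38.8719, "lon": -77.0563, "radius_m": 400}

def devices_inside(pings, fence=FENCE):
    """Return the ad IDs whose pings fall inside the fence.
    pings: iterable of (ad_id, lat, lon, timestamp) tuples."""
    return {
        ad_id
        for ad_id, lat, lon, _ in pings
        if haversine_m(lat, lon, fence["lat"], fence["lon"]) <= fence["radius_m"]
    }
```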

Intelligence agencies have a long and unfortunate history of trying to root out LGBTQ Americans from their workforce, but this wasn’t Yeagley’s intent. He didn’t want anyone to get in trouble. No disciplinary actions were taken against any employee of the federal government based on Yeagley’s presentation. His aim was to show that buried in the seemingly innocuous technical data that comes off every cell phone in the world is a rich story — one that people might prefer to keep quiet. Or at the very least, not broadcast to the whole world. And that each of these intelligence and national security agencies had employees who were recklessly, if obliviously, broadcasting intimate details of their lives to anyone who knew where to look. As Yeagley showed, all that information was available for sale, for cheap. And it wasn’t just Grindr, but rather any app that had access to a user’s precise location — other dating apps, weather apps, games. Yeagley chose Grindr because it happened to generate a particularly rich set of data and its user base might be uniquely vulnerable.

The report goes into great detail about how data-analysis techniques, notably a program called Locomotive developed by PlanetRisk, enabled the tracking of mobile devices associated with Russian President Vladimir Putin’s entourage. By analyzing commercial adtech data, including precise geolocation information collected from mobile advertising bid requests, analysts were able to monitor the movements of phones that frequently accompanied Putin, indicating the locations and movements of his security personnel, aides, and support staff.

This capability underscored the surveillance potential of commercially available data, providing insights into the activities and security arrangements of high-profile individuals without directly compromising their personal devices.
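
Whatever Locomotive’s actual internals, the underlying “co-traveler” analysis can be sketched generically: bucket pings into coarse space-time cells, then count how often another device shares a cell with the target. The function below is a sketch under that assumption; the cell size, time window, and threshold are all made-up parameters.

```python
from collections import Counter

def cotravelers(pings, target_id, cell_deg=0.001, window_s=600, min_hits=5):
    """Find devices that repeatedly share a coarse grid cell (~100 m) and a
    10-minute window with the target device. Thresholds are illustrative.
    pings: iterable of (ad_id, lat, lon, unix_ts) tuples."""
    pings = list(pings)

    def cell(lat, lon, ts):
        return (round(lat / cell_deg), round(lon / cell_deg), int(ts // window_s))

    target_cells = {cell(lat, lon, ts) for i, lat, lon, ts in pings if i == target_id}
    hits = Counter(
        i for i, lat, lon, ts in pings
        if i != target_id and cell(lat, lon, ts) in target_cells
    )
    return [device for device, n in hits.items() if n >= min_hits]
```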

‘Deepfakes’ of Celebrities Have Begun Appearing in Ads, With or Without Their Permission

Celebrity deepfakes are coming to advertising. Among the recent entries: Last year, Russian telecommunications company MegaFon released a commercial in which a simulacrum of Hollywood legend Bruce Willis helps defuse a bomb. Just last week, Elon Musk seemed to star in a marketing video from real-estate investment startup reAlpha Tech. And last month a promotional video for machine-learning firm Paperspace showed talking semblances of the actors Tom Cruise and Leonardo DiCaprio. None of these celebrities ever spent a moment filming these campaigns. In the cases of Messrs. Musk, Cruise and DiCaprio, they never even agreed to endorse the companies in question. All the videos of digital simulations were created with so-called deepfake technology, which uses computer-generated renditions to make the Hollywood and business notables say and do things they never actually said or did.

Some of the ads are broad parodies, and even in the best of cases the meshing of the digital with the analog might not fool an alert viewer. Even so, the growing adoption of deepfake software could eventually shape the industry in profound ways while creating new legal and ethical questions, experts said. Authorized deepfakes could allow marketers to feature huge stars in ads without requiring them to actually appear on-set or before cameras, bringing down costs and opening new creative possibilities. But unauthorized, they create a legal gray area: Celebrities could struggle to contain a proliferation of unauthorized digital reproductions of themselves and the manipulation of their brand and reputation, experts said.

Who owns the rights to your face?

When we create our social media accounts, we agree to grant those platforms a free license to use our content as they wish.

It was my face. Specifically, it was me in a sponsored Instagram Story ad, putting on a lip balm. In the video, I applied the balm and smiled at the camera, looking pleased with my newly moisturized lips. In real life, I was confused. I had never agreed to appear in a nationwide social campaign; otherwise my checking account would have a couple more zeroes to show for it. I worked in the media industry then, sourcing the right influencers to participate in sponsored articles. I’ve spent years casting talent and negotiating contracts to ensure fair compensation and modest usage rights for influencers, models, and real people. Based on my experience, it was clear that my image was being exploited by a multibillion-dollar brand.

… companies like HelloFresh and Canon are now prioritizing the niche audiences of micro- and nano-creators. Research shows that shoppers find smaller creators “more authentic” and brands have identified those creators as “less costly,” making regular people a win-win for boosting sales.

10-year-old boy lives life as a YouTube advertising sensation

There’s no one way to describe what Ryan Kaji, who is now 10 years old, has done across his multiple YouTube channels, cable television shows and live appearances: In one video, he is giving you a tour of the Legoland Hotel; in another, he splashes around in his pool to introduce a science video about tsunamis. But for years, what he has mostly done is play with toys: Thomas the Tank Engine, “Paw Patrol” figures, McDonald’s play kitchens. A new toy and a new video for almost every day of the week, adding up to an avalanche of content that can overwhelm your child’s brain, click after click. Kaji has been playing with toys on camera since Barack Obama was in the White House.

Here are a few of the companies that are now paying him handsomely for his services: Amazon, Walmart, Nickelodeon, Skechers. Ryan also has 10 separate YouTube channels, which together make up “Ryan’s World” [31.2M subscribers], a content behemoth whose branded merchandise took in more than $250 million last year. Even conservative estimates suggest that the Kaji family take exceeds $25 million annually.

AI influencers are taking over

YouTube Stars Were Offered Money to Spread Vaccine Misinformation

“A mysterious marketing agency secretly offered to pay social media stars to spread disinformation about Covid-19 vaccines,” reports the BBC.

“Their plan failed when the influencers went public about the attempt to recruit them.”

An influencer marketing agency called Fazze offered to pay [Mirko Drotschmann, a German YouTuber and journalist] to promote what it said was leaked information that suggested the death rate among people who had the Pfizer vaccine was almost three times that of the AstraZeneca jab. The information provided wasn’t true. It quickly became apparent to Mirko that he was being asked to spread disinformation to undermine public confidence in vaccines in the middle of a pandemic. “I was shocked,” says Mirko, “then I was curious, what’s behind all that?” In France, science YouTuber Léo Grasset received a similar offer. The agency offered him 2,000 euros if he would take part.

Fazze said it was acting for a client who wished to remain anonymous…

Both Léo and Mirko were appalled by the false claims. They pretended to be interested in order to try to find out more and were provided with detailed instructions about what they should say in their videos. In stilted English, the brief instructed them to “Act like you have the passion and interest in this topic.” It told them not to mention the video had a sponsor — and instead pretend they were spontaneously giving advice out of concern for their viewers… Since Léo and Mirko blew the whistle at least four other influencers in France and Germany have gone public to reveal they also rejected Fazze’s attempts to recruit them.

But German journalist Daniel Laufer has identified two influencers who may have taken up the offer.

But who’s behind the mysterious influencer marketing agency?

Fazze is part of AdNow, a digital marketing company registered in both Russia and the UK. The BBC has made multiple attempts to contact AdNow by phone, email and even a letter couriered to their Moscow headquarters, but they have not responded. Eventually we managed to contact Ewan Tolladay, one of two directors of the British arm of AdNow, who lives in Durham. Mr. Tolladay said he had very little to do with Fazze — which he said was a joint venture between his fellow director — a Russian man called Stanislav Fesenko — and another person whose identity he didn’t know… Both the French and German authorities have launched investigations into Fazze’s approaches to influencers. But the identity of the agency’s mystery client remains unclear.

There has been speculation about the Russian connections to this scandal and the interests of the Russian state in promoting its own vaccine — Sputnik V.

French YouTuber Léo Grasset believes we’ll see more attempts to manipulate public opinion, especially young people — apparently because it’s incredibly easy.

“Just spend the same money on TikTok creators, YouTube creators,” he tells the BBC. “The whole ecosystem is perfectly built for maximum efficiency of disinformation right now.”

TikTok sued for billions over use of children’s data

Lawyers will allege that TikTok takes children’s personal information, including phone numbers, videos, exact location and biometric data, without sufficient warning, transparency or the necessary consent required by law, and without children or parents knowing what is being done with that information. TikTok has more than 800 million users worldwide and parent firm ByteDance made billions in profits last year, with the vast majority of that coming via advertising revenue.

TikTok: Rampant product placement

In the world of TikTok, brands have the opportunity to get products out into the real world – or make stories of them already being out there. The platform turns placement into consumption as consumers participate in – or play with – the products. Product placement on the platform could come from just giving products out to creators, or partnering with them, as is done on other platforms. However, it could also come from amplifying organic content or trends that are already happening with a brand’s products … Viewers are the stars. When it comes to distinguishing between viewers and audiences on TikTok, just as with content and ads, the lines are blurred. In fact, many TikTok users are also creators. For these creators, the feed is their stage, and this is where the opportunity for sponsorship and placement lies for brands.

Google Search and Dark Patterns

Previously, the search engine had marked paid results with the word “Ad” in a green box, tucked beneath the headline next to a matching green display URL. Now, all of a sudden, the “Ad” and the URL shifted above the headline, and both were rendered in discreet black; the box disappeared. The organic search results underwent a similar makeover, only with a new favicon next to the URL instead of the word “Ad.” The result was a general smoothing: Ads looked like not-ads. Not-ads looked like ads. This was not Google’s first time fiddling with the search results interface. In fact, it had done so quite regularly over the last 13 years, as handily laid out in a timeline from the news site Search Engine Land. Each iteration whittled away the distinction between paid and unpaid content that much more. Most changes went relatively unnoticed, internet residents accepting the creep like the apocryphal frog in a slowly boiling pot.

But in January, amid rising antitrust drumbeats and general exhaustion with Big Tech, people noticed. Interface designers, marketers, and Google users alike decried the change, saying it made paid results practically indistinguishable from those that Google’s search algorithm served up organically. The phrase that came up most often: “dark pattern,” a blanket term coined by UX specialist Harry Brignull to describe manipulative design elements that benefit companies over their users. That a small design tweak could inspire so much backlash speaks to the profound influence Google and other ubiquitous platforms have — and the responsibility that status confers on them. “Google and Facebook shape realities,” says Kat Zhou, a product designer who has created a framework and toolkit to help promote ethical design. “Students and professors turn to Google for their research. Folks turn to Facebook for political news. Communities turn to Google for Covid-19 updates. In some sense, Google and Facebook have become arbiters of the truth. That’s particularly scary when you factor in their business models, which often incentivize blurring the line between news and advertisements.”

Google’s not the only search engine to blur this line. If anything, Bing is even more opaque, sneaking the “Ad” disclosure under the header, with only a faint outline to draw attention. […] But Google has around 92 percent of global search market share. It effectively is online search. Dark patterns are all too common online in general, and January wasn’t the first time people accused Google of deploying them. In June of 2018, a blistering report from the Norwegian Consumer Council found that Google and Facebook both used specific interface choices to strip away user privacy at almost every turn. The study details how both platforms implemented the least privacy-friendly options by default, consistently “nudged” users toward giving away more of their data, and more. It paints a portrait of a system designed to befuddle users into complacency. […] That confusion reached its apex a few months later, when an Associated Press investigation found that disabling Location History on your smartphone did not, in fact, stop Google from collecting your location in all instances.

Ad Firms Are Exploring Deepfaked Commercials

“With the pandemic having shut down production, companies are asking ad agencies to create commercials made up of digitally altered footage,” reports the New York Times, citing a State Farm commercial starring “SportsCenter” anchor Kenny Mayne that aired during an ESPN documentary:

The producers made the commercial by layering video of Mr. Mayne’s 60-year-old mouth onto footage of his 38-year-old face. To many viewers, the stunt provided a welcome moment of levity in depressing times. Others were made uneasy by the smoothness of the patch, describing it as a type of deepfake. “We tried to make the joke clear enough so that we weren’t tricking anyone,” said Carrie Brzezinski-Hsu, the head of ESPN CreativeWorks, which created the commercial with the ad agencies Optimum Sports and Translation.

Ms. Brzezinski-Hsu said manipulated footage was likely to appear in future ESPN ads. And executives at several major advertising agencies said they had discussed making similar commercials with their clients in recent weeks. “We’re so restricted in how we can generate content,” said Kerry Hill, the production director for the ad agency FCB in North America. “Anything that can be computer generated is something we’re going to explore.”

Husani Oakley, the chief technology officer of the ad firm Deutsch, said digitally altered ads should somehow clue viewers into the fact that what they are seeing is not completely real. “The technology is here, and it’s only going to get better and better, and we have to get used to it,” he added. “We’re exploring ways to have fun with it.”

Why Don’t We Just Ban Targeted Advertising?

Google and Facebook, including their subsidiaries like Instagram and YouTube, make about 83 percent and 99 percent of their respective revenue from one thing: selling ads. It’s the same story with Twitter and other free sites and apps. More to the point, these companies are in the business of what’s called behavioral advertising, which allows companies to aim their marketing based on everything from users’ sexual orientations to their moods and menstrual cycles, as revealed by everything they do on their devices and every place they take them. It follows that most of the unsavory things the platforms do—boost inflammatory content, track our whereabouts, enable election manipulation, crush the news industry—stem from the goal of boosting ad revenues. Instead of trying to clean up all these messes one by one, the logic goes, why not just remove the underlying financial incentive? Targeting ads based on individual user data didn’t even really exist until the past decade. (Indeed, Google still makes many billions of dollars from ads tied to search terms, which aren’t user-specific.) What if companies simply weren’t allowed to do it anymore?

Let’s pretend it really happened. Imagine Congress passed a law tomorrow morning that banned companies from doing any ad microtargeting whatsoever. Close your eyes and picture what life would be like if the leading business model of the internet were banished from existence. How would things be different?

Many of the changes would be subtle. You could buy a pair of shoes on Amazon without Reebok ads following you for months. Perhaps you’d see some listings that you didn’t see before, for jobs or real estate. That’s especially likely if you’re African-American, or a woman, or a member of another disadvantaged group. You might come to understand that microtargeting had supercharged advertisers’ ability to discriminate, even when they weren’t trying to.

Leaked Documents Show Facebook Leveraged User Data To Help Friends and Fight Rivals

A cache of leaked Facebook documents shows how the company’s CEO, Mark Zuckerberg, oversaw plans to consolidate the social network’s power and control competitors by treating its users’ data as a bargaining chip. This trove comprises approximately 7,000 pages in total, of which about 4,000 are internal Facebook communications such as emails, web chats, notes, presentations and spreadsheets, primarily from 2011 to 2015. About 1,200 pages are marked as “highly confidential.” Taken together, they show how Zuckerberg, along with his board and management team, found ways to tap Facebook users’ data — including information about friends, relationships and photos — as leverage over the companies it partnered with. In some cases, Facebook would reward partners by giving them preferential access to certain types of user data while denying the same access to rival companies.

For example, Facebook gave Amazon special access to user data because it was spending money on Facebook advertising. In another case the messaging app MessageMe was cut off from access to data because it had grown too popular and could compete with Facebook. All the while, Facebook planned to publicly frame these moves as a way to protect user privacy, the documents show. State and federal authorities are now closely scrutinizing Facebook’s business practices. In October, New York Attorney General Letitia James announced that 47 attorneys general from states and U.S. territories plan to take part in a New York-led antitrust probe into Facebook. Over the summer, the House Judiciary Committee held hearings over antitrust concerns in Silicon Valley while the Federal Trade Commission also continues to examine the firm’s practices.

Mozilla Hits Google, Facebook For ‘Microtargeting’ Political Ads

Microtargeting, a method which uses consumer data and demographics to narrowly segment audiences, is used by political campaigns to specialize ads for different voting groups. The practice’s critics include Federal Election Commission Chairwoman Ellen Weintraub, who wrote in a Washington Post op-ed that microtargeting makes it “easy to single out susceptible groups and direct political misinformation to them with little accountability, because the public at large never sees the ad.” Mozilla’s call follows reports that Facebook has considered restricting politicians’ access to microtargeting.

I worked on political ads at Facebook. They profit by manipulating us.

Yael Eisenstat, a visiting fellow at Cornell Tech in the Digital Life Initiative and a former elections integrity head at Facebook, CIA officer, and White House adviser, writes for the Washington Post:

As the company continues to struggle with how to handle political content and as another presidential election approaches, it’s clear that tinkering around the margins of advertising policies won’t fix the most serious issues. The real problem is that Facebook profits partly by amplifying lies and selling dangerous targeting tools that allow political operatives to engage in a new level of information warfare. Its business model exploits our data to let advertisers custom-target people, show us each a different version of the truth and manipulate us with hyper-customized ads — ads that, as of two weeks ago, can contain blatantly false and debunked information if they’re run by a political campaign. As long as Facebook prioritizes profit over healthy discourse, it can’t avoid damaging democracies.

Early in my time there, I dug into the question of misinformation in political advertising. Posting in a “tribe” (Facebook’s internal collaboration platform), I asked our teams working on political advertising whether we should incorporate the same tools for political ads that other integrity teams at Facebook were developing to address misinformation in pages and organic posts. It was unclear to me why the company was applying different, siloed policies and tools across the platform. Most users do not differentiate organic content from ads — as I clearly saw on a trip to India, where we were testing our ads-integrity products — so why were we expecting users to understand that we applied different standards to different forms of content that all just appear in their news feeds?

The fact that we were taking money for political ads and allowing campaigns and other political organizations to target users based on the vast amounts of data we had gathered meant political ads should have an even higher bar for integrity than what people were posting in organic content. We verified advertisers to run political ads, giving them a check mark and a “paid for by” label, and I questioned if that gave the false impression that we were vouching for the validity of the content, boosting its perceived credibility even though we weren’t checking any facts or ensuring that ads weren’t spreading false information. Most of my colleagues agreed. People wanted to get this right. But above me, there was no appetite for my pushing, and I was accused of “creating confusion.”

Children ‘Interested in’ Gambling and Alcohol, According To Facebook

The social network’s advertising tools reveal 740,000 children under the age of 18 are flagged as being interested in gambling, including 130,000 in the UK. Some 940,000 minors — 150,000 of whom are British — are flagged as being interested in alcoholic beverages. These “interests” are automatically generated by Facebook, based on what it has learned about a user by monitoring their activity on the social network. Advertisers can then use them to specifically target messages to subgroups who have been flagged as interested in the topic. In a statement, Facebook said: “We don’t allow ads that promote the sale of alcohol or gambling to minors on Facebook and we enforce against this activity when we find it. We also work closely with regulators to provide guidance for marketers to help them reach their audiences effectively and responsibly.” The company does allow advertisers to specifically target messages to children based on their interest in alcohol or gambling. A Facebook insider gave the example of an anti-gambling service that may want to reach out to children who potentially have a gambling problem and offer them help and support.

Twitter Took Phone Numbers for Security and Used Them for Advertising

When some users provided Twitter with their phone number to make their account more secure, the company used this information for advertising purposes, it said today.

This isn’t the first time that a large social network has taken information explicitly meant for the purposes of security, and then quietly or accidentally used it for something else entirely. Facebook did something similar with phone numbers provided by users for two-factor authentication, the company confirmed last year. “We recently discovered that when you provided an email address or phone number for safety or security purposes (for example, two-factor authentication) this data may have inadvertently been used for advertising purposes, specifically in our Tailored Audiences and Partner Audiences advertising system,” Twitter’s announcement reads. In short, when an advertiser using Twitter uploaded their own marketing list of email addresses or phone numbers, Twitter may have matched the list to people on Twitter “based on the email or phone number the Twitter account holder provided for safety and security purposes,” the post adds.
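
Mechanically, this kind of audience matching is a join on normalized, usually hashed, contact identifiers. The sketch below uses made-up data; the real Tailored Audiences pipeline is internal to Twitter and not public.

```python
import hashlib

def norm_hash(value: str) -> str:
    """Normalize and SHA-256 a contact identifier, as ad platforms commonly do."""
    return hashlib.sha256(value.strip().lower().encode()).hexdigest()

# Advertiser side: a marketing list, uploaded as hashes.
uploaded = {norm_hash(v) for v in ["+15551234567", "alice@example.com"]}

# Platform side: contact info users supplied, e.g. for two-factor authentication.
accounts = {
    "@alice": "alice@example.com",
    "@bob": "+15550001111",
}

# The join: any account whose security contact hashes into the uploaded list
# becomes targetable by that advertiser.
audience = [handle for handle, contact in accounts.items()
            if norm_hash(contact) in uploaded]
print(audience)  # ['@alice']
```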

Can Google Ads Change Your Beliefs?

There are advertisers in the digital marketing industry who want to find out how effective this new form of social engineering is. One of those advertisers is Google. Redirect Method was a Google-incubated project that used redirect ads to deradicalize would-be extremists. In the first eight weeks of 2016, some 320,000 people — all of whom were believed to harbor sympathy toward the Islamic State — clicked on ads designed to reflect an interest in extremist content. Instead of arriving at a page that supported their views, Islamic State sympathizers who clicked the ads found themselves directed to a playlist of videos debunking the terror group’s recruitment narratives. Most of the visitors stuck around. Together, they watched more than half a million minutes of video.

After the ISIS campaign ended, Google left behind a blueprint.

The blueprint shows, step by step, how you can create your own redirect ads to sway any belief or opinion — held by any Google user, anywhere in the world — of your choice. You don’t have to be a marketer with years of experience to do this. You just need to follow the instructions and put up a credit card (a few hundred bucks will suffice). Recently, I followed the blueprint and created a redirect campaign of my own… The goal of my first redirect campaign was to sway the ideology of suicidal people.

Nearly one in three apparently suicidal searchers who viewed his ad then dialed his hotline number — which then forwarded the calls to the National Suicide Prevention Lifeline. But he expressed surprise that Google “let me run the ads with no issue… I didn’t need qualifications to be a conduit of peoples’ fates.” He later tried creating another campaign for prospective school shooters, “but the conversion rates were low. These prospective shooters were reluctant to speak with someone.”

Yet one study found that more than 50% of Google users couldn’t distinguish between ads and “organic” search results.

Google Chrome ‘Has Become Surveillance Software. It’s Time to Switch’

“You open your browser to look at the Web. Do you know who is looking back at you?” warns Washington Post technology columnist Geoffrey A. Fowler.

Over a recent week of Web surfing, I peered under the hood of Google Chrome and found it brought along a few thousand friends. Shopping, news and even government sites quietly tagged my browser to let ad and data companies ride shotgun while I clicked around the Web. This was made possible by the Web’s biggest snoop of all: Google. Seen from the inside, its Chrome browser looks a lot like surveillance software…

My tests of Chrome vs. Firefox unearthed a personal data caper of absurd proportions. In a week of Web surfing on my desktop, I discovered 11,189 requests for tracker “cookies” that Chrome would have ushered right onto my computer but were automatically blocked by Firefox. These little files are the hooks that data firms, including Google itself, use to follow what websites you visit so they can build profiles of your interests, income and personality… And that’s not the half of it. Look in the upper right corner of your Chrome browser. See a picture or a name in the circle? If so, you’re logged in to the browser, and Google might be tapping into your Web activity to target ads. Don’t recall signing in? I didn’t, either. Chrome recently started doing that automatically when you use Gmail.

Chrome is even sneakier on your phone. If you use Android, Chrome sends Google your location every time you conduct a search. (If you turn off location sharing it still sends your coordinates out, just with less accuracy.)
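
The profile-building those cookies enable is easy to sketch: a tracker that sees the same cookie ID on many sites only has to count where it turns up. Hypothetical IDs and domains throughout.

```python
from collections import Counter

# Tracker-side view: one third-party cookie ID observed across many sites.
requests = [
    ("cookie-7f3a", "mortgage-rates.example"),
    ("cookie-7f3a", "running-shoes.example"),
    ("cookie-7f3a", "mortgage-rates.example"),
    ("cookie-7f3a", "oncology-info.example"),
]

profiles = {}
for cookie_id, site in requests:
    profiles.setdefault(cookie_id, Counter())[site] += 1

# One browser, one behavioral profile, no login required.
print(profiles["cookie-7f3a"].most_common(2))
```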

The columnist concludes that “having the world’s biggest advertising company make the most popular Web browser was about as smart as letting kids run a candy shop,” and argues that through its DoubleClick and other ad businesses, Google “is the No. 1 cookie maker — the Mrs. Fields of the web.”

Millions of Business Listings On Google Maps Are Fake — and Google Profits

Google’s ubiquitous internet platform shapes what’s real and what isn’t for more than two billion monthly users. Yet Google Maps is overrun with millions of false business addresses and fake names, according to advertisers, search experts and current and former Google employees. The ruse lures the unsuspecting to what appear to be Google-suggested local businesses, a costly and dangerous deception. Once considered a sleepy, low-margin business by the company and known mostly for giving travel directions, Google Maps in recent months has packed more ads onto its search queries. It is central to Google parent Alphabet’s hope to recharge a cresting digital-advertising operation.

Often, Google Maps yields mirages, visible in local business searches of U.S. cities, including Mountain View, Calif., Google’s hometown. Of a dozen addresses for personal-injury attorneys on Google Maps during a recent search, only one office was real. A Viennese patisserie was among the businesses at addresses purported to house lawyers. The fakes vanished after inquiries to Google from The Wall Street Journal. The false listings benefit businesses seeking more customer calls by sprinkling made-up branches in various corners of a city. In other cases, as Ms. Carter discovered, calls to listed phone numbers connect to unscrupulous competitors, a misdirection forbidden by Google rules but sporadically policed by the company. Hundreds of thousands of false listings sprout on Google Maps each month, according to experts. Google says it catches many others before they appear.
