Resources

Facebook Ad Platform Could Be Inherently Discriminatory, Researchers Say

Researchers from Northeastern University, the University of Southern California, and tech accountability non-profit Upturn have released a paper that says Facebook’s ad delivery system itself can steer ads intended to be inclusive toward discrimination without explicit intent. “In a paper titled, ‘Discrimination through optimization: How Facebook’s ad delivery can lead to skewed outcomes,’ co-authors Muhammad Ali, Piotr Sapiezynski, Miranda Bogen, Aleksandra Korolova, Alan Mislove, and Aaron Rieke find that advertiser budgets and ad content affect ad delivery, skewing it along gender and racial lines even when neutral ad targeting settings are used,” reports The Register. From the report:
The researchers found that Facebook ads tend to be shown to men because women tend to click on ads more often, making them more expensive to reach through Facebook’s system. That divide becomes apparent when ad budgets are compared, because the ad budget affects ad distribution. As the paper explains, “the higher the daily budget, the smaller the fraction of men in the audience.” Such segregation may be appropriate and desirable for certain types of marketing pitches, but when applied to credit, employment and housing ads, the consequences can be problematic.

Ad content — text and images — also has a strong effect on whether ads get shown to men or women, even when the bidding strategy is the same and gender-agnostic targeting is used. In particular, the researchers found images had a surprisingly large effect on ad delivery. Ad URL destination has some effect — an ad pointing to a bodybuilding site and an ad pointing to a cosmetics site had a baseline delivery distribution of 48 percent men and 40 percent men respectively. The addition of a title and headline doesn’t change that much. But once the researchers added an image to the ad, the distribution pattern changed, with the bodybuilding site ad reaching an audience that was 75 percent male and the cosmetics ad reaching an audience that was 90 percent female. According to the researchers, their tests suggest, “Facebook has an automated image classification mechanism in place that is used to steer different ads towards different subsets of the user population.”

“In terms of credit, employment and housing ads, the problem with this system is that it discriminates where it shouldn’t: Five ads for lumber industry jobs were delivered to an audience that was more than 90 percent men and more than 70 percent white; five ads for janitorial work were delivered to an audience that was more than 65 percent women and 75 percent black,” the report adds. “Housing ads also showed a racial skew.”

The latest findings come after years of criticism of Facebook’s ad system. Last month, Facebook announced changes to the platform intended to prevent advertisers from deploying unfair credit, employment and housing ads. One week later, the U.S. Department of Housing and Urban Development sued Facebook for violating the Fair Housing Act.

Two-Thirds of Consumers Don’t Expect Google To Track Them the Way It Does

Last summer, an AP investigation found that Google’s location tracking remains on even if you turn it off in Google Maps, Search, and other apps. Research from Vanderbilt professor Douglas Schmidt found that Google engages in “passive” data collection, often without the user’s knowledge. His research also showed that Google utilizes data collected from other sources to de-anonymize existing user data.

Digital Content Next, the trade association of online publishers, surveyed a nationally representative sample to find out what people expect from Google — and, as with a similar study we conducted last year about Facebook, the results were unsettling.

Our findings show that many of Google’s data practices deviate from consumer expectations. We find it even more significant that consumers’ expectations are at an all-time low even after 2018, a year in which awareness around consumer privacy reached a peak.

The results of the study are consistent with our Facebook study: People don’t want surveillance advertising. A majority of consumers indicated they don’t expect to be tracked across Google’s services, let alone be tracked across the web in order to make ads more targeted.

There was only one question where a small majority of respondents felt that Google was acting according to their expectations: merging data from search queries with other data Google collects on its own services. By a similarly small majority, respondents said they don’t expect Google to connect that data back to their personal accounts. Google began doing both of these in 2016, after previously promising it wouldn’t.

Unearthed Emails Show Google, Ad Giants Know They Break Privacy Laws

Privacy warriors have filed fresh evidence in their ongoing battle against real-time web ad exchange systems, which campaigners claim trample over Europe’s data protection laws. The new filings — submitted today to regulators in the UK, Ireland, and Poland — allege that Google and industry body the Interactive Advertising Bureau (IAB) are well aware that their advertising networks flout the EU’s privacy-safeguarding GDPR, and yet are doing nothing about it. The IAB, Google — which is an IAB member — and others in the ad-slinging world insist they aren’t doing anything wrong. The fresh submissions come soon after the UK Information Commissioner’s Office (ICO) revealed plans to probe programmatic ads. These are adverts that are selected and served on-the-fly as you visit a webpage, using whatever personal information has been scraped together about you to pick an ad most relevant to your interests. […] The ICO’s investigation will focus on how well informed people are about how their personal information is used for this kind of online advertising, which laws ad-technology firms rely on for processing said private data, and whether users’ data is secure as it is shared on these platforms.
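The programmatic flow the ICO is probing can be pictured as a tiny auction: the exchange builds a bid request from whatever it knows about the visitor, fans it out to bidders, and serves the highest bidder’s ad, all within milliseconds. A minimal sketch, simplified from OpenRTB-style systems (the request fields and auction rules here are illustrative, not any exchange’s actual implementation):

```javascript
// Toy real-time bidding auction (illustrative only; real exchanges
// follow the OpenRTB spec, with many more fields, timeouts, and
// first- or second-price settlement).
function runAuction(bidRequest, bidders) {
  const bids = bidders
    .map((bidder) => bidder(bidRequest)) // every bidder sees the user data
    .filter((bid) => bid && bid.price > 0)
    .sort((a, b) => b.price - a.price);
  return bids[0] || null; // winning ad, or no fill
}
```

Note that the privacy complaints center on the bid request itself: every bidder receives the personal data in it, whether or not that bidder wins the auction.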

Facebook decided which users are interested in Nazis—and let advertisers target them directly

Facebook makes money by charging advertisers to reach just the right audience for their message, even when that audience is made up of people interested in the perpetrators of the Holocaust or explicitly neo-Nazi music. Despite promises of greater oversight following past advertising scandals, a Times review shows that Facebook has continued to allow advertisers to target hundreds of thousands of users the social media firm believes are curious about topics such as “Joseph Goebbels,” “Josef Mengele,” “Heinrich Himmler,” the neo-Nazi punk band Skrewdriver and Benito Mussolini’s long-defunct National Fascist Party.

Experts say that this practice runs counter to the company’s stated principles and can help fuel radicalization online. “What you’re describing, where a clear hateful idea or narrative can be amplified to reach more people, is exactly what they said they don’t want to do and what they need to be held accountable for,” said Oren Segal, director of the Anti-Defamation League’s center on extremism. After being contacted by The Times, Facebook said that it would remove many of the audience groupings from its ad platform.

Facebook’s broad reach and sophisticated advertising tools brought in a record $55 billion in ad revenue in 2018. Profit margins stayed above 40%, thanks to a high degree of automation, with algorithms sorting users into marketable subsets based on their behavior — then choosing which ads to show them. But the lack of human oversight has also brought the company controversy.

In 2017, ProPublica found that the company sold ads based on any user-generated phrase, including “Jew hater” and “Hitler did nothing wrong.” Following the murder of 11 congregants at a synagogue in Pittsburgh in 2018, The Intercept found that Facebook gave advertisers the ability to target users interested in the anti-Semitic “white genocide” conspiracy theory, which the suspected killer cited as inspiration before the attack.

Most Facebook users don’t know that it records a list of their interests, new study finds

Seventy-four percent of Facebook users are unaware that Facebook records a list of their interests for ad-targeting purposes, according to a new study from the Pew Research Center.

Participants in the study were first pointed to Facebook’s ad preferences page, which lists out a person’s interests. Nearly 60 percent of participants said Facebook’s list of interests was very or somewhat accurate to their actual interests, and 51 percent said they were uncomfortable with Facebook creating the list.

Facebook has weathered serious questions about its collection of personal information in recent years. CEO Mark Zuckerberg testified before Congress last year acknowledging privacy concerns and touching upon the company’s collection of personal information. While Zuckerberg said Facebook users have complete control over the information they upload and the information Facebook uses to actively target ads at its users, it’s clear from the Pew study that most people are not aware of Facebook’s collection tactics.

The Pew study also demonstrates that, while Facebook offers a number of transparency and data control tools, most users are not aware of where they should be looking. Even when the relevant information is located, there are often multiple steps to go through to delete assigned interests.

Several Popular Apps Share Data With Facebook Without User Consent

Some of the most popular apps for Android smartphones, including Skyscanner, TripAdvisor and MyFitnessPal, are transmitting data to Facebook without the consent of users in a potential breach of EU regulations.

In a study of 34 popular Android apps, the campaign group Privacy International found that at least 20 of them send certain data to Facebook the second that they are opened on a phone, before users can be asked for permission. Information sent instantly included the app’s name, the user’s unique ID with Google, and the number of times the app was opened and closed since being downloaded. Some, such as travel site Kayak, later sent detailed information about people’s flight searches to Facebook, including travel dates, whether the user had children and which flights and destinations they had searched for. European law on data-sharing changed in May with the introduction of the General Data Protection Regulation, and mobile apps are required to have the explicit consent of users before collecting their personal information.

As Facebook Raised a Privacy Wall, It Carved an Opening for Tech Giants

Internal documents show that the social network gave Microsoft, Amazon, Spotify and others far greater access to people’s data than it has disclosed.

For years, Facebook gave some of the world’s largest technology companies more intrusive access to users’ personal data than it has disclosed, effectively exempting those business partners from its usual privacy rules, according to internal records and interviews.

The special arrangements are detailed in hundreds of pages of Facebook documents obtained by The New York Times. The records, generated in 2017 by the company’s internal system for tracking partnerships, provide the most complete picture yet of the social network’s data-sharing practices. They also underscore how personal data has become the most prized commodity of the digital age, traded on a vast scale by some of the most powerful companies in Silicon Valley and beyond.

Facebook allowed Microsoft’s Bing search engine to see the names of virtually all Facebook users’ friends without consent, the records show, and gave Netflix and Spotify the ability to read Facebook users’ private messages.

The social network permitted Amazon to obtain users’ names and contact information through their friends, and it let Yahoo view streams of friends’ posts as recently as this summer, despite public statements that it had stopped that type of sharing years earlier.

Facebook has been reeling from a series of privacy scandals, set off by revelations in March that a political consulting firm, Cambridge Analytica, improperly used Facebook data to build tools that aided President Trump’s 2016 campaign. Acknowledging that it had breached users’ trust, Facebook insisted that it had instituted stricter privacy protections long ago. Mark Zuckerberg, the chief executive, assured lawmakers in April that people “have complete control” over everything they share on Facebook.

[Facebook’s strategy in times of crisis: delay, deny and deflect.]

Facebook began forming data partnerships when it was still a relatively young company. Mr. Zuckerberg was determined to weave Facebook’s services into other sites and platforms, believing it would stave off obsolescence and insulate Facebook from competition. Every corporate partner that integrated Facebook data into its online products helped drive the platform’s expansion, bringing in new users, spurring them to spend more time on Facebook and driving up advertising revenue. At the same time, Facebook got critical data back from its partners.

The partnerships were so important that decisions about forming them were vetted at high levels, sometimes by Mr. Zuckerberg and Sheryl Sandberg, the chief operating officer, Facebook officials said. While many of the partnerships were announced publicly, the details of the sharing arrangements typically were confidential.

Facebook also allowed Spotify, Netflix and the Royal Bank of Canada to read, write and delete users’ private messages, and to see all participants on a thread — privileges that appeared to go beyond what the companies needed to integrate Facebook into their systems, the records show. Facebook acknowledged that it did not consider any of those three companies to be service providers. Spokespeople for Spotify and Netflix said those companies were unaware of the broad powers Facebook had granted them. A spokesman for Netflix said Wednesday that it had used the access only to enable customers to recommend TV shows and movies to their friends.

A Royal Bank of Canada spokesman disputed that the bank had had any such access. (Aspects of some sharing partnerships, including those with the Royal Bank of Canada and Bing, were first reported by The Wall Street Journal.)

Spotify, which could view messages of more than 70 million users a month, still offers the option to share music through Facebook Messenger. But Netflix and the Canadian bank no longer needed access to messages because they had deactivated features that incorporated it.

These were not the only companies that had special access longer than they needed it. Yahoo, The Times and others could still get Facebook users’ personal information in 2017.

Yahoo could view real-time feeds of friends’ posts for a feature that the company had ended in 2012. A Yahoo spokesman declined to discuss the partnership in detail but said the company did not use the information for advertising. The Times — one of nine media companies named in the documents — had access to users’ friend lists for an article-sharing application it had discontinued in 2011. A spokeswoman for the news organization said it was not obtaining any data.

Facebook’s internal records also revealed more about the extent of sharing deals with over 60 makers of smartphones, tablets and other devices, agreements first reported by The Times in June.

Facebook empowered Apple to hide from Facebook users all indicators that its devices were asking for data. Apple devices also had access to the contact numbers and calendar entries of people who had changed their account settings to disable all sharing, the records show.

Apple officials said they were not aware that Facebook had granted its devices any special access. They added that any shared data remained on the devices and was not available to anyone other than the users.

YouTube’s Top-Earner For 2018 Is a 7-Year-Old

In 2018 the most-downloaded iPhone app was YouTube, reports USA Today, while Amazon’s best-selling item was their Fire TV Stick for streaming video. The No. 1 earner on YouTube this year is 7-year-old Ryan. For all those unboxing videos and playing with toys — and his own new line of toys at Walmart — he and his family will pull in a cool $22 million, according to Forbes. Ryan launched the channel in 2015 — when he was four — and now has 17.3 million followers.

NBCUniversal Taps Machine Learning to Tie Ads to Relevant Moments on TV

NBCUniversal announced a new machine learning tool today that helps brands place ads around scenes relevant to their product across any of the media giant’s broadcast and cable properties. The Contextual Intelligence Platform analyzes programming scripts, closed captioning data and visual descriptors of both ads and shows to find opportune moments for a given advertiser to appear as well as an emotional gauge for each scene determined by proprietary algorithms.

Focus groups for ads placed with the platform thus far have shown an average bump of 19 percent in brand memorability, 13 percent in likability and 64 percent in message memorability, according to Josh Feldman, VP and head of marketing and advertising creative at NBCU. The announcement comes as linear television providers continue to grapple with how to bring digital targeting practices to a medium that still largely operates on traditional phone-call media buying and manual ad placements. NBCU is now working with three to five advertisers on the system’s beta test, and is aiming for an official release early next year.
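The matching step can be pictured as scoring each scene’s caption text against an advertiser’s keyword list and picking the best-scoring scene. The sketch below uses plain keyword overlap; NBCU’s actual platform and its emotional gauge rely on proprietary algorithms, so treat this as a rough illustration of the idea, not its implementation:

```javascript
// Score a scene's caption against an ad's keyword list by counting
// shared words (a stand-in for real contextual/semantic analysis).
function scoreScene(captionText, keywords) {
  const words = new Set(captionText.toLowerCase().split(/\W+/));
  return keywords.filter((k) => words.has(k)).length;
}

// Pick the scene whose caption best matches the advertiser's keywords.
// scenes: [{ id, caption }]
function bestScene(scenes, keywords) {
  return scenes.reduce((best, scene) =>
    scoreScene(scene.caption, keywords) > scoreScene(best.caption, keywords)
      ? scene
      : best
  );
}
```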

“Influencers” Are Being Paid Big Sums To Pitch Products and Thrash Rivals on Instagram and YouTube

“Influencers” are being paid big sums to pitch products on Instagram and YouTube. If you’re trying to grow a product on social media, you either fork over cash or pay in another way. This is the murky world of influencing, reports Wired. Brands will pay influencers to position products on their desks, behind them, or anywhere else they can subtly appear on screen. Payouts increase if an influencer tags a brand in a post or includes a link, but silent endorsements are often preferred.

Marketers of literature, wellness, fashion, entertainment, and other wares are all hooked on influencers. As brands have warmed to social-media advertising, influencer marketing has grown into a multibillion-dollar industry. Unlike traditional television or print ads, influencers have dedicated niche followings who take their word as gospel.

There’s another plus: Many users don’t view influencers as paid endorsers or salespeople—even though a significant percentage are—but as trusted experts, friends, and “real” people. This perceived authenticity is part of why brands shell out so much cash in exchange for a brief appearance in your Instagram feed.

Google Is Teaching Children How to Act Online. Is It the Best Role Model?

Google is on a mission to teach children how to be safe online. That is the message behind “Be Internet Awesome,” a so-called digital-citizenship education program that the technology giant developed for schools.

The lessons include a cartoon game branded with Google’s logo and blue, red, yellow and green color palette. The game is meant to help students from third grade through sixth guard against schemers, hackers and other bad actors.

But critics say the company’s recent woes — including revelations that it was developing a censored version of its search engine for the Chinese market and had tracked the whereabouts of users who had explicitly turned off their location history — should disqualify Google from promoting itself in schools as a model of proper digital conduct.

Among other things, these critics argue, the company’s lessons give children the mistaken impression that the main threat they face online is from malicious hackers and bullies, glossing over the privacy concerns that arise when tech giants like Google itself collect users’ personal information and track their actions online.

American corporate giants are no strangers to the country’s schools.

In the 1970s, General Motors circulated a free booklet in public schools that featured cartoon characters like Harry Hydrocarbon, who played down concerns about the health risks of industrial pollution and suggested that air pollution would soon not be a problem, according to a 1979 report, “Hucksters in the Classroom: A Review of Industry Propaganda in Schools.”

In the 1990s, Procter & Gamble promoted its own curriculum, “Decision: Earth,” in schools. Among other things, it instructed children that synthetic diapers were no more harmful for the environment than cloth diapers.

Around the same time, Campbell Soup sponsored a classroom kit called the “Prego Thickness Experiment.” According to a 1997 article in The New York Times, “Corporate Classrooms and Commercialism,” the kit was supposed to teach children the scientific method — by having them “prove” that Prego pasta sauce was thicker than rival Ragu.

Critics see a similar self-serving agenda with “Be Internet Awesome,” which presents malicious third parties as the primary online threat to children, while failing to teach them how to navigate corporate data-mining practices.

Now Apps Can Track You Even After You Uninstall Them

If it seems as though the app you deleted last week is suddenly popping up everywhere, it may not be mere coincidence. Companies that cater to app makers have found ways to game both iOS and Android, enabling them to figure out which users have uninstalled a given piece of software lately—and making it easy to pelt the departed with ads aimed at winning them back.

Adjust, AppsFlyer, MoEngage, Localytics, and CleverTap are among the companies that offer uninstall trackers, usually as part of a broader set of developer tools. Their customers include T-Mobile US, Spotify Technology, and Yelp. (And Bloomberg Businessweek parent Bloomberg LP, which uses Localytics.) Critics say they’re a fresh reason to reassess online privacy rights and limit what companies can do with user data.

Uninstall tracking exploits a core element of Apple Inc.’s and Google’s mobile operating systems: push notifications. Developers have always been able to use so-called silent push notifications to ping installed apps at regular intervals without alerting the user—to refresh an inbox or social media feed while the app is running in the background, for example. But if the app doesn’t ping the developer back, the app is logged as uninstalled, and the uninstall tracking tools add those changes to the file associated with the given mobile device’s unique advertising ID, details that make it easy to identify just who’s holding the phone and advertise the app to them wherever they go.
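The flow described above reduces to a simple check, sketched below under stated assumptions: this is hypothetical logic, not the code of Adjust, AppsFlyer, or any named SDK, and the field names are invented for illustration.

```javascript
// Minimal sketch of uninstall inference via silent pushes. Each
// device record pairs a push token with the device's advertising ID.
// pingAcked(pushToken) -> true if the installed app answered the
// silent push; no answer is treated as an uninstall signal, and the
// advertising ID is collected for re-engagement ad targeting.
function detectUninstalls(devices, pingAcked) {
  return devices
    .filter((device) => !pingAcked(device.pushToken))
    .map((device) => device.advertisingId);
}
```

A real tracker would presumably debounce this over several pings, since an unanswered push can also just mean the device is offline.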

At its best, uninstall tracking can be used to fix bugs or otherwise refine apps without having to bother users with surveys or more intrusive tools. But the ability to abuse the system beyond its original intent exemplifies the bind that accompanies the modern internet, says Jeremy Gillula of the Electronic Frontier Foundation. To participate, users must typically agree to share their data freely, probably forever, not knowing exactly how it may be used down the road. “As an app developer, I would expect to be able to know how many people have uninstalled an app,” he says. “I would not say that, as an app developer, you have a right to know exactly who installed and uninstalled your app.”

Facebook Could Use Data Collected From Its Portal In-Home Video Device To Target You With Ads

Facebook announced Portal last week, its take on the in-home, voice-activated speaker to rival competitors from Amazon, Google and Apple. Last Monday, we wrote: “No data collected through Portal — even call log data or app usage data, like the fact that you listened to Spotify — will be used to target users with ads on Facebook.” We wrote that because that’s what we were told by Facebook executives. But Facebook has since reached out to change its answer: Portal doesn’t have ads, but data about who you call and data about which apps you use on Portal can be used to target you with ads on other Facebook-owned properties.

“Portal voice calling is built on the Messenger infrastructure, so when you make a video call on Portal, we collect the same types of information (i.e. usage data such as length of calls, frequency of calls) that we collect on other Messenger-enabled devices. We may use this information to inform the ads we show you across our platforms. Other general usage data, such as aggregate usage of apps, etc., may also feed into the information that we use to serve ads,” a spokesperson said in an email to Recode. That isn’t very surprising, considering Facebook’s business model. The biggest benefit of Facebook owning a device in your home is that it provides the company with another data stream for its ad-targeting business.

Mobile Websites Can Tap Into Your Phone’s Sensors Without Asking

When apps want to access data from your smartphone’s motion or light sensors, they often make that capability clear. That keeps a fitness app, say, from counting your steps without your knowledge. But a team of researchers has discovered that the rules don’t apply to websites loaded in mobile browsers, which can often access an array of device sensors without any notifications or permissions whatsoever.

That mobile browsers offer developers access to sensors isn’t necessarily problematic on its own. It’s what helps those services automatically adjust their layout, for example, when you switch your phone’s orientation. And the World Wide Web Consortium standards body has codified how web applications can access sensor data. But the researchers—Anupam Das of North Carolina State University, Gunes Acar of Princeton University, Nikita Borisov of the University of Illinois at Urbana-Champaign, and Amogh Pradeep of Northeastern University—found that the standards allow for unfettered access to certain sensors. And sites are using it.

The researchers found that of the top 100,000 sites—as ranked by Amazon-owned analytics company Alexa—3,695 incorporate scripts that tap into one or more of these accessible mobile sensors. That includes plenty of big names, including Wayfair, Priceline.com, and Kayak.

“If you use Google Maps in a mobile browser you’ll get a little popup that says, ‘This website wants to see your location,’ and you can authorize that,” says Borisov. “But with motion, lighting, and proximity sensors there isn’t any mechanism to notify the user and ask for permission, so they’re being accessed and that is invisible to the user. For this collection of sensors there isn’t a permissions infrastructure.”
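The asymmetry Borisov describes is visible in a few lines of page JavaScript: at the time of the research, reading the standard `devicemotion` event required no prompt at all, while `navigator.geolocation` always triggers one. The feature extraction below is a hypothetical example of what an inference script might compute from a reading (browser behavior has since tightened in some engines):

```javascript
// Hypothetical reduction of one motion reading into something an
// inference script might log: overall movement magnitude.
function motionFeature(reading) {
  const { x, y, z } = reading.accelerationIncludingGravity;
  return Math.sqrt(x * x + y * y + z * z);
}

// Motion data arrives through a plain event listener, with no
// permission dialog guarding it (in the browsers studied); contrast
// navigator.geolocation, which prompts the user.
if (typeof window !== "undefined" && "DeviceMotionEvent" in window) {
  window.addEventListener("devicemotion", (event) => {
    console.log("motion magnitude:", motionFeature(event));
  });
}
```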

That unapproved access to motion, orientation, proximity, or light sensor data alone probably wouldn’t compromise a user’s identity or device. And a web page can only access sensors as long as a user is actively browsing the page, not in the background. But the researchers note that on a malicious website, the information could fuel various types of attacks, like using ambient light data to make inferences about a user’s browsing, or using motion sensor data as a sort of keylogger to deduce things like PINs.

In past work, researchers have also shown that they can use the unique calibration features of motion sensors on individual devices to identify and track them across websites. And while the World Wide Web Consortium standards classify data from these sensors as “not sensitive enough to warrant specific sensor permission grants,” the group does acknowledge that there are some potential privacy concerns. “Implementations may consider permissions or visual indicators to signify the use of sensors by the page,” the standard suggests.

The prevalence of ad networks also makes it difficult to get a handle on the issue. The researchers even found three scripts attempting to access user sensors in ad modules on WIRED.com, though at least one had been removed when the researchers rechecked the site for this story. Other media sites, including CNN, the Los Angeles Times, and CNET have ad networks using similar scripts as well.

Facebook Is Giving Advertisers Access To Your Shadow Contact Information

Kashmir Hill, reporting for Gizmodo:

Last week, I ran an ad on Facebook targeted at a computer science professor named Alan Mislove. Mislove studies how privacy works on social networks and had a theory that Facebook is letting advertisers reach users with contact information collected in surprising ways. I was helping him test the theory by targeting him in a way Facebook had previously told me wouldn’t work. I directed the ad to display to a Facebook account connected to the landline number for Alan Mislove’s office, a number Mislove has never provided to Facebook. He saw the ad within hours.

One of the many ways that ads get in front of your eyeballs on Facebook and Instagram is that the social networking giant lets an advertiser upload a list of phone numbers or email addresses it has on file; it will then put an ad in front of accounts associated with that contact information. A clothing retailer can put an ad for a dress in the Instagram feeds of women who have purchased from them before, a politician can place Facebook ads in front of anyone on his mailing list, or a casino can offer deals to the email addresses of people suspected of having a gambling addiction. Facebook calls this a “custom audience.” You might assume that you could go to your Facebook profile and look at your “contact and basic info” page to see what email addresses and phone numbers are associated with your account, and thus what advertisers can use to target you. But as is so often the case with this highly efficient data-miner posing as a way to keep in contact with your friends, it’s going about it in a less transparent and more invasive way.

… Giridhari Venkatadri, Piotr Sapiezynski, and Alan Mislove of Northeastern University, along with Elena Lucherini of Princeton University, did a series of tests that involved handing contact information over to Facebook for a group of test accounts in different ways and then seeing whether that information could be used by an advertiser. They came up with a novel way to detect whether that information became available to advertisers by looking at the stats provided by Facebook about the size of an audience after contact information is uploaded. They go into this at greater length and in more technical detail in their paper [PDF]. They found that when a user gives Facebook a phone number for two-factor authentication or in order to receive alerts about new log-ins to a user’s account, that phone number became targetable by an advertiser within a couple of weeks.

Officially, Facebook denies the existence of shadow profiles. In a hearing with the House Energy & Commerce Committee earlier this year, when New Mexico Representative Ben Lujan asked Facebook CEO Mark Zuckerberg if he was aware of the so-called practice of building “shadow profiles”, Zuckerberg denied knowledge of it.

Phone and internet use: Calls drop, screen time increases

Two in five adults look at their phone within five minutes of waking, while a third check their phones just before falling asleep, according to Ofcom.

A high percentage (71%) say they never turn off their phones and 78% say they could not live without them.

The average daily time spent on a smartphone is two hours 28 minutes, rising to three hours 14 minutes for 18 to 24-year-olds, the report indicates.

Most people expect a constant internet connection, with the majority of adults saying the internet is an essential part of their lives, and one in five spending more than 40 hours a week online.

Our obsession with our phones is good news for advertisers. Nearly a quarter of all advertising spend is now on mobiles, and if mobile advertising were stripped away, ad revenue would be in decline for the first time.

Other findings from the report include:

  • 42% of households now own a smart TV – with Ofcom predicting this will rise substantially in coming years
  • the average household spends £124 on communication services each month
  • 40% of households subscribe to Netflix
  • one in eight homes now has a smart speaker

Digital ads are starting to feel psychic

It seems like everyone these days has had a paranoid moment when a website advertises something you recently purchased or received as a gift, with no digital trail to explain how it knew. According to a new website called New Organs, which collects first-hand accounts of these moments, “the feeling of being listened to is among the most common experiences, along with seeing the same ads on different websites, and being tracked via geo-location,” reports The Outline. The website was created by Tega Brain and Sam Lavigne, two Brooklyn-based artists whose work explores the intersections of technology and society…

How Smart TVs in Millions of US Homes Track More Than What’s on Tonight

The growing concern over online data and user privacy has been focused on tech giants like Facebook and devices like smartphones. But people’s data is also increasingly being vacuumed right out of their living rooms via their televisions, sometimes without their knowledge. From a report:

In recent years, data companies have harnessed new technology to immediately identify what people are watching on internet-connected TVs, then use that information to send targeted advertisements to other devices in their homes. Marketers, forever hungry to get their products in front of the people most likely to buy them, have eagerly embraced such practices. But the companies watching what people watch have also faced scrutiny from regulators and privacy advocates over how transparent they are being with users.

Samba TV is one of the bigger companies that track viewer information to make personalized show recommendations. The company said it collected viewing data from 13.5 million smart TVs in the United States, and it has raised $40 million in venture funding from investors including Time Warner, the cable operator Liberty Global and the billionaire Mark Cuban. Samba TV has struck deals with roughly a dozen TV brands — including Sony, Sharp, TCL and Philips — to place its software on certain sets. When people set up their TVs, a screen urges them to enable a service called Samba Interactive TV, saying it recommends shows and provides special offers “by cleverly recognizing onscreen content.” But the screen, which contains the enable button, does not detail how much information Samba TV collects to make those recommendations…. Once enabled, Samba TV can track nearly everything that appears on the TV on a second-by-second basis, essentially reading pixels to identify network shows and ads, as well as programs on Netflix and HBO and even video games played on the TV.
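That pixel-reading step is a form of automatic content recognition (ACR). A minimal sketch of the idea, assuming a toy average-hash fingerprint over tiny grayscale frames; real ACR systems use far longer perceptual hashes and large reference databases, and every name below is hypothetical.

```python
def fingerprint(frame):
    """Average-hash a grayscale frame (a 2D list of 0-255 values):
    each pixel becomes 1 if brighter than the frame's mean, else 0."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p > mean else 0 for p in pixels)

def hamming(a, b):
    """Number of positions where two fingerprints differ."""
    return sum(x != y for x, y in zip(a, b))

def identify(frame, reference_prints, max_distance=1):
    """Return the closest known title, or None if nothing is near enough.
    The threshold is tiny because these toy prints are only 4 bits long."""
    fp = fingerprint(frame)
    title, ref = min(reference_prints.items(), key=lambda kv: hamming(fp, kv[1]))
    return title if hamming(fp, ref) <= max_distance else None

# Hypothetical reference prints built from previously sampled frames.
refs = {
    "network_show_a": fingerprint([[200, 10], [10, 200]]),
    "ad_spot_b": fingerprint([[10, 200], [200, 10]]),
}
```

A second-by-second tracker would simply fingerprint each captured frame and run it through a lookup like `identify`, which is why the data can cover ads and streaming apps as well as broadcast shows.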

Facebook, Google, and Microsoft Use Design to Trick You Into Handing Over Your Data, New Report Warns

A study from the Norwegian Consumer Council dug into the underhanded tactics used by Microsoft, Facebook, and Google to collect user data. “The findings include privacy intrusive default settings, misleading wording, giving users an illusion of control, hiding away privacy-friendly choices, take-it-or-leave-it choices, and choice architectures where choosing the privacy friendly option requires more effort for the users,” states the report, which includes images and examples of confusing design choices and strangely worded statements involving the collection and use of personal data.

Google makes opting out of personalized ads more of a chore than it needs to be and uses multiple pages of text, unclear design language, and, as described by the report, “hidden defaults” to push users toward the company’s desired action. “If the user tried to turn the setting off, a popup window appeared explaining what happens if Ads Personalization is turned off, and asked users to reaffirm their choice,” the report explained. “There was no explanation about the possible benefits of turning off Ads Personalization, or negative sides of leaving it turned on.” Those who wish to completely avoid personalized ads must traverse multiple menus, making that “I agree” option seem like the lesser of two evils.

In Windows 10, if a user wants to opt out of “tailored experiences with diagnostic data,” they have to click a dimmed lightbulb, while the symbol for opting in is a brightly shining bulb, says the report.

Another example has to do with Facebook. The social media site makes the “Agree and continue” option much more appealing and less intimidating than the grey “Manage Data Settings” option. The report says the company-suggested option is the easiest to use. “This ‘easy road’ consisted of four clicks to get through the process, which entailed accepting personalized ads from third parties and the use of face recognition. In contrast, users who wanted to limit data collection and use had to go through 13 clicks.”

Targeted advertising hits emergency rooms

Patients sitting in emergency rooms, at chiropractors’ offices and at pain clinics in the Philadelphia area may start noticing on their phones the kind of messages typically seen along highway billboards and public transit: personal injury law firms looking for business by casting mobile online ads at patients.

The potentially creepy part? They’re only getting fed the ad because somebody knows they are in an emergency room.

The technology behind the ads, known as geofencing, or placing a digital perimeter around a specific location, has been deployed by retailers for years to offer coupons and special offers to customers as they shop. Bringing it into health care spaces, however, is raising alarm among privacy experts.
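Mechanically, a geofence check is just a point-in-radius test on a device's reported coordinates. A minimal sketch, assuming a circular fence and the haversine great-circle formula; the coordinates and radius below are made up for illustration.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    r = 6371000  # mean Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(device, center, radius_m):
    """True if the device's reported position falls within the fence."""
    return haversine_m(device[0], device[1], center[0], center[1]) <= radius_m

# Hypothetical 100 m fence around an emergency-room entrance.
er_fence = (39.9496, -75.1503)
print(inside_geofence((39.9499, -75.1501), er_fence, 100))  # device in the waiting room
print(inside_geofence((39.9600, -75.1700), er_fence, 100))  # device across town
```

Once a device's ad identifier is seen inside the fence, it can be added to a targeting audience, which is how the law-firm ads follow patients onto their phones after they leave.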