Resources

Do Google and Facebook Threaten Our ‘Ambient Privacy’?

Until recently, ambient privacy was a simple fact of life. Recording something for posterity required making special arrangements, and most of our shared experience of the past was filtered through the attenuating haze of human memory. Even police states like East Germany, where one in seven citizens was an informer, were not able to keep tabs on their entire population. Today computers have given us that power. Authoritarian states like China and Saudi Arabia are using this newfound capacity as a tool of social control. Here in the United States, we’re using it to show ads. But the infrastructure of total surveillance is everywhere the same, and everywhere being deployed at scale….

Because our laws frame privacy as an individual right, we don’t have a mechanism for deciding whether we want to live in a surveillance society. Congress has remained silent on the matter, with both parties content to watch Silicon Valley make up its own rules. The large tech companies point to our willing use of their services as proof that people don’t really care about their privacy. But this is like arguing that inmates are happy to be in jail because they use the prison library. Confronted with the reality of a monitored world, people make the rational decision to make the best of it.

That is not consent…

Our discourse around privacy needs to expand to address foundational questions about the role of automation: To what extent is living in a surveillance-saturated world compatible with pluralism and democracy? What are the consequences of raising a generation of children whose every action feeds into a corporate database? What does it mean to be manipulated from an early age by machine learning algorithms that adaptively learn to shape our behavior? That is not the conversation Facebook or Google want us to have. Their totalizing vision is of a world with no ambient privacy and strong data protections, dominated by the few companies that can manage to hoard information at a planetary scale. They correctly see the new round of privacy laws as a weapon to deploy against smaller rivals, further consolidating their control over the algorithmic panopticon.

Facebook says there’s “No Expectation of Privacy On Social Media”

Facebook wants to dismiss a lawsuit stemming from the Cambridge Analytica scandal by arguing that it didn’t violate users’ privacy rights because there’s no expectation of privacy when using social media. “There is no invasion of privacy at all, because there is no privacy,” Facebook counsel Orin Snyder said during a pretrial hearing on the suit, according to Law360. The company reportedly didn’t deny that third parties accessed users’ data, but it instead told U.S. District Judge Vince Chhabria that there’s no “reasonable expectation of privacy” on Facebook or any other social media site.

Thanks To Facebook, Your Cellphone Company Is Watching You More Closely Than Ever

A confidential Facebook document reviewed by The Intercept shows that Facebook courts carriers, along with phone makers — some 100 different companies in 50 countries — by offering the use of even more surveillance data, pulled straight from your smartphone by Facebook itself.

Offered to select Facebook partners, the data includes not just technical information about Facebook members’ devices and use of Wi-Fi and cellular networks, but also their past locations, interests, and even their social groups. This data is sourced not just from the company’s main iOS and Android apps, but from Instagram and Messenger as well. The data has been used by Facebook partners to assess their standing against competitors, including customers lost to and won from them, but also for more controversial uses like racially targeted ads.

Some experts are particularly alarmed that Facebook has marketed the use of the information — and appears to have helped directly facilitate its use, along with other Facebook data — for the purpose of screening customers on the basis of likely creditworthiness. Such use could potentially run afoul of federal law, which tightly governs credit assessments. Facebook said it does not provide creditworthiness services and that the data it provides to cellphone carriers and makers does not go beyond what it was already collecting for other uses.

Facebook Contractors Categorize Your Private Posts To Train AI

Facebook uses thousands of third-party staffers around the world to look at Facebook and Instagram posts to help train its AI and to inform new products. “But because the contractors see users’ public and private posts, some view it as a violation of privacy.”

According to Reuters, as many as 260 contract workers in Hyderabad, India have spent more than a year labeling millions of Facebook posts dating back to 2014. They look for the subject of the post, the occasion and the author’s intent, and Facebook told Reuters it uses that information to develop new features and to potentially increase usage and ad revenue. Around the globe, Facebook has as many as 200 similar content labeling projects, many of which are used to train the company’s AI.

The contractors working in Hyderabad told Reuters they see everything from text-based status updates to videos, photos and Stories across Facebook and Instagram — including those that are shared privately. And even as Facebook embarks on its “the future is private” platform, one Facebook employee told Reuters he can’t imagine the practice going away. It’s a core part of training AI and developing the company’s products.

Facebook is Demanding Some Users Share the Password For Their Outside Email Account

Just two weeks after admitting it stored hundreds of millions of its users’ own passwords insecurely, Facebook is demanding some users fork over the password for their outside email account as the price of admission to the social network. Facebook users are being interrupted by an interstitial demanding they provide the password for the email account they gave to Facebook when signing up. “To continue using Facebook, you’ll need to confirm your email,” the message demands. “Since you signed up with [email address], you can do that automatically …” A form below the message asks for the user’s “email password.”

“That’s beyond sketchy,” security consultant Jake Williams told the Daily Beast. “They should not be taking your password or handling your password in the background. If that’s what’s required to sign up with Facebook, you’re better off not being on Facebook.” In a statement emailed to the Daily Beast after this story published, Facebook reiterated its claim it doesn’t store the email passwords. But the company also announced it will end the practice altogether. “We understand the password verification option isn’t the best way to go about this, so we are going to stop offering it,” Facebook wrote. It’s not clear how widely the new measure was deployed, but in its statement Facebook said users retain the option of bypassing the password demand and activating their account through more conventional means, such as “a code sent to their phone or a link sent to their email.” Those options are presented to users who click on the words “Need help?” in one corner of the page.

Facebook Ad Platform Could Be Inherently Discriminatory, Researchers Say

Researchers from Northeastern University, the University of Southern California, and tech accountability non-profit Upturn have released a paper that says Facebook’s ad delivery system itself can steer ads intended to be inclusive toward discrimination without explicit intent. “In a paper titled, ‘Discrimination through optimization: How Facebook’s ad delivery can lead to skewed outcomes,’ co-authors Muhammad Ali, Piotr Sapiezynski, Miranda Bogen, Aleksandra Korolova, Alan Mislove, and Aaron Rieke find that advertiser budgets and ad content affect ad delivery, skewing it along gender and racial lines even when neutral ad targeting settings are used,” reports The Register. From the report:
The researchers found that Facebook ads tend to be shown to men because women tend to click on ads more often, making them more expensive to reach through Facebook’s system. That divide becomes apparent when ad budgets are compared, because the ad budget affects ad distribution. As the paper explains, “the higher the daily budget, the smaller the fraction of men in the audience.” Such segregation may be appropriate and desirable for certain types of marketing pitches, but when applied to credit, employment and housing ads, the consequences can be problematic.

Ad content — text and images — also has a strong effect on whether ads get shown to men or women, even when the bidding strategy is the same and gender-agnostic targeting is used. In particular, the researchers found images had a surprisingly large effect on ad delivery. Ad URL destination has some effect — an ad pointing to a bodybuilding site and an ad pointing to a cosmetics site had a baseline delivery distribution of 48 percent men and 40 percent men respectively. The addition of a title and headline doesn’t change that much. But once the researchers added an image to the ad, the distribution pattern changed, with the bodybuilding site ad reaching an audience that was 75 percent male and the cosmetics ad reaching an audience that was 90 percent female. According to the researchers, their tests suggest, “Facebook has an automated image classification mechanism in place that is used to steer different ads towards different subsets of the user population.”

“In terms of credit, employment and housing ads, the problem with this system is that it discriminates where it shouldn’t: Five ads for lumber industry jobs were delivered to an audience that was more than 90 percent men and more than 70 percent white; five ads for janitorial work were delivered to an audience that was more than 65 percent women and 75 percent black,” the report adds. “Housing ads also showed a racial skew.”

The latest findings come after years of criticism of Facebook’s ad system. Last month, Facebook announced changes to the platform intended to prevent advertisers from deploying unfair credit, employment and housing ads. One week later, the U.S. Department of Housing and Urban Development sued Facebook for violating the Fair Housing Act.

Mark Zuckerberg Leveraged Facebook User Data To Fight Rivals and Help Friends, Leaked Documents Show

Facebook CEO Mark Zuckerberg once considered making deals with third-party developers just to help him find out how much users’ data is worth, NBC News reported on Tuesday. The report, which cites 4,000 leaked pages of internal documents, shines a light on the way senior company executives viewed attaching a dollar sign to sensitive user data, despite Facebook’s public commitment to protect such information. From the report:

In the same week, Zuckerberg floated the idea of pursuing 100 deals with developers “as a path to figuring out the real market value” of Facebook user data and then “setting a public rate” for developers. “The goal here wouldn’t be the deals themselves, but that through the process of negotiating with them we’d learn what developers would actually pay (which might be different from what they’d say if we just asked them about the value), and then we’d be better informed on our path to set a public rate,” Zuckerberg wrote in a chat. Facebook told NBC News that it was exploring ways to build a sustainable business, but ultimately decided not to go forward with these plans.

Zuckerberg was unfazed by the potential privacy risks associated with Facebook’s data-sharing arrangements. “I’m generally skeptical that there is as much data leak strategic risk as you think,” he wrote in an email to Facebook executive Sam Lessin. “I think we leak info to developers but I just can’t think of any instances where that data has leaked from developer to developer and caused a real issue for us.”

The report also outlines how PR people at Facebook tried to spin things. An excerpt: In a March 2014 email discussing Zuckerberg’s keynote speech at the event, where he was due to announce the removal of developers’ access to friends’ data, Jonny Thaw, a director of communications, wrote that it “may be a tough message for some developers as it may inhibit their growth.” “So one idea that came up today was potentially talking in the keynote about some of the trust changes we’re making on Facebook itself. So the message would be: ‘trust is really important to us — on Facebook, we’re doing A, B and C to help people control and understand what they’re sharing — and with platform apps we’re doing D, E and F.'” If that doesn’t work, he added, “we could announce some of Facebook’s trust initiatives in the run up to F8” to make the changes for developers “seem more natural.”

Facebook Should Notify Users Who Interact With Fake Police ‘Sock Puppet’ Accounts

Despite Facebook’s repeated warnings that law enforcement is required to use “authentic identities” on the social media platform, cops continue to create fake and impersonator accounts to secretly spy on users. By pretending to be someone else, cops are able to sneak past the privacy walls users put up and bypass legal requirements that might require a warrant to obtain that same information.

EFF is now calling on Facebook to escalate the matter with law enforcement in the United States. In addition to suspending the fake accounts, Facebook should take the following actions to address the proliferation of fake/impersonator Facebook accounts operated by law enforcement. As part of its regular transparency reports, Facebook should publish data on the number of fake/impersonator law enforcement accounts identified, what agencies they belonged to, and what action was taken. When a fake/impersonator account is identified, Facebook should alert the users and groups that interacted with the account, whether directly or indirectly.

The article also suggests updating Facebook’s Terms of Service to explicitly prohibit fake/impersonator profiles by law enforcement groups, and updating Facebook pages of law enforcement groups to inform visitors when those groups have a written policy allowing fake/impersonator law enforcement accounts. “These four changes are relatively light lifts that would enhance transparency and establish real consequences for agencies that deliberately violate the rules…”

“Facebook’s practice of taking down these individual accounts when they learn about them from the press (or from EFF) is insufficient to deter what we believe is a much larger iceberg beneath the surface.”

‘Highly confidential’ documents reveal Facebook used VPN app to track competitors

Internal documents, made public as part of a cache of documents released by the UK Parliament’s Select Committee investigating Facebook, show just how close an eye the social network was keeping on competitors like WhatsApp and Snapchat, both of which became acquisition targets.

The documents, which are labeled “highly confidential,” show slides from an internal presentation in 2013 that compares Facebook’s reach to competing apps, including WhatsApp and Snapchat.

Facebook’s presentation relied on data from Onavo, the virtual private network (VPN) service which Facebook also acquired several months later. Facebook’s use of Onavo, which has been likened to “corporate spyware,” has itself been controversial.

The company was forced to remove Onavo from Apple’s App Store earlier this year after Apple changed its developer guidelines to prohibit apps from collecting data about which other services are installed on its users’ phones.

Facebook’s global lobbying against data privacy laws

Facebook has targeted politicians around the world – including the former UK chancellor, George Osborne – promising investments and incentives while seeking to pressure them into lobbying on Facebook’s behalf against data privacy legislation, an explosive new leak of internal Facebook documents has revealed.

The documents, which have been seen by the Observer and Computer Weekly, reveal a secretive global lobbying operation targeting hundreds of legislators and regulators in an attempt to procure influence across the world, including in the UK, US, Canada, India, Vietnam, Argentina, Brazil, Malaysia and all 28 states of the EU. The documents include details of how Facebook:

– Lobbied politicians across Europe in a strategic operation to head off “overly restrictive” GDPR legislation. They include extraordinary claims that the Irish prime minister said his country could exercise significant influence while holding the presidency of the EU, promoting Facebook’s interests even though technically it was supposed to remain neutral.

– Used chief operating officer Sheryl Sandberg’s feminist memoir Lean In to “bond” with female European commissioners it viewed as hostile.

– Threatened to withhold investment from countries unless they supported or passed Facebook-friendly laws.

The documents appear to emanate from a court case against Facebook by the app developer Six4Three in California, and reveal that Sandberg considered European data protection legislation a “critical” threat to the company. A memo written after the Davos economic summit in 2013 quotes Sandberg describing the “uphill battle” the company faced in Europe on the “data and privacy front” and its “critical” efforts to head off “overly prescriptive new laws”.

Most revealingly, it includes details of the company’s “great relationship” with Enda Kenny, the Irish prime minister at the time, one of a number of people it describes as “friends of Facebook”. Ireland plays a key role in regulating technology companies in Europe because its data protection commissioner acts for all 28 member states. The memo has inflamed data protection advocates, who have long complained about the company’s “cosy” relationship with the Irish government.

Facebook admits 18% of Research spyware users were teens, not <5%

Facebook has changed its story after initially trying to downplay how it targeted teens with its Research program that a TechCrunch investigation revealed was paying them gift cards to monitor all their mobile app usage and browser traffic. “Less than 5 percent of the people who chose to participate in this market research program were teens,” a Facebook spokesperson told TechCrunch and many other news outlets in a damage control effort 7 hours after we published our report on January 29th. At the time, Facebook claimed that it had removed its Research app from iOS. The next morning we learned that wasn’t true, as Apple had already forcibly blocked the Facebook Research app for violating its Enterprise Certificate program, which is supposed to be reserved for companies distributing internal apps to employees.

It turns out that wasn’t the only time Facebook deceived the public in its response regarding the Research VPN scandal. TechCrunch has obtained Facebook’s unpublished February 21st response to questions about the Research program posed in a letter from Senator Mark Warner, who wrote to CEO Mark Zuckerberg that “Facebook’s apparent lack of full transparency with users – particularly in the context of ‘research’ efforts – has been a source of frustration for me.”

In the response from Facebook’s VP of US public policy Kevin Martin, the company admits that (emphasis ours) “At the time we ended the Facebook Research App on Apple’s iOS platform, less than 5 percent of the people sharing data with us through this program were teens. Analysis shows that number is about 18 percent when you look at the complete lifetime of the program, and also add people who had become inactive and uninstalled the app.” So 18 percent of research testers were teens. It was only less than 5 percent when Facebook got caught. Given users age 13 to 35 were eligible for Facebook’s Research program, 13 to 18 year olds made up about 22 percent of the eligible age range. That means Facebook clearly wasn’t trying to minimize teen involvement, nor were teens just a tiny fraction of users.

Facebook decided which users are interested in Nazis—and let advertisers target them directly

Facebook makes money by charging advertisers to reach just the right audience for their message, even when that audience is made up of people interested in the perpetrators of the Holocaust or explicitly neo-Nazi music. Despite promises of greater oversight following past advertising scandals, a Times review shows that Facebook has continued to allow advertisers to target hundreds of thousands of users the social media firm believes are curious about topics such as “Joseph Goebbels,” “Josef Mengele,” “Heinrich Himmler,” the neo-Nazi punk band Skrewdriver and Benito Mussolini’s long-defunct National Fascist Party.

Experts say that this practice runs counter to the company’s stated principles and can help fuel radicalization online. “What you’re describing, where a clear hateful idea or narrative can be amplified to reach more people, is exactly what they said they don’t want to do and what they need to be held accountable for,” said Oren Segal, director of the Anti-Defamation League’s center on extremism. After being contacted by The Times, Facebook said that it would remove many of the audience groupings from its ad platform.

Facebook’s broad reach and sophisticated advertising tools brought in a record $55 billion in ad revenue in 2018. Profit margins stayed above 40%, thanks to a high degree of automation, with algorithms sorting users into marketable subsets based on their behavior — then choosing which ads to show them. But the lack of human oversight has also brought the company controversy.

In 2017, ProPublica found that the company sold ads based on any user-generated phrase, including “Jew hater” and “Hitler did nothing wrong.” Following the murder of 11 congregants at a synagogue in Pittsburgh in 2018, The Intercept found that Facebook gave advertisers the ability to target users interested in the anti-Semitic “white genocide conspiracy theory,” which the suspected killer cited as inspiration before the attack.

Facebook Pays Teens To Install VPN That Spies On Them

Since 2016, Facebook has been paying users ages 13 to 35 up to $20 per month plus referral fees to sell their privacy by installing the iOS or Android “Facebook Research” app. Facebook even asked users to screenshot their Amazon order history page. The program is administered through beta testing services Applause, BetaBound and uTest to cloak Facebook’s involvement, and is referred to in some documentation as “Project Atlas,” a fitting name for Facebook’s effort to map new trends and rivals around the globe.

We asked Guardian Mobile Firewall’s security expert Will Strafach to dig into the Facebook Research app, and he told us that “If Facebook makes full use of the level of access they are given by asking users to install the Certificate, they will have the ability to continuously collect the following types of data: private messages in social media apps, chats from instant messaging apps — including photos/videos sent to others, emails, web searches, web browsing activity, and even ongoing location information by tapping into the feeds of any location tracking apps you may have installed.” It’s unclear exactly what data Facebook is concerned with, but it gets nearly limitless access to a user’s device once they install the app.

‘The goal is to automate us’: welcome to the age of surveillance capitalism

The behaviour of the digital giants looks rather different from the roseate hallucinations of Wired magazine. What one sees instead is a colonising ruthlessness of which John D Rockefeller would have been proud. First of all there was the arrogant appropriation of users’ behavioural data – viewed as a free resource, there for the taking. Then the use of patented methods to extract or infer data even when users had explicitly denied permission, followed by the use of technologies that were opaque by design and fostered user ignorance.

And, of course, there is also the fact that the entire project was conducted in what was effectively lawless – or at any rate law-free – territory. Thus Google decided that it would digitise and store every book ever printed, regardless of copyright issues. Or that it would photograph every street and house on the planet without asking anyone’s permission. Facebook launched its infamous “beacons”, which reported a user’s online activities and published them to others’ news feeds without the knowledge of the user. And so on, in accordance with the disrupter’s mantra that “it is easier to ask for forgiveness than for permission”.

The combination of state surveillance and its capitalist counterpart means that digital technology is separating the citizens in all societies into two groups: the watchers (invisible, unknown and unaccountable) and the watched. This has profound consequences for democracy because asymmetry of knowledge translates into asymmetries of power.

Most Facebook users don’t know that it records a list of their interests, new study finds

Seventy-four percent of Facebook users are unaware that Facebook records a list of their interests for ad-targeting purposes, according to a new study from the Pew Research Center.

Participants in the study were first pointed to Facebook’s ad preferences page, which lists out a person’s interests. Nearly 60 percent of participants admitted that Facebook’s lists of interests were very or somewhat accurate to their actual interests, and 51 percent said they were uncomfortable with Facebook creating the list.

Facebook has weathered serious questions about its collection of personal information in recent years. CEO Mark Zuckerberg testified before Congress last year acknowledging privacy concerns and touching upon the company’s collection of personal information. While Zuckerberg said Facebook users have complete control over the information they upload and the information Facebook uses to actively target ads at its users, it’s clear from the Pew study that most people are not aware of Facebook’s collection tactics.

The Pew study also demonstrates that, while Facebook offers a number of transparency and data control tools, most users are not aware of where they should be looking. Even when the relevant information is located, there are often multiple steps to go through to delete assigned interests.

Several Popular Apps Share Data With Facebook Without User Consent

Some of the most popular apps for Android smartphones, including Skyscanner, TripAdvisor and MyFitnessPal, are transmitting data to Facebook without the consent of users in a potential breach of EU regulations.

In a study of 34 popular Android apps, the campaign group Privacy International found that at least 20 of them send certain data to Facebook the second that they are opened on a phone, before users can be asked for permission. Information sent instantly included the app’s name, the user’s unique ID with Google, and the number of times the app was opened and closed since being downloaded. Some, such as travel site Kayak, later sent detailed information about people’s flight searches to Facebook, including travel dates, whether the user had children and which flights and destinations they had searched for. European law on data-sharing changed in May with the introduction of General Data Protection Regulation and mobile apps are required to have the explicit consent of users before collecting their personal information.

As Facebook Raised a Privacy Wall, It Carved an Opening for Tech Giants

Internal documents show that the social network gave Microsoft, Amazon, Spotify and others far greater access to people’s data than it has disclosed.

For years, Facebook gave some of the world’s largest technology companies more intrusive access to users’ personal data than it has disclosed, effectively exempting those business partners from its usual privacy rules, according to internal records and interviews.

The special arrangements are detailed in hundreds of pages of Facebook documents obtained by The New York Times. The records, generated in 2017 by the company’s internal system for tracking partnerships, provide the most complete picture yet of the social network’s data-sharing practices. They also underscore how personal data has become the most prized commodity of the digital age, traded on a vast scale by some of the most powerful companies in Silicon Valley and beyond.

Facebook allowed Microsoft’s Bing search engine to see the names of virtually all Facebook users’ friends without consent, the records show, and gave Netflix and Spotify the ability to read Facebook users’ private messages.

The social network permitted Amazon to obtain users’ names and contact information through their friends, and it let Yahoo view streams of friends’ posts as recently as this summer, despite public statements that it had stopped that type of sharing years earlier.

Facebook has been reeling from a series of privacy scandals, set off by revelations in March that a political consulting firm, Cambridge Analytica, improperly used Facebook data to build tools that aided President Trump’s 2016 campaign. Acknowledging that it had breached users’ trust, Facebook insisted that it had instituted stricter privacy protections long ago. Mark Zuckerberg, the chief executive, assured lawmakers in April that people “have complete control” over everything they share on Facebook.

[Facebook’s strategy in times of crisis: delay, deny and deflect.]

Facebook began forming data partnerships when it was still a relatively young company. Mr. Zuckerberg was determined to weave Facebook’s services into other sites and platforms, believing it would stave off obsolescence and insulate Facebook from competition. Every corporate partner that integrated Facebook data into its online products helped drive the platform’s expansion, bringing in new users, spurring them to spend more time on Facebook and driving up advertising revenue. At the same time, Facebook got critical data back from its partners.

The partnerships were so important that decisions about forming them were vetted at high levels, sometimes by Mr. Zuckerberg and Sheryl Sandberg, the chief operating officer, Facebook officials said. While many of the partnerships were announced publicly, the details of the sharing arrangements typically were confidential.

Facebook also allowed Spotify, Netflix and the Royal Bank of Canada to read, write and delete users’ private messages, and to see all participants on a thread — privileges that appeared to go beyond what the companies needed to integrate Facebook into their systems, the records show. Facebook acknowledged that it did not consider any of those three companies to be service providers. Spokespeople for Spotify and Netflix said those companies were unaware of the broad powers Facebook had granted them. A spokesman for Netflix said Wednesday that it had used the access only to enable customers to recommend TV shows and movies to their friends.

A Royal Bank of Canada spokesman disputed that the bank had had any such access. (Aspects of some sharing partnerships, including those with the Royal Bank of Canada and Bing, were first reported by The Wall Street Journal.)

Spotify, which could view messages of more than 70 million users a month, still offers the option to share music through Facebook Messenger. But Netflix and the Canadian bank no longer needed access to messages because they had deactivated features that incorporated it.

These were not the only companies that had special access longer than they needed it. Yahoo, The Times and others could still get Facebook users’ personal information in 2017.

Yahoo could view real-time feeds of friends’ posts for a feature that the company had ended in 2012. A Yahoo spokesman declined to discuss the partnership in detail but said the company did not use the information for advertising. The Times — one of nine media companies named in the documents — had access to users’ friend lists for an article-sharing application it had discontinued in 2011. A spokeswoman for the news organization said it was not obtaining any data.

Facebook’s internal records also revealed more about the extent of sharing deals with over 60 makers of smartphones, tablets and other devices, agreements first reported by The Times in June.

Facebook empowered Apple to hide from Facebook users all indicators that its devices were asking for data. Apple devices also had access to the contact numbers and calendar entries of people who had changed their account settings to disable all sharing, the records show.

Apple officials said they were not aware that Facebook had granted its devices any special access. They added that any shared data remained on the devices and was not available to anyone other than the users.

Facebook has Filed a Patent To Calculate Your Future Location

Facebook has filed several patent applications with the U.S. Patent and Trademark Office for technology that uses your location data to predict where you’re going and when you’re going to be offline.

A May 30, 2017, Facebook application titled “Offline Trajectories” describes a method to predict where you’ll go next based on your location data. The technology described in the patent would calculate a “transition probability based at least in part on previously logged location data associated with a plurality of users who were at the current location.” In other words, the technology could also use the data of other people you know, as well as that of strangers, to make predictions. If the company could predict when you are about to be in an offline area, Facebook content “may be prefetched so that the user may have access to content during the period where there is a lack of connectivity.”

Another Facebook patent application titled “Location Prediction Using Wireless Signals on Online Social Networks” describes how tracking the strength of Wi-Fi, Bluetooth, cellular, and near-field communication (NFC) signals could be used to estimate your current location, in order to anticipate where you will go next. This “background signal” information is used as an alternative to GPS because, as the patent describes, it may provide “the advantage of more accurately or precisely determining a geographic location of a user.” The technology could learn the category of your current location (e.g., bar or gym), the time of your visit to the location, the hours that entity is open, and the popular hours of the entity.

Yet another Facebook patent application, “Predicting Locations and Movements of Users Based on Historical Locations for Users of an Online System,” further details how location data from multiple people would be used to glean location and movement trends and to model location chains. According to the patent application, these could be used for a “variety of applications,” including “advertising to users based on locations and for providing insights into the movements of users.” The technology could even differentiate movement trends between people who live in a city and people who are just visiting.
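To make the patents’ “transition probability” language more concrete, here is a minimal sketch of how next-location prediction from aggregated location histories can work in principle. It is an illustration of the general technique only; the data layout, function names, and example places are hypothetical and are not drawn from Facebook’s filings.

```python
# Hypothetical sketch: estimate where a user is likely to go next by
# pooling previously logged location sequences from many users and
# computing transition probabilities between places.
from collections import Counter, defaultdict

def build_transition_probabilities(location_logs):
    """location_logs: a list of per-user location sequences, e.g.
    [["home", "gym", "office"], ["home", "office", "cafe"]].
    Returns {current_location: {next_location: probability}}."""
    counts = defaultdict(Counter)
    for sequence in location_logs:
        for current, nxt in zip(sequence, sequence[1:]):
            counts[current][nxt] += 1
    return {
        current: {nxt: n / sum(nexts.values()) for nxt, n in nexts.items()}
        for current, nexts in counts.items()
    }

def predict_next(transitions, current_location):
    """Return the most probable next location, or None if the
    current location was never observed in the logged data."""
    options = transitions.get(current_location)
    if not options:
        return None
    return max(options, key=options.get)

if __name__ == "__main__":
    # Hypothetical logged location data pooled across several users.
    logs = [
        ["home", "gym", "office", "home"],
        ["home", "office", "cafe", "home"],
        ["home", "gym", "office"],
    ]
    transitions = build_transition_probabilities(logs)
    print(predict_next(transitions, "gym"))  # -> office
    print(transitions["home"])               # distribution over likely next stops from "home"
```

The patent applications describe combining signals like this with Wi-Fi, Bluetooth, and NFC readings and with models of “location chains,” but the core idea is the same: pool many users’ histories and estimate where someone at a given place is most likely to go next.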

Internal Emails Show Facebook Weighing the Privacy Risks of Quietly Collecting Call and Text Records From Its Android Users—Then Going Ahead Anyway

Earlier this year, many Android users were shocked to discover that Facebook had been collecting a record of their call and SMS history, as revealed by the company’s data download tool. Now, internal emails released by the UK Parliament show how the decision was made internally.

According to the emails, developers knew the data was sensitive, but they still pushed to collect it as a way of expanding Facebook’s reach. The emails show Facebook’s growth team looking to call log data as a way to improve Facebook’s algorithms as well as to locate new contacts through the “People You May Know” feature. Notably, the project manager recognized it as “a pretty high-risk thing to do from a PR perspective,” but that risk seems to have been outweighed by the potential for user growth.

Initially, the feature was intended to require users to opt in, typically through an in-app pop-up dialog box. But as developers looked for ways to get users signed up, it became clear that Android’s data permissions could be manipulated to automatically enroll users if the new feature was deployed in a certain way.

Facebook Filed A Patent To Predict Your Household’s Demographics Based On Family Photos

Facebook has submitted a patent application for technology that would predict who your family and other household members are, based on images and captions posted to Facebook, as well as your device information, like shared IP addresses. The application, titled “Predicting household demographics based on image data,” was originally filed May 10, 2017, and made public today.

The system Facebook proposes in its patent application would use facial recognition and learning models trained to understand text to help Facebook better understand whom you live with and interact with most. The technology described in the patent looks for clues in your profile pictures on Facebook and Instagram, as well as photos of you that you or your friends post.

It would note the people identified in a photo, and how frequently the people are included in your pictures. Then, it would assess information from comments on the photos, captions, or tags (#family, #mom, #kids) — anything that indicates whether someone is a husband, daughter, cousin, etc. — to predict what your family/household actually looks like. According to the patent application, Facebook’s prediction models would also analyze “messaging history, past tagging history, [and] web browsing history” to see if multiple people share IP addresses (a unique identifier for every internet network).