Archives 2018

Chinese schools enforce ‘smart uniforms’ with GPS tracking system to monitor students

Chinese schools have begun enforcing “smart uniforms” embedded with computer chips to monitor student movements and prevent them from skipping classes.

Eleven schools in the south-west province of Guizhou have introduced the uniforms, which were developed by local tech firm Guizhou Guanyu Technology.

As students enter the school, the time and date are recorded along with a short video that parents can access via a mobile app.

Facial recognition further ensures that each uniform is worn by its rightful owner to prevent students from cheating the system.

Skipping classes triggers an alarm to inform teachers and parents of the truancy, while an automatic voice alarm activates if a student walks out of school without permission.

A GPS system tracks student movements even beyond the school grounds.

The two chips — inserted into each uniform’s shoulders — can withstand up to 500 washes and 150 degrees Celsius, the company told state media Global Times.

Alarms will also sound if a student falls asleep in class, while parents can monitor purchases their child makes at the school and set spending limits via a mobile app, according to the company’s official website.

Stare Into The Lights My Pretties

Parents are using GPS ankle monitors to track their teenagers like criminals

There’s no shortage of GPS trackers for parents who want to keep tabs on their children’s driving. After all, car accidents are the leading cause of death for American teens. For some parents, that’s enough.

For others, nothing but a full-on ankle monitor—the kind used to track people released on bail or parole—will do.

Frank Kopczynski, the owner of Tampa Bay Monitoring (in fact located in Clearwater, Florida), also runs Action Plus Bail Bonds. He said the biggest challenge in strapping an ankle monitor to teenage non-offenders is whether they can remove it themselves.

“We provide a bracelet that is near-impossible to cut off,” Kopczynski told Quartz. “It also allows us to have two-way communication and gives us the option of sounding a piercing alarm.”

Along with the 95-decibel siren, if a teen is out past curfew, their parents can call Tampa Bay Monitoring’s office, and Kopczynski or one of his employees will activate the ankle monitor’s speaker and tell the child it’s time to get home or the police will be called. Hearing “this god-like voice out of nowhere” is generally effective, said Kopczynski; since the system is two-way, staff can also monitor the teen covertly.

The electronic monitoring industry has more than doubled in size in recent years, and has expanded well beyond its initial market of people on bail or parole. For example, immigrants detained by US authorities and put on an electronic monitoring program are required to pay extremely high fees—$880 for activation plus $420 a month— while they await their hearings.

As Facebook Raised a Privacy Wall, It Carved an Opening for Tech Giants

Internal documents show that the social network gave Microsoft, Amazon, Spotify and others far greater access to people’s data than it has disclosed.

For years, Facebook gave some of the world’s largest technology companies more intrusive access to users’ personal data than it has disclosed, effectively exempting those business partners from its usual privacy rules, according to internal records and interviews.

The special arrangements are detailed in hundreds of pages of Facebook documents obtained by The New York Times. The records, generated in 2017 by the company’s internal system for tracking partnerships, provide the most complete picture yet of the social network’s data-sharing practices. They also underscore how personal data has become the most prized commodity of the digital age, traded on a vast scale by some of the most powerful companies in Silicon Valley and beyond.

Facebook allowed Microsoft’s Bing search engine to see the names of virtually all Facebook users’ friends without consent, the records show, and gave Netflix and Spotify the ability to read Facebook users’ private messages.

The social network permitted Amazon to obtain users’ names and contact information through their friends, and it let Yahoo view streams of friends’ posts as recently as this summer, despite public statements that it had stopped that type of sharing years earlier.

Facebook has been reeling from a series of privacy scandals, set off by revelations in March that a political consulting firm, Cambridge Analytica, improperly used Facebook data to build tools that aided President Trump’s 2016 campaign. Acknowledging that it had breached users’ trust, Facebook insisted that it had instituted stricter privacy protections long ago. Mark Zuckerberg, the chief executive, assured lawmakers in April that people “have complete control” over everything they share on Facebook.


Facebook began forming data partnerships when it was still a relatively young company. Mr. Zuckerberg was determined to weave Facebook’s services into other sites and platforms, believing it would stave off obsolescence and insulate Facebook from competition. Every corporate partner that integrated Facebook data into its online products helped drive the platform’s expansion, bringing in new users, spurring them to spend more time on Facebook and driving up advertising revenue. At the same time, Facebook got critical data back from its partners.

The partnerships were so important that decisions about forming them were vetted at high levels, sometimes by Mr. Zuckerberg and Sheryl Sandberg, the chief operating officer, Facebook officials said. While many of the partnerships were announced publicly, the details of the sharing arrangements typically were confidential.

Facebook also allowed Spotify, Netflix and the Royal Bank of Canada to read, write and delete users’ private messages, and to see all participants on a thread — privileges that appeared to go beyond what the companies needed to integrate Facebook into their systems, the records show. Facebook acknowledged that it did not consider any of those three companies to be service providers. Spokespeople for Spotify and Netflix said those companies were unaware of the broad powers Facebook had granted them. A spokesman for Netflix said Wednesday that it had used the access only to enable customers to recommend TV shows and movies to their friends.

A Royal Bank of Canada spokesman disputed that the bank had had any such access. (Aspects of some sharing partnerships, including those with the Royal Bank of Canada and Bing, were first reported by The Wall Street Journal.)

Spotify, which could view messages of more than 70 million users a month, still offers the option to share music through Facebook Messenger. But Netflix and the Canadian bank no longer needed access to messages because they had deactivated features that incorporated it.

These were not the only companies that had special access longer than they needed it. Yahoo, The Times and others could still get Facebook users’ personal information in 2017.

Yahoo could view real-time feeds of friends’ posts for a feature that the company had ended in 2012. A Yahoo spokesman declined to discuss the partnership in detail but said the company did not use the information for advertising. The Times — one of nine media companies named in the documents — had access to users’ friend lists for an article-sharing application it had discontinued in 2011. A spokeswoman for the news organization said it was not obtaining any data.

Facebook’s internal records also revealed more about the extent of sharing deals with over 60 makers of smartphones, tablets and other devices, agreements first reported by The Times in June.

Facebook empowered Apple to hide from Facebook users all indicators that its devices were asking for data. Apple devices also had access to the contact numbers and calendar entries of people who had changed their account settings to disable all sharing, the records show.

Apple officials said they were not aware that Facebook had granted its devices any special access. They added that any shared data remained on the devices and was not available to anyone other than the users.


Amazon error allowed Alexa user to eavesdrop on another home

A user of Amazon’s Alexa voice assistant in Germany got access to more than a thousand recordings from another user because of “a human error” by the company.

The customer had asked to listen back to recordings of his own activities made by Alexa but he was also able to access 1,700 audio files from a stranger when Amazon sent him a link, German trade publication c’t reported.

On the recordings, a man and a female companion could be overheard in his home and the magazine was able to identify and contact him through the recorded information, according to the report.

Facebook Has Filed a Patent To Calculate Your Future Location

Facebook has filed several patent applications with the U.S. Patent and Trademark Office for technology that uses your location data to predict where you’re going and when you’re going to be offline.

A May 30, 2017, Facebook application titled “Offline Trajectories” describes a method to predict where you’ll go next based on your location data. The technology described in the patent would calculate a “transition probability based at least in part on previously logged location data associated with a plurality of users who were at the current location.” In other words, the technology could also use the data of other people you know, as well as that of strangers, to make predictions. If the company could predict when you are about to be in an offline area, Facebook content “may be prefetched so that the user may have access to content during the period where there is a lack of connectivity.”
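The “transition probability” computation the patent describes can be illustrated with a minimal Markov-style sketch; the function name, location labels, and logs below are invented for illustration and are not taken from the patent.

```python
from collections import Counter, defaultdict

def transition_probabilities(location_logs):
    """Estimate P(next location | current location) from logged
    (current, next) location pairs pooled across many users."""
    counts = defaultdict(Counter)
    for current, nxt in location_logs:
        counts[current][nxt] += 1
    probs = {}
    for current, nexts in counts.items():
        total = sum(nexts.values())
        probs[current] = {loc: n / total for loc, n in nexts.items()}
    return probs

# Hypothetical logs from many users who passed through the same places.
logs = [("cafe", "gym"), ("cafe", "office"), ("cafe", "gym"), ("gym", "home")]
probs = transition_probabilities(logs)
# From "cafe", the most likely next location is "gym" (2 of 3 transitions),
# so content for "gym" could be prefetched before the user loses connectivity.
```

The same table, built from other users' histories, is what lets the system make predictions about a person who has never personally logged that transition.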

Another Facebook patent application titled “Location Prediction Using Wireless Signals on Online Social Networks” describes how tracking the strength of Wi-Fi, Bluetooth, cellular, and near-field communication (NFC) signals could be used to estimate your current location, in order to anticipate where you will go next. This “background signal” information is used as an alternative to GPS because, as the patent describes, it may provide “the advantage of more accurately or precisely determining a geographic location of a user.” The technology could learn the category of your current location (e.g., bar or gym), the time of your visit to the location, the hours that entity is open, and the popular hours of the entity.
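Localisation from background signal strengths is commonly done by fingerprinting: compare the signals a device currently sees against profiles recorded at known places. A minimal nearest-neighbour sketch of that idea (all names and numbers hypothetical, not taken from the patent):

```python
def locate(observed, fingerprint_db):
    """Nearest-neighbour localisation from background signal strengths.

    observed: {signal_source: strength_dbm} the device sees right now.
    fingerprint_db: {place: {signal_source: strength_dbm}} surveyed earlier.
    Returns the place whose recorded profile is closest (Euclidean distance
    over the sources both profiles share)."""
    def distance(profile):
        shared = observed.keys() & profile.keys()
        return sum((observed[s] - profile[s]) ** 2 for s in shared) ** 0.5
    return min(fingerprint_db, key=lambda place: distance(fingerprint_db[place]))

db = {
    "gym": {"wifi_A": -40, "wifi_B": -75},
    "bar": {"wifi_A": -80, "wifi_B": -45},
}
print(locate({"wifi_A": -42, "wifi_B": -70}, db))  # prints gym
```

Because Wi-Fi, Bluetooth, and NFC beacons are tied to specific venues, a match like this can reveal not just coordinates but the category of place, which GPS alone cannot.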

Yet another Facebook patent application, “Predicting Locations and Movements of Users Based on Historical Locations for Users of an Online System,” further details how location data from multiple people would be used to glean location and movement trends and to model location chains. According to the patent application, these could be used for a “variety of applications,” including “advertising to users based on locations and for providing insights into the movements of users.” The technology could even differentiate movement trends among people who live in a city and who are just visiting a city.

YouTube’s Top-Earner For 2018 Is a 7-Year-Old

In 2018 the most-downloaded iPhone app was YouTube, reports USA Today, while Amazon’s best-selling item was their Fire TV Stick for streaming video. The No. 1 earner on YouTube this year is 7-year-old Ryan. For all those unboxing videos and playing with toys — and his own new line of toys at Walmart — he and his family will pull in a cool $22 million, according to Forbes. Ryan launched the channel in 2015 — when he was four — and now has 17.3 million followers.

Internal Emails Show Facebook Weighing the Privacy Risks of Quietly Collecting Call and Text Records From Its Android Users—Then Going Ahead Anyway

Earlier this year, many Android users were shocked to discover that Facebook had been collecting a record of their call and SMS history, as revealed by the company’s data download tool. Now, internal emails released by the UK Parliament show how the decision was made internally.

According to the emails, developers knew the data was sensitive, but they still pushed to collect it as a way of expanding Facebook’s reach. The emails show Facebook’s growth team looking to call log data as a way to improve Facebook’s algorithms as well as to locate new contacts through the “People You May Know” feature. Notably, the project manager recognized it as “a pretty high-risk thing to do from a PR perspective,” but that risk seems to have been overwhelmed by the potential user growth.

Initially, the feature was intended to require users to opt in, typically through an in-app pop-up dialog box. But as developers looked for ways to get users signed up, it became clear that Android’s data permissions could be manipulated to automatically enroll users if the new feature was deployed in a certain way.

Thieves Are Boosting the Signal From Key Fobs Inside Homes To Steal Vehicles

According to Markham automotive security specialist Jeff Bates, owner of Lockdown Security, wireless key fobs have a role to play in many recent car thefts, with thieves intercepting and rerouting their signals — even from inside homes — to open and steal cars. According to Bates, many of these thieves are using a method called “relay theft.” Key fobs are constantly broadcasting a signal that communicates with a specific vehicle, he said, and when it comes into a close enough range, the vehicle will open and start. The thief will bring a device close to the home’s door, close to where most keys are sitting, to boost the fob’s signal. They leave another device near the vehicle, which receives the signal and opens the car. Many people don’t realize it, Bates said, but the thieves don’t need the fob in the car to drive it away.
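The relay attack can be captured in a toy link-budget model: the fob's signal normally fades below the car's detection threshold over driveway distances, and the thief's amplifier adds enough gain to lift it back over that threshold. This sketch is purely illustrative; the numbers are invented, not measured.

```python
def fob_reachable(fob_signal_dbm, distance_m, path_loss_db_per_m=2.0,
                  receiver_threshold_dbm=-60.0, relay_gain_db=0.0):
    """Toy model: the car unlocks when the fob's signal, attenuated by
    distance and boosted by any relay gain, stays above the threshold."""
    received = fob_signal_dbm - path_loss_db_per_m * distance_m + relay_gain_db
    return received >= receiver_threshold_dbm

# Fob inside the house, car parked 30 m away: signal too weak, car stays locked.
print(fob_reachable(-20.0, 30.0))                      # False
# Same distance, but a relay device near the front door adds 40 dB of gain.
print(fob_reachable(-20.0, 30.0, relay_gain_db=40.0))  # True
```

This is also why the commonly suggested mitigation, storing the fob in a metal box or signal-blocking pouch, works: it attenuates the broadcast before any relay can pick it up.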

An Eye-Scanning Lie Detector Is Forging a Dystopian Future

Sitting in front of a Converus EyeDetect station, it’s impossible not to think of Blade Runner. In the 1982 sci-fi classic, Harrison Ford’s rumpled detective identifies artificial humans using a steam-punk Voight-Kampff device that watches their eyes while they answer surreal questions. EyeDetect’s questions are less philosophical, and the penalty for failure is less fatal (Ford’s character would whip out a gun and shoot). But the basic idea is the same: By capturing imperceptible changes in a participant’s eyes — measuring things like pupil dilation and reaction time — the device aims to sort deceptive humanoids from genuine ones.

It claims to be, in short, a next-generation lie detector. Polygraph tests are a $2 billion industry in the US and, despite their inaccuracy, are widely used to screen candidates for government jobs. Released in 2014 by Converus, a Mark Cuban-funded startup, EyeDetect is pitched by its makers as a faster, cheaper, and more accurate alternative to the notoriously unreliable polygraph. By many measures, EyeDetect appears to be the future of lie detection — and it’s already being used by local and federal agencies to screen job applicants.

In documents obtained through public records requests, Converus says that the Defense Intelligence Agency and the US Customs and Border Protection are also trialing the technology. Converus says that individual locations of Best Western, FedEx, Four Points by Sheraton, McDonald’s, and IHOP chains have used the tech in Guatemala and Panama within the last three years. (A 1988 federal law prohibits most private companies from using any kind of lie detector on staff or recruits in America.) WIRED reached out to all five companies, but none were able to confirm that they had used EyeDetect.

Google personalizes search results even when you’re logged out

According to a new study conducted by Google competitor DuckDuckGo, it does not seem possible to avoid personalization when using Google search, even by logging out of your Google account and using the private browsing “incognito” mode.

DuckDuckGo conducted the study in June of this year, at the height of the US midterm election season. It did so with the ostensible goal of confirming whether Google’s search results exacerbate ideological bubbles by feeding you only information you’ve signaled you want to consume via past behavior and the data collected about you. It’s not clear whether that question can be reliably answered with these findings, and DuckDuckGo is obviously a biased source with something to gain by pointing out how flawed Google’s approach may be. But the findings are nonetheless interesting because they highlight just how much variance there is in Google search results, even when controlling for factors like location.

AI Mistakes Ad On a Bus For an Actual CEO, Then Publicly Shames Them For ‘Jaywalking’

Since last year, many Chinese cities have cracked down on jaywalking by investing in facial recognition systems and AI-powered surveillance cameras. Jaywalkers are identified and shamed by displaying their photographs on large public screens… Developments are also underway to engage the country’s mobile network operators and social media platforms, such as Tencent Holdings’ WeChat and Sina Weibo, to establish a system in which offenders will receive personal text messages as soon as they are caught violating traffic rules….

Making a compelling case for change is the recent experience of Dong Mingzhu, chairwoman of China’s biggest maker of air conditioners Gree Electric Appliances, who found her face splashed on a huge screen erected along a street in the port city of Ningbo… That artificial intelligence-backed surveillance system, however, erred in capturing Dong’s image on Wednesday from an advertisement on the side of a moving bus. The traffic police in Ningbo, a city in the eastern coastal province of Zhejiang, were quick to recognise the mistake, writing in a post on microblog Sina Weibo on Wednesday that it had deleted the snapshot. It also said the surveillance system would be completely upgraded to cut incidents of false recognition in future.

Can Police control your self-driving car?

In 2009 GM equipped 17,000 of its units with “remote ignition block,” a kill switch that can turn off the engine if the car is stolen. But that was just the beginning.

Imagine this: You’re leaving work, walking to your car, and you find an empty parking spot — someone stole your brand new Tesla (or whatever fancy autonomous car you’re driving). When you call the police, they ask your permission for a “takeover,” which you promptly give them. Next thing you know, your car is driving itself to the nearest police station. And here’s the kicker — if the thief is inside, they will remain locked in until police can make the arrest.

This futuristic, almost slapstick scenario is closer than we think, says Hans Schönfeld, chief innovation officer of the Dutch police. His team has already run several experiments to test the crime-halting possibilities of autonomous cars. “We wanted to know if we can make them stop or drive them to certain locations,” Schönfeld tells me. “And the result is: yes, we probably can.”

The Dutch police tested Tesla, Audi, Mercedes, and Toyota vehicles, he reports, adding “We do this in collaboration with these car companies because this information is valuable to them, too.

“If we can hack into their cars, others can as well.”

Companies ‘can sack workers for refusing to use fingerprint scanners’

Businesses using fingerprint scanners to monitor their workforce can legally sack employees who refuse to hand over biometric information on privacy grounds, the Fair Work Commission has ruled.

The ruling, which will be appealed, was made in the case of Jeremy Lee, a Queensland sawmill worker who refused to comply with a new fingerprint scanning policy introduced at his work in Imbil, north of the Sunshine Coast, late last year.

Fingerprint scanning was used to monitor the clock-on and clock-off times of about 150 sawmill workers at two sites and was preferred to swipe cards because it prevented workers from fraudulently signing in on behalf of their colleagues to mask absences.

The company, Superior Woods, had no privacy policy covering workers and failed to comply with a requirement to properly notify individuals about how and why their data was being collected and used. The biometric data was stored on servers located off-site, in space leased from a third party.

Lee argued the business had never sought its workers’ consent to use fingerprint scanning, and feared his biometric data would be accessed by unknown groups and individuals.

“I am unwilling to consent to have my fingerprints scanned because I regard my biometric data as personal and private,” Lee wrote to his employer last November.

“Information technology companies gather as much information/data on people as they can.

“Whether they admit to it or not. (See Edward Snowden) Such information is used as currency between corporations.”

Lee was neither antagonistic nor belligerent in his refusals, according to evidence before the commission. He simply declined to have his fingerprints scanned and continued using a physical sign-in booklet to record his attendance.

He had not missed a shift in more than three years.

The employer warned him about his stance repeatedly, and claimed the fingerprint scanner did not actually record a fingerprint, but rather “a set of data measurements which is processed via an algorithm”. The employer told Lee there was no way the data could be “converted or used as a finger print”, and would only be used to link his payroll number to his clock-on and clock-off times. It said the fingerprint scanners were also needed for workplace safety, to accurately identify which workers were on site in the event of an accident.

Lee was given a final warning in January, and responded that he valued his job a “great deal” and wanted to find an alternative way to record his attendance.

“I would love to continue to work for Superior Wood as it is a good, reliable place to work,” he wrote to his employer. “However, I do not consent to my biometric data being taken. The reason for writing this letter is to impress upon you that I am in earnest and hope there is a way we can negotiate a satisfactory outcome.”

Lee was sacked in February, and lodged an unfair dismissal claim in the Fair Work Commission.

He argued he was sacked for failing to comply with an unreasonable direction, because the fingerprint scanning was in breach of Australian privacy laws. His biometric information was sent to a separate corporate entity that was not his employer, Lee argued. His employer had no privacy policy in place at the time, and he argued it had failed to issue a privacy collection notice to its employees, as required by law. Lee argued the company had effectively breached the privacy of its 150 workers twice a day, every day since fingerprint scanning was introduced.

But the unfair dismissal claim failed. The Fair Work Commission found the site attendance policy that Lee had breached was lawful. It found that although the company may have breached privacy laws, the site-attendance policy was not automatically rendered unlawful as it related to Lee.

“While there may have been a breach of the Privacy Act relevant to the notice given to employees, the private and sensitive information was not collected and would never be collected relevant to Mr Lee because of his steadfast refusal,” the commission found. “The policy itself is not unlawful, simply the manner in which the employer went about trying to obtain consent may have constituted a breach of the Privacy Act.”

Lee told Guardian Australia he planned to appeal. He said the ruling implied that Australians only owned their biometric data until an employer demanded it, at which point they could be sacked if they refused to consent.

“My biometric data is inherently mine and inseparable from me,” Lee said. “My employer can’t demand it or sack me for refusing to give it.”

“It’s not about this particular employer. Ownership to me means that I can refuse consent without being sacked.”

NBCUniversal Taps Machine Learning to Tie Ads to Relevant Moments on TV

NBCUniversal announced a new machine learning tool today that helps brands place ads around scenes relevant to their product across any of the media giant’s broadcast and cable properties. The Contextual Intelligence Platform analyzes programming scripts, closed captioning data and visual descriptors of both ads and shows to find opportune moments for a given advertiser to appear as well as an emotional gauge for each scene determined by proprietary algorithms.

Focus groups for ads placed with the platform thus far have shown an average bump of 19 percent in brand memorability, 13 percent in likability and 64 percent in message memorability, according to Josh Feldman, VP and head of marketing and advertising creative at NBCU. The announcement comes as linear television providers continue to grapple with how to bring digital targeting practices to a medium that still largely operates on traditional phone-call media buying and manual ad placements. NBCU is now working with three to five advertisers on the system’s beta test, and is aiming for an official release early next year.

My devices are sending and receiving data every two seconds, sometimes even when I sleep

When I decided to record every time my phone or laptop contacted a server on the internet, I knew I’d get a lot of data, but I honestly didn’t think it would reveal nearly 300,000 requests in a single week.

On average, that’s about one request every two seconds.
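That average is easy to verify from the weekly total:

```python
requests_per_week = 300_000
seconds_per_week = 7 * 24 * 3600            # 604,800 seconds in a week
interval = seconds_per_week / requests_per_week
print(round(interval, 1))  # 2.0 -> roughly one request every two seconds
```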

Are your devices sending and receiving data when you’re not using them?

They sure are. The quietest times fall — predictably — overnight. But even while I’m sleeping my devices are pretty busy talking to various companies. For example, here are the 841 times my devices made contact with 46 different domains between 10pm and 6:30am on the second night of the experiment. Most of these requests are background updates for things like my email and calendar or synchronisation that various apps like Dropbox or iCloud perform.

But exactly what each of them is doing is quite difficult to tell.

“Influencers” Are Being Paid Big Sums To Pitch Products and Thrash Rivals on Instagram and YouTube

“Influencers” are being paid big sums to pitch products on Instagram and YouTube. If you’re trying to grow a product on social media, you either fork over cash or pay in another way. This is the murky world of influencing, reports Wired. Brands will pay influencers to position products on their desks, behind them, or anywhere else they can subtly appear on screen. Payouts increase if an influencer tags a brand in a post or includes a link, but silent endorsements are often preferred.

Marketers of literature, wellness, fashion, entertainment, and other wares are all hooked on influencers. As brands have warmed to social-media advertising, influencer marketing has grown into a multibillion-dollar industry. Unlike traditional television or print ads, influencers have dedicated niche followings who take their word as gospel.

There’s another plus: Many users don’t view influencers as paid endorsers or salespeople—even though a significant percentage are—but as trusted experts, friends, and “real” people. This perceived authenticity is part of why brands shell out so much cash in exchange for a brief appearance in your Instagram feed.

Digital India: Government Hands Out Free Phones to Win Votes

In the state of Chhattisgarh, the chief minister, Raman Singh, has promised a smartphone in every home — and he is using the government-issued devices to reach voters as he campaigns in legislative elections that conclude on Tuesday.

The phones are the latest twist in digital campaigning by the B.J.P., which controls the national and state government and is deft at using tools like WhatsApp groups and Facebook posts to influence voters. The B.J.P. government in Rajasthan, which holds state elections next month, is also subsidizing phones and data plans for residents, and party leaders are considering extending the model to other states.

French Officer Caught Selling Access To State Surveillance Systems

“A French police officer was arrested last week for selling confidential data on the dark web in exchange for Bitcoin,” reports ZDNet. French authorities caught him after they took down the “Black Hand” dark web marketplace. Sifting through the marketplace data, they found French police documents sold on the site. All the documents had unique identifiers, which investigators used to track down the officer, who was selling the data under the name Haurus.

Besides selling access to official documents, Haurus also ran a service that tracked the location of mobile devices from a supplied phone number, which he advertised as a way to track spouses or members of competing criminal gangs. Investigators believe he used French police resources designed for tracking criminals to run this service. He also advertised a service that told buyers whether they were being tracked by French police and what information officers had on them.

Fake fingerprints can imitate real ones in biometric systems

Researchers have used a neural network to generate artificial fingerprints that work as a “master key” for biometric identification systems, demonstrating that convincing fake fingerprints can be created.

According to a paper presented at a security conference in Los Angeles, the artificially generated fingerprints, dubbed “DeepMasterPrints” by the researchers from New York University, were able to imitate more than one in five fingerprints in a biometric system that should only have an error rate of one in a thousand.

The researchers, led by NYU’s Philip Bontrager, say that “the underlying method is likely to have broad applications in fingerprint security as well as fingerprint synthesis.” As with much security research, demonstrating flaws in existing authentication systems is considered to be an important part of developing more secure replacements in the future.

In order to work, the DeepMasterPrints take advantage of two properties of fingerprint-based authentication systems. The first is that, for ergonomic reasons, most fingerprint readers do not read the entire finger at once, instead imaging whichever part of the finger touches the scanner.

Crucially, such systems do not blend all the partial images in order to compare the full finger against a full record; instead, they simply compare the partial scan against the partial records. That means that an attacker has to match just one of tens or hundreds of saved partial fingerprints in order to be granted access.
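This is why partial matching weakens the system so much: if a fake print matches any single stored partial with probability p, and an account keeps k partial records, the chance of at least one match is 1 - (1 - p)^k, which grows quickly with k. A quick sketch (the numbers are hypothetical, not the paper's measurements):

```python
def unlock_probability(p_single, k_partials):
    """Chance a fake print matches at least one of k stored partial records,
    treating each comparison as an independent trial at probability p_single."""
    return 1 - (1 - p_single) ** k_partials

# A modest per-partial match rate compounds across the stored partials.
print(round(unlock_probability(0.01, 1), 3))   # 0.01
print(round(unlock_probability(0.01, 30), 3))  # 0.26
```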

The second is that some features of fingerprints are more common than others. That means that a fake print that contains a lot of very common features is more likely to match with other fingerprints than pure chance would suggest.

Based on those insights, the researchers used a common machine learning technique, called a generative adversarial network, to artificially create new fingerprints that matched as many partial fingerprints as possible.

The neural network not only allowed them to create multiple fingerprint images, it also created fakes which look convincingly like a real fingerprint to a human eye – an improvement on a previous technique, which created jagged, right-angled fingerprints that would fool a scanner but not a visual inspection.

They compare the method to a “dictionary attack” against passwords, where a hacker runs a pre-generated list of common passwords against a security system.

Such attacks may not be able to break into any specific account, but when used against accounts at scale, they generate enough successes to be worth the effort.

Facebook Filed A Patent To Predict Your Household’s Demographics Based On Family Photos

Facebook has submitted a patent application for technology that would predict who your family and other household members are, based on images and captions posted to Facebook, as well as your device information, like shared IP addresses. The application, titled “Predicting household demographics based on image data,” was originally filed May 10, 2017, and made public today.

The system Facebook proposes in its patent application would use facial recognition and learning models trained to understand text to help Facebook better understand whom you live with and interact with most. The technology described in the patent looks for clues in your profile pictures on Facebook and Instagram, as well as photos of you that you or your friends post.

It would note the people identified in a photo, and how frequently the people are included in your pictures. Then, it would assess information from comments on the photos, captions, or tags (#family, #mom, #kids) — anything that indicates whether someone is a husband, daughter, cousin, etc. — to predict what your family/household actually looks like. According to the patent application, Facebook’s prediction models would also analyze “messaging history, past tagging history, [and] web browsing history” to see if multiple people share IP addresses (a unique identifier for every internet network).