Resources

Facebook Has Filed a Patent To Calculate Your Future Location

Facebook has filed several patent applications with the U.S. Patent and Trademark Office for technology that uses your location data to predict where you’re going and when you’re going to be offline.

A May 30, 2017, Facebook application titled “Offline Trajectories” describes a method to predict where you’ll go next based on your location data. The technology described in the patent would calculate a “transition probability based at least in part on previously logged location data associated with a plurality of users who were at the current location.” In other words, the technology could also use the data of other people you know, as well as that of strangers, to make predictions. If the company could predict when you are about to be in an offline area, Facebook content “may be prefetched so that the user may have access to content during the period where there is a lack of connectivity.”
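The "transition probability" the application describes is, in effect, a first-order Markov model over visited places. As a rough sketch (the data layout and function name here are assumptions for illustration, not Facebook's implementation), such probabilities could be estimated from logged location histories like this:

```python
from collections import Counter, defaultdict

def transition_probabilities(histories):
    """Estimate P(next place | current place) from ordered location logs."""
    counts = defaultdict(Counter)
    for seq in histories:
        # Count each consecutive pair of visited places.
        for current, nxt in zip(seq, seq[1:]):
            counts[current][nxt] += 1
    return {
        current: {place: n / sum(nxt_counts.values())
                  for place, n in nxt_counts.items()}
        for current, nxt_counts in counts.items()
    }

# Logged visits for three hypothetical users:
histories = [
    ["home", "gym", "office"],
    ["home", "office"],
    ["home", "gym", "cafe"],
]
probs = transition_probabilities(histories)
# Two of the three observed departures from "home" went to "gym",
# so probs["home"]["gym"] is 2/3.
```

A prediction like `probs["home"]` is the kind of signal that would let a system prefetch content before a user enters an offline area.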

Another Facebook patent application titled “Location Prediction Using Wireless Signals on Online Social Networks” describes how tracking the strength of Wi-Fi, Bluetooth, cellular, and near-field communication (NFC) signals could be used to estimate your current location, in order to anticipate where you will go next. This “background signal” information is used as an alternative to GPS because, as the patent describes, it may provide “the advantage of more accurately or precisely determining a geographic location of a user.” The technology could learn the category of your current location (e.g., bar or gym), the time of your visit to the location, the hours that entity is open, and the popular hours of the entity.

Yet another Facebook patent application, “Predicting Locations and Movements of Users Based on Historical Locations for Users of an Online System,” further details how location data from multiple people would be used to glean location and movement trends and to model location chains. According to the patent application, these could be used for a “variety of applications,” including “advertising to users based on locations and for providing insights into the movements of users.” The technology could even differentiate movement trends among people who live in a city and who are just visiting a city.

YouTube’s Top Earner For 2018 Is a 7-Year-Old

In 2018 the most-downloaded iPhone app was YouTube, reports USA Today, while Amazon’s best-selling item was their Fire TV Stick for streaming video. The No. 1 earner on YouTube this year is 7-year-old Ryan. For all those unboxing videos and playing with toys — and his own new line of toys at Walmart — he and his family will pull in a cool $22 million, according to Forbes. Ryan launched the channel in 2015 — when he was four — and now has 17.3 million followers.

Internal Emails Show Facebook Weighing the Privacy Risks of Quietly Collecting Call and Text Records From Its Android Users—Then Going Ahead Anyway

Earlier this year, many Android users were shocked to discover that Facebook had been collecting a record of their call and SMS history, as revealed by the company’s data download tool. Now, internal emails released by the UK Parliament show how the decision was made internally.

According to the emails, developers knew the data was sensitive, but they still pushed to collect it as a way of expanding Facebook’s reach. The emails show Facebook’s growth team looking to call log data as a way to improve Facebook’s algorithms as well as to locate new contacts through the “People You May Know” feature. Notably, the project manager recognized it as “a pretty high-risk thing to do from a PR perspective,” but that risk seems to have been overwhelmed by the potential user growth.

Initially, the feature was intended to require users to opt in, typically through an in-app pop-up dialog box. But as developers looked for ways to get users signed up, it became clear that Android’s data permissions could be manipulated to automatically enroll users if the new feature was deployed in a certain way.

Thieves Are Boosting the Signal From Key Fobs Inside Homes To Steal Vehicles

According to Markham automotive security specialist Jeff Bates, owner of Lockdown Security, wireless key fobs have a role to play in many recent car thefts, with thieves intercepting and rerouting their signals — even from inside homes — to open and steal cars. According to Bates, many of these thieves are using a method called “relay theft.” Key fobs are constantly broadcasting a signal that communicates with a specific vehicle, he said, and when it comes into a close enough range, the vehicle will open and start. The thief will bring a device close to the home’s door, close to where most keys are sitting, to boost the fob’s signal. They leave another device near the vehicle, which receives the signal and opens the car. Many people don’t realize it, Bates said, but the thieves don’t need the fob in the car to drive it away.

An Eye-Scanning Lie Detector Is Forging a Dystopian Future

Sitting in front of a Converus EyeDetect station, it’s impossible not to think of Blade Runner. In the 1982 sci-fi classic, Harrison Ford’s rumpled detective identifies artificial humans using a steam-punk Voight-Kampff device that watches their eyes while they answer surreal questions. EyeDetect’s questions are less philosophical, and the penalty for failure is less fatal (Ford’s character would whip out a gun and shoot). But the basic idea is the same: By capturing imperceptible changes in a participant’s eyes — measuring things like pupil dilation and reaction time — the device aims to sort deceptive humanoids from genuine ones.

It claims to be, in short, a next-generation lie detector. Polygraph tests are a $2 billion industry in the US and, despite their inaccuracy, are widely used to screen candidates for government jobs. Released in 2014 by Converus, a Mark Cuban-funded startup, EyeDetect is pitched by its makers as a faster, cheaper, and more accurate alternative to the notoriously unreliable polygraph. By many measures, EyeDetect appears to be the future of lie detection — and it’s already being used by local and federal agencies to screen job applicants.

In documents obtained through public records requests, Converus says that the Defense Intelligence Agency and the US Customs and Border Protection are also trialing the technology. Converus says that individual locations of Best Western, FedEx, Four Points by Sheraton, McDonald’s, and IHOP chains have used the tech in Guatemala and Panama within the last three years. (A 1988 federal law prohibits most private companies from using any kind of lie detector on staff or recruits in America.) WIRED reached out to all five companies, but none were able to confirm that they had used EyeDetect.

Google personalizes search results even when you’re logged out

According to a new study conducted by Google competitor DuckDuckGo, it does not seem possible to avoid personalization when using Google search, even by logging out of your Google account and using the private browsing “incognito” mode.

DuckDuckGo conducted the study in June of this year, at the height of the US midterm election season. It did so with the ostensible goal of determining whether Google’s search results exacerbate ideological bubbles by feeding you only information you’ve signaled you want to consume via past behavior and the data collected about you. It’s not clear whether that question can be reliably answered with these findings, and DuckDuckGo is obviously a biased source with something to gain by pointing out how flawed Google’s approach may be. But the study’s findings are nonetheless interesting because they highlight just how much variance there is in Google search results, even when controlling for factors like location.
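One simple way to quantify variance between two users' results (an illustration, not necessarily DuckDuckGo's methodology) is the Jaccard overlap of the returned URL sets:

```python
def jaccard(results_a, results_b):
    """Jaccard similarity of two result lists: 1.0 = identical sets, 0.0 = disjoint."""
    a, b = set(results_a), set(results_b)
    return len(a & b) / len(a | b)

# Hypothetical top-4 results two logged-out users might see for the same query:
user1 = ["a.example", "b.example", "c.example", "d.example"]
user2 = ["a.example", "c.example", "e.example", "f.example"]
print(jaccard(user1, user2))  # 2 shared of 6 distinct URLs -> ~0.33
```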

AI Mistakes Ad On a Bus For an Actual CEO, Then Publicly Shames Them For ‘Jaywalking’

Since last year, many Chinese cities have cracked down on jaywalking by investing in facial recognition systems and AI-powered surveillance cameras. Jaywalkers are identified and shamed by displaying their photographs on large public screens… Developments are also underway to engage the country’s mobile network operators and social media platforms, such as Tencent Holdings’ WeChat and Sina Weibo, to establish a system in which offenders will receive personal text messages as soon as they are caught violating traffic rules….

Making a compelling case for change is the recent experience of Dong Mingzhu, chairwoman of China’s biggest maker of air conditioners Gree Electric Appliances, who found her face splashed on a huge screen erected along a street in the port city of Ningbo… That artificial intelligence-backed surveillance system, however, erred in capturing Dong’s image on Wednesday from an advertisement on the side of a moving bus. The traffic police in Ningbo, a city in the eastern coastal province of Zhejiang, were quick to recognise the mistake, writing in a post on microblog Sina Weibo on Wednesday that it had deleted the snapshot. It also said the surveillance system would be completely upgraded to cut incidents of false recognition in future.

Can police control your self-driving car?

In 2009 GM equipped 17,000 of its units with “remote ignition block,” a kill switch that can turn off the engine if the car is stolen. But that was just the beginning.

Imagine this: You’re leaving work, walking to your car, and you find an empty parking spot — someone stole your brand new Tesla (or whatever fancy autonomous car you’re driving). When you call the police, they ask your permission for a “takeover,” which you promptly give them. Next thing you know, your car is driving itself to the nearest police station. And here’s the kicker — if the thief is inside, they will remain locked in until the police can make the arrest.

This futuristic and almost slapstick scenario is closer than we think, says Hans Schönfeld, chief innovation officer for the Dutch police. His team has already run several experiments to test the crime-halting possibilities of autonomous cars. “We wanted to know if we can make them stop or drive them to certain locations,” Schönfeld tells me. “And the result is: yes, we probably can.”

The Dutch police tested Tesla, Audi, Mercedes, and Toyota vehicles, he reports, adding “We do this in collaboration with these car companies because this information is valuable to them, too.

“If we can hack into their cars, others can as well.”

Companies ‘can sack workers for refusing to use fingerprint scanners’

Businesses using fingerprint scanners to monitor their workforce can legally sack employees who refuse to hand over biometric information on privacy grounds, the Fair Work Commission has ruled.

The ruling, which will be appealed, was made in the case of Jeremy Lee, a Queensland sawmill worker who refused to comply with a new fingerprint scanning policy introduced at his work in Imbil, north of the Sunshine Coast, late last year.

Fingerprint scanning was used to monitor the clock-on and clock-off times of about 150 sawmill workers at two sites and was preferred to swipe cards because it prevented workers from fraudulently signing in on behalf of their colleagues to mask absences.

The company, Superior Woods, had no privacy policy covering workers and failed to comply with a requirement to properly notify individuals about how and why their data was being collected and used. The biometric data was stored on servers located off-site, in space leased from a third party.

Lee argued the business had never sought its workers’ consent to use fingerprint scanning, and feared his biometric data would be accessed by unknown groups and individuals.

“I am unwilling to consent to have my fingerprints scanned because I regard my biometric data as personal and private,” Lee wrote to his employer last November.

“Information technology companies gather as much information/data on people as they can.

“Whether they admit to it or not. (See Edward Snowden) Such information is used as currency between corporations.”

Lee was neither antagonistic nor belligerent in his refusals, according to evidence before the commission. He simply declined to have his fingerprints scanned and continued using a physical sign-in booklet to record his attendance.

He had not missed a shift in more than three years.

The employer warned him about his stance repeatedly, and claimed the fingerprint scanner did not actually record a fingerprint, but rather “a set of data measurements which is processed via an algorithm”. The employer told Lee there was no way the data could be “converted or used as a finger print”, and would only be used to link his payroll number to his clock-on and clock-off times. It said the fingerprint scanners were also needed for workplace safety, to accurately identify which workers were on site in the event of an accident.

Lee was given a final warning in January, and responded that he valued his job a “great deal” and wanted to find an alternative way to record his attendance.

“I would love to continue to work for Superior Wood as it is a good, reliable place to work,” he wrote to his employer. “However, I do not consent to my biometric data being taken. The reason for writing this letter is to impress upon you that I am in earnest and hope there is a way we can negotiate a satisfactory outcome.”

Lee was sacked in February, and lodged an unfair dismissal claim in the Fair Work Commission.

He argued he was sacked for failing to comply with an unreasonable direction, because the fingerprint scanning was in breach of Australian privacy laws. His biometric information was sent to a separate corporate entity that was not his employer, Lee argued. His employer had no privacy policy in place at the time, and he argued it had failed to issue a privacy collection notice to its employees, as required by law. Lee argued the company had effectively breached the privacy of its 150 workers twice a day, every day since fingerprint scanning was introduced.

But the unfair dismissal claim failed. The Fair Work Commission found the site attendance policy that Lee had breached was lawful. It found that although the company may have breached privacy laws, the site attendance policy was not automatically rendered unlawful as it related to Lee.

“While there may have been a breach of the Privacy Act relevant to the notice given to employees, the private and sensitive information was not collected and would never be collected relevant to Mr Lee because of his steadfast refusal,” the commission found. “The policy itself is not unlawful, simply the manner in which the employer went about trying to obtain consent may have constituted a breach of the Privacy Act.”

Lee told Guardian Australia he planned to appeal. He said the ruling implied that Australians only owned their biometric data until an employer demanded it, at which point they could be sacked if they refused to consent.

“My biometric data is inherently mine and inseparable from me,” Lee said. “My employer can’t demand it or sack me for refusing to give it.”

“It’s not about this particular employer. Ownership to me means that I can refuse consent without being sacked.”

NBCUniversal Taps Machine Learning to Tie Ads to Relevant Moments on TV

NBCUniversal announced a new machine learning tool today that helps brands place ads around scenes relevant to their product across any of the media giant’s broadcast and cable properties. The Contextual Intelligence Platform analyzes programming scripts, closed captioning data and visual descriptors of both ads and shows to find opportune moments for a given advertiser to appear as well as an emotional gauge for each scene determined by proprietary algorithms.

Focus groups for ads placed with the platform thus far have shown an average bump of 19 percent in brand memorability, 13 percent in likability and 64 percent in message memorability, according to Josh Feldman, VP and head of marketing and advertising creative at NBCU. The announcement comes as linear television providers continue to grapple with how to bring digital targeting practices to a medium that still largely operates on traditional phone-call media buying and manual ad placements. NBCU is now working with three to five advertisers for the system’s beta test, and is aiming for an official release date early next year.

My devices are sending and receiving data every two seconds, sometimes even when I sleep

When I decided to record every time my phone or laptop contacted a server on the internet, I knew I’d get a lot of data, but I honestly didn’t think it would reveal nearly 300,000 requests in a single week.

On average, that’s about one request every two seconds.
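The headline figure checks out with back-of-envelope arithmetic:

```python
requests_per_week = 300_000
seconds_per_week = 7 * 24 * 60 * 60  # 604,800 seconds in a week
interval = seconds_per_week / requests_per_week
print(f"one request every {interval:.1f} seconds")  # one request every 2.0 seconds
```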

Are your devices sending and receiving data when you’re not using them?

They sure are. The quietest times fall — predictably — overnight. But even while I’m sleeping my devices are pretty busy talking to various companies. For example, here are the 841 times my devices made contact with 46 different domains between 10pm and 6:30am on the second night of the experiment. Most of these requests are background updates for things like my email and calendar or synchronisation that various apps like Dropbox or iCloud perform.

But exactly what each of them is doing is quite difficult to tell.

“Influencers” Are Being Paid Big Sums To Pitch Products and Trash Rivals on Instagram and YouTube

“Influencers” are being paid big sums to pitch products on Instagram and YouTube. If you’re trying to grow a product on social media, you either fork over cash or pay in another way. This is the murky world of influencing, reports Wired. Brands will pay influencers to position products on their desks, behind them, or anywhere else they can subtly appear on screen. Payouts increase if an influencer tags a brand in a post or includes a link, but silent endorsements are often preferred.

Marketers of literature, wellness, fashion, entertainment, and other wares are all hooked on influencers. As brands have warmed to social-media advertising, influencer marketing has grown into a multibillion-dollar industry. Unlike traditional television or print ads, influencer posts reach dedicated niche followings who take the endorser’s word as gospel.

There’s another plus: Many users don’t view influencers as paid endorsers or salespeople—even though a significant percentage are—but as trusted experts, friends, and “real” people. This perceived authenticity is part of why brands shell out so much cash in exchange for a brief appearance in your Instagram feed.

Digital India: Government Hands Out Free Phones to Win Votes

In the state of Chhattisgarh, the chief minister, Raman Singh, has promised a smartphone in every home — and he is using the government-issued devices to reach voters as he campaigns in legislative elections that conclude on Tuesday.

The phones are the latest twist in digital campaigning by the B.J.P., which controls the national and state government and is deft at using tools like WhatsApp groups and Facebook posts to influence voters. The B.J.P. government in Rajasthan, which holds state elections next month, is also subsidizing phones and data plans for residents, and party leaders are considering extending the model to other states.

French Officer Caught Selling Access To State Surveillance Systems

A French police officer was charged and arrested last week for selling confidential data on the dark web in exchange for Bitcoin, reports ZDNet. French authorities caught him after they took down the “Black Hand” dark web marketplace. Sifting through the marketplace data, they found French police documents sold on the site. All the documents had unique identifiers, which they used to track down the French police officer who was selling the data under the name of Haurus.

Besides selling access to official documents, investigators found he also ran a service to track the location of mobile devices based on a supplied phone number. He advertised the system as a way to track spouses or members of competing criminal gangs. Investigators believe Haurus used French police resources, designed for tracking criminals, to run this service. He also advertised a service that told buyers whether they were being tracked by French police and what information officers held on them.

Fake fingerprints can imitate real ones in biometric systems

Researchers have used a neural network to generate artificial fingerprints that work as a “master key” for biometric identification systems, proving that convincing fake fingerprints can be created.

According to a paper presented at a security conference in Los Angeles, the artificially generated fingerprints, dubbed “DeepMasterPrints” by the researchers from New York University, were able to imitate more than one in five fingerprints in a biometric system that should only have an error rate of one in a thousand.

The researchers, led by NYU’s Philip Bontrager, say that “the underlying method is likely to have broad applications in fingerprint security as well as fingerprint synthesis.” As with much security research, demonstrating flaws in existing authentication systems is considered to be an important part of developing more secure replacements in the future.

In order to work, the DeepMasterPrints take advantage of two properties of fingerprint-based authentication systems. The first is that, for ergonomic reasons, most fingerprint readers do not read the entire finger at once, instead imaging whichever part of the finger touches the scanner.

Crucially, such systems do not blend all the partial images in order to compare the full finger against a full record; instead, they simply compare the partial scan against the partial records. That means an attacker has to match just one of tens or hundreds of saved partial fingerprints in order to be granted access.
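That design choice compounds: if a single comparison against one partial record succeeds with probability p, matching any of n independently stored partials succeeds with probability 1 − (1 − p)^n. A quick illustration with hypothetical numbers:

```python
def any_match_probability(p, n):
    """Chance a fake print matches at least one of n independent partial records."""
    return 1 - (1 - p) ** n

# Even a 0.1% per-comparison match rate adds up across 100 stored partials:
print(any_match_probability(0.001, 1))    # ~0.001
print(any_match_probability(0.001, 100))  # ~0.095, i.e. nearly 1 in 10
```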

The second is that some features of fingerprints are more common than others. That means that a fake print that contains a lot of very common features is more likely to match with other fingerprints than pure chance would suggest.

Based on those insights, the researchers used a common machine learning technique, called a generative adversarial network, to artificially create new fingerprints that matched as many partial fingerprints as possible.

The neural network not only allowed them to create multiple fingerprint images, it also created fakes which look convincingly like a real fingerprint to a human eye – an improvement on a previous technique, which created jagged, right-angled fingerprints that would fool a scanner but not a visual inspection.

They compare the method to a “dictionary attack” against passwords, where a hacker runs a pre-generated list of common passwords against a security system.

Such attacks may not be able to break into any specific account, but when used against accounts at scale, they generate enough successes to be worth the effort.

Facebook Filed A Patent To Predict Your Household’s Demographics Based On Family Photos

Facebook has submitted a patent application for technology that would predict who your family and other household members are, based on images and captions posted to Facebook, as well as your device information, like shared IP addresses. The application, titled “Predicting household demographics based on image data,” was originally filed May 10, 2017, and made public today.

The system Facebook proposes in its patent application would use facial recognition and learning models trained to understand text to help Facebook better understand whom you live with and interact with most. The technology described in the patent looks for clues in your profile pictures on Facebook and Instagram, as well as photos of you that you or your friends post.

It would note the people identified in a photo, and how frequently the people are included in your pictures. Then, it would assess information from comments on the photos, captions, or tags (#family, #mom, #kids) — anything that indicates whether someone is a husband, daughter, cousin, etc. — to predict what your family/household actually looks like. According to the patent application, Facebook’s prediction models would also analyze “messaging history, past tagging history, [and] web browsing history” to see if multiple people share IP addresses (a unique identifier for every internet network).
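The shared-IP signal in particular is straightforward to sketch. Purely as an illustration (the function and data here are invented, not Facebook's actual code), grouping accounts that appear behind the same network address could look like:

```python
from collections import defaultdict

def candidate_households(sightings):
    """Group account IDs that appear behind the same IP address.

    `sightings` maps account ID -> set of IPs it has connected from.
    Illustrative only: a shared IP can also mean an office or public Wi-Fi.
    """
    by_ip = defaultdict(set)
    for account, ips in sightings.items():
        for ip in ips:
            by_ip[ip].add(account)
    # Keep only IPs seen by more than one account.
    return {ip: accounts for ip, accounts in by_ip.items() if len(accounts) > 1}

sightings = {
    "alice": {"203.0.113.7", "198.51.100.2"},
    "bob":   {"203.0.113.7"},
    "carol": {"192.0.2.9"},
}
households = candidate_households(sightings)
# {"203.0.113.7": {"alice", "bob"}}
```

Combined with photo tags and caption text, a grouping like this is what would let the model guess which accounts belong to one household.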

Dutch Government Report Says Microsoft Office Telemetry Collection Breaks EU GDPR Laws

Microsoft broke Euro privacy rules by carrying out the “large scale and covert” gathering of private data through its Office apps, according to a report commissioned by the Dutch government.

It was found that Microsoft was collecting telemetry and other content from its Office applications, including email titles and sentences where translation or spellchecker was used, and secretly storing the data on systems in the United States.

Those actions break Europe’s new GDPR privacy safeguards, it is claimed, and may put Microsoft on the hook for potentially tens of millions of dollars in fines. The Dutch authorities are working with the corporation to fix the situation, and are using the threat of a fine as a stick to make it happen.

The investigation was prompted by the fact that Microsoft doesn’t publicly reveal what information it gathers on users and doesn’t provide an option to turn off the diagnostic and telemetry data that its Office software sends to the company to monitor how well the software is functioning and to identify any issues.

Algorithms viewed as ‘unfair’ by consumers

The US-based Pew Research Center has found the American public is growing increasingly distrustful of the use of computer algorithms in a variety of sectors, including finance, media and the justice system.

A report released over the weekend found that a broad section of those surveyed feel that computer programs will always reflect some level of human bias, that they might violate privacy, fail to capture the nuance of human complexity, or simply be unfair.

Scope creep with Australia’s metadata retention

Telecommunications industry group Communications Alliance has revealed details of dozens of state and federal departments and agencies it claims are accessing so-called communications ‘metadata’.

The 2015 legislation that introduced the data retention regime authorised a list of “criminal law-enforcement agencies” to obtain warrant-free access to metadata. Those agencies included federal, state and territory police agencies, a number of anti-corruption bodies, Border Force, the Australian Securities and Investments Commission, and the Australian Competition and Consumer Commission.

However, last month at the hearing of an inquiry into the government’s bill aimed at enhancing police access to encrypted communications services, Communications Alliance CEO John Stanton said that a significantly larger number of organisations were accessing information kept by telcos to meet their data retention obligations.

In addition to the police agencies and other organisations listed in the data retention legislation, the list includes Centrelink, the Australian Taxation Office, Australia Post’s Corporate Security Group, Workplace Health and Safety, Work Safe Victoria, the Taxi Services Commission and a number of local councils.

Australia’s near-real-time facial recognition system, chilling effects

Civil rights groups have warned a vast, powerful system allowing the near real-time matching of citizens’ facial images risks a “profound chilling effect” on protest and dissent.

The technology – known in shorthand as “the capability” – collects and pools facial imagery from various state and federal government sources, including driver’s licences, passports and visas.

The biometric information can then rapidly – almost in real time – be compared with other sources, such as CCTV footage, to match identities.

The system, chiefly controlled by the federal Department of Home Affairs, is designed to give intelligence and security agencies a powerful tool to deter identity crime, and quickly identify terror and crime suspects.

But it has prompted serious concern among academics, human rights groups and privacy experts. The system sweeps up and processes citizens’ sensitive biometric information regardless of whether they have committed or are suspected of an offence.