Archives 2019

Facebook Confirms Its ‘Standards’ Don’t Apply To Politicians

Facebook this week finally put into writing what users — especially politically powerful users — have known for years: its community “standards” do not, in fact, apply across the whole community. Speech from politicians is officially exempt from the platform’s fact checking and decency standards, the company has clarified, with a few exceptions. Facebook communications VP Nick Clegg, himself a former member of the UK Parliament, outlined the policy in a speech and company blog post Tuesday. Facebook has had a “newsworthiness exemption” to its content guidelines since 2016. That policy was formalized in late October of that year amid a contentious and chaotic US political season and three weeks before the presidential election that would land Donald Trump the White House.

Facebook at the time was uncertain how to handle posts from the Trump campaign, The Wall Street Journal reported. Sources told the paper that Facebook employees were sharply divided over the candidate’s rhetoric about Muslim immigrants and his stated desire for a Muslim travel ban, which several felt were in violation of the service’s hate speech standards. Eventually, the sources said, CEO Mark Zuckerberg weighed in directly and said it would be inappropriate to intervene. Months later, Facebook finally issued its policy. “We’re going to begin allowing more items that people find newsworthy, significant, or important to the public interest — even if they might otherwise violate our standards,” Facebook wrote at the time.
Under the clarified policy, Facebook by default “will treat speech from politicians as newsworthy content that should, as a general rule, be seen and heard.” It won’t be subject to fact-checking because the company does not believe that it is appropriate for it to “referee political debates” or prevent a politician’s speech from both reaching its intended audience and “being subject to public debate and scrutiny.”

Newsworthiness, Clegg added, will be determined by weighing the “public interest value of the piece of speech” against the risk of harm. The exception to all of this is advertising. “Standards are different for content for which the company receives payment, so if someone — even a politician or political candidate — posts ads to Facebook, those ads in theory must still meet both the community standards and Facebook’s advertising policies,” reports Ars.

Politicians Can Break Our Content Rules, YouTube CEO Says

YouTube CEO Susan Wojcicki said this week that content by politicians would stay up on the video-sharing website even if it violates the company’s standards, echoing a position staked out by Facebook this week.

“When you have a political officer that is making information that is really important for their constituents to see, or for other global leaders to see, that is content that we would leave up because we think it’s important for other people to see,” Wojcicki told an audience at The Atlantic Festival this morning. Wojcicki said the news media is likely to cover controversial content regardless of whether it’s taken down, giving context to understand it. YouTube is owned by Google. A YouTube spokesperson later told POLITICO that politicians are not treated differently than other users and must abide by its community guidelines. The company grants exemptions to some political speech if the company considers it to be educational, documentary, scientific, or artistic in nature.

Uber Stopped Its Own Investigators From Reporting Crimes To the Police

The special investigations team inside Uber, which fields complaints from riders and drivers, is not allowed to escalate those issues to law enforcement or file official police reports “even when they get confessions of felonies,” according to The Washington Post. They are also not allowed to advise victims or potential victims of crimes to seek legal counsel, according to the report, which was based on interviews with “more than 20 current and former investigators” who worked at Uber’s investigations unit in Arizona.

The investigators are also allegedly instructed “first to protect Uber” and to make sure it is “not held liable” for any crimes committed by people using the company’s ride-hailing platform. In that vein, the investigators told the paper that even the language they use when communicating with alleged victims is carefully worded to avoid the appearance that Uber is taking a side. The investigators also said they’re not supposed to ask alleged perpetrators directly about the claims against them.

Vimeo Sued For Storing Faceprints of People Without Their Consent

Vimeo is collecting and storing thousands of people’s facial biometrics without their permission or knowledge, according to a complaint filed on September 20 on behalf of potentially thousands of plaintiffs under the Illinois Biometric Information Privacy Act (BIPA).

The suit takes aim at Vimeo’s Magisto application: a short-form video creation platform purchased by Vimeo in April 2019 that uses facial recognition to automatically index the faces of people in videos so they can be face-tagged. BIPA bans collecting and storing biometric data without explicit consent, including “faceprints.” The complaint against Vimeo claims that users of Magisto “upload millions of videos and/or photos per day, making videos and photographs a vital part of the Magisto experience.”

The complaint maintains that unbeknownst to the average consumer, Magisto scans “each and every video and photo uploaded to Magisto for faces” and analyzes “biometric identifiers,” including facial geometry, to “create and store a template for each face.” That template is later used to “organize and group together videos based upon the particular individuals appearing in the videos” by “comparing the face templates of individuals who appear in newly-edited videos or photos with the facial templates already saved in Magisto’s face database.”
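What the complaint describes is a standard face-template pipeline. As a rough illustration only — Magisto’s actual implementation is not public, and everything below is a generic sketch — matching typically means mapping each detected face to a fixed-length embedding (the “template”) and comparing new embeddings against the stored database using a similarity threshold:

```python
# Illustrative sketch of embedding-based face matching; Magisto's real
# system is proprietary, and this shows only the generic technique.
import numpy as np

def match_face(template: np.ndarray, database: dict[str, np.ndarray],
               threshold: float = 0.6) -> str | None:
    """Return the ID of the closest stored template, or None if no match."""
    best_id, best_sim = None, threshold
    for face_id, stored in database.items():
        # Cosine similarity between the new face template and a stored one.
        sim = float(np.dot(template, stored) /
                    (np.linalg.norm(template) * np.linalg.norm(stored)))
        if sim > best_sim:
            best_id, best_sim = face_id, sim
    return best_id  # a match lets the service group media by person
```

A new face that matches no stored template would simply be enrolled as a fresh entry, which is how such a database grows to cover non-users who merely appear in uploaded footage.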

The complaint also asserts that Magisto analyzes and face-matches the biometrics of non-Magisto users who happen to appear in the photos and videos, which is a violation of BIPA.

How the FBI targeted environmental activists in domestic terror investigations

The investigations, which targeted individual activists and some environmental organizations, were opened in 2013-2014, at the height of opposition to the Keystone XL Pipeline and the expansion of fossil fuel production in North America.

The new FOIA documents reveal the bureau’s motivation for investigating a broad cross section of the environmental movement and its characterization of non-violent protesters as a potential threat to national security.

In 2010, the DOJ’s inspector general criticized the FBI for using non-violent civil disobedience as grounds to open domestic terrorism investigations. US citizens swept up in such investigations can be placed on terrorism watchlists and subjected to surveillance and restrictions on international travel. The designation can also lead local law enforcement to take a more confrontational approach when engaging with non-violent activists.

Google Loans Cameras To Volunteers To Fill Gaps in ‘Street View’

Tawanda Kanhema, who works as a product manager in Silicon Valley and is a freelance photographer in his spare time, volunteered to carry Google’s Street View gear and ended up mapping some 2,000 miles of his home country, Zimbabwe. The Berkeley, Calif., resident has filled in the map of other areas in Africa and Canada as well.

“We start in the large metropolitan areas where we know we have users, where it’s easy for us to drive and we can execute quickly,” says Stafford Marquardt, a product manager for Street View.

He says the team is working to expand the service’s reach. To do that, Google often relies on volunteers who can either borrow the company’s camera equipment or take photos using their own. Most images on Street View are collected by drivers, and most of these drivers are employed by third parties that work with Google. But when it comes to the places Google hasn’t prioritized, people like Kanhema can fill in the gaps.

“It’s so conspicuous to have a 4-foot contraption attached to the roof of your car,” Kanhema says. “People are walking up and asking questions about, ‘Is that a camera? What are you recording? What are you filming? It is for Google Maps? Will my house be on the map? Will my face be on the map?'”

Google doesn’t pay him or the other volunteers — whom the company calls “contributors” — for the content they upload. Kanhema, for example, spent around $5,000 of his own money to travel across Zimbabwe for the project.

Google says it currently has no plans to compensate its volunteers, adding that it pays contributors “in a lot of other ways” by offering “a platform to host gigabytes and terabytes of imagery and publish it to the entire world, absolutely for free.”

The 120 Most CCTV Surveilled Cities In the World

Comparitech.com has published a report and spreadsheet laying out how many CCTV cameras are in operation in 120 cities around the world, along with crime-rate data for each city. The report notes: “We found little correlation between the number of public CCTV cameras and crime or safety.”

Eight of the 10 most-surveilled cities are in China, though London and Atlanta also make the cut, and the report says that — depending on which numbers you believe — China will have between 200 million and 626 million CCTV cameras in operation by 2020, or possibly even more. That would be almost one CCTV camera per two citizens in the country, and the number could still climb.
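A quick back-of-the-envelope check of that ratio, assuming a population of roughly 1.4 billion (the report’s own population figure isn’t quoted here):

```python
# Sanity-check the "almost 1 camera per 2 citizens" claim.
# The camera counts are the report's projections; the population
# figure (~1.4 billion) is an assumption for illustration.
low, high = 200_000_000, 626_000_000
population = 1_400_000_000

print(population / low)   # ~7.0 people per camera at the low estimate
print(population / high)  # ~2.2 people per camera at the high estimate
```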

Outside of China, the top most-surveilled cities in the world are:

London – 68.40 cameras per 1,000 people
Atlanta – 15.56 cameras per 1,000 people
Singapore – 15.25 cameras per 1,000 people
Abu Dhabi – 13.77 cameras per 1,000 people
Chicago – 13.06 cameras per 1,000 people
Sydney – 12.35 cameras per 1,000 people
Baghdad – 12.30 cameras per 1,000 people
Dubai – 12.14 cameras per 1,000 people
Moscow – 11.70 cameras per 1,000 people
Berlin – 11.18 cameras per 1,000 people
New Delhi – 9.62 cameras per 1,000 people

Ex-Google Engineer Says That Robot Weapons May Cause Accidental Mass Killings

“A former Google engineer who worked on the company’s infamous military drone project has sounded a warning against the building of killer robots,” reports Business Insider.

Laura Nolan had been working at Google for four years when, in 2017, she was recruited to its collaboration with the US Department of Defense, known as Project Maven, according to the Guardian. Project Maven was focused on using AI to enhance military drones, building AI systems that would be able to single out enemy targets and distinguish between people and objects. Google shelved Project Maven after employee outrage, with thousands of employees signing a petition against the project and about a dozen quitting in protest; the company allowed the contract to lapse in March this year. Nolan herself resigned after she became “increasingly ethically concerned” about the project, she said…

Nolan fears that the next step beyond AI-enabled weapons like drones could be fully autonomous AI weapons. “What you are looking at are possible atrocities and unlawful killings even under laws of warfare, especially if hundreds or thousands of these machines are deployed,” she said…. Although no country has yet come forward to say it’s working on fully autonomous robot weapons, many are building more and more sophisticated AI to integrate into their militaries. The US navy has a self-piloting warship, capable of spending months at sea with no crew, and Israel boasts of having drones capable of identifying and attacking targets autonomously — although at the moment they require a human middle-man to give the go-ahead.

Nolan is urging countries to declare an outright ban on autonomous killing robots, similar to conventions around the use of chemical weapons.

Social media stress can lead to social media addiction

Social networking sites (SNS) such as Facebook and Instagram are known to cause stress in users — a phenomenon researchers call technostress. However, when faced with such stress, instead of switching off or using the sites less, people move from one part of a platform to another, escaping the causes of their stress without leaving the medium on which it originated.

Research into the habits of 444 Facebook users revealed they would switch between activities such as chatting to friends, scanning news feeds and posting updates as each began to cause stress. This leads to an increased likelihood of technology addiction, as they use the various elements of the platform over a greater timespan.

Researchers from Lancaster University, the University of Bamberg and Friedrich-Alexander-Universität Erlangen-Nürnberg, writing in Information Systems Journal, found that users were seeking distraction and diversion within the Facebook platform as a coping mechanism for stress caused by the same platform, rather than switching off and undertaking a different activity.

Facial Recognition Deployed on Children at Hundreds of US Summer Camps

The Washington Post describes a parent whose phone “rings 10 times a day with notifications from the summer camp’s facial-recognition service, which alerts him whenever one of his girls is photographed enjoying their newfound independence.”

You can also call and nag your kid if you think they look unhappy or are otherwise unsatisfied with them in any way. So kids mob photographers with big, fake smiles and beg to be photographed so their parents won’t harass them.

The companies have “privacy policies” that grossly overreach, giving them perpetual licenses to distribute all the photos they take, forever, for any purpose. They claim to have super-secure data centers, but won’t describe what makes them so sure those data centers are more secure than, say, the NSA’s, Equifax’s, or any of the other “super secure” data centers that have been breached and dumped in recent memory.

And while parents in theory enjoy all this visibility into their kids’ time away, they also report a kind of free-floating anxiety: they know just enough about their kids’ lives at camp to worry, but not enough to assuage those worries.

One company says its service is now being used by over 160,000 parents — and for children as young as six.

At least one camp takes over 1,000 photos each day — scanning each one with facial recognition technology — and the Post reports that the technology has already been deployed at “hundreds” of summer camps across the United States.

Most camp directors said they appreciate that the photos can bring peace of mind to lonely parents worried about their kids’ first faraway solo trip. But the photos can also end up perpetuating a cycle of parental anxiety: The more photos the camp posts, the more the parents seem to want – and the more questions they’ll ask about their kids.

When a camper isn’t smiling or is on the outside of a big group shot, counselors said they know to expect a phone call from back home. Liz Young, a longtime camp director now helping oversee two camps on the coast of New Hampshire’s Lake Winnipesaukee, said she now fields as many concerned-parents calls in two hours as she used to get all month – mostly from parents asking about how their kids look on camera, or whether they’re being photographed enough.

One camp, Lake of the Woods and Greenwoods in rural Decatur, Michigan, has four photographers and a social-media director on staff to help push nearly constant updates onto Bunk1, Facebook and Instagram, where recent photos of kids jumping into a lake or firing bows and arrows have netted hundreds of comments and “likes.” The facial-recognition system is in its second summer at the camp, and roughly half of all parents of its campers have signed up.

Some of the kids, Hardin said, are so accustomed to constant photography that they barely notice the camera crew. It’s the parents, she said, who struggle with the distance – and who are desperate for the reassurance the facial-recognition systems provide.

Some parents race to share the photos on social media as a way to curate their kids’ childhood and offer visual evidence that their family is worth envying.

The photos could inflame new tensions for kids hitting the age – generally, in the pre- and early teens – when they can start to feel awkward about all the photos their parents post. But they can also foster unease for kids questioning how much of their emotions and internal lives they’re comfortable sharing in every moment, even when they’re far from home.

“There’s the contradiction of these really old-fashioned summer camps with no electricity in the cabins, no cellphones . . . but the parents can check in daily to look at the expressions on their kids’ faces,” she added. “Part of childhood development is: It isn’t always 100 percent smiling.”

America’s Elderly Seem More Screen-Obsessed Than the Young

Many parents and grandparents will grumble about today’s screen-obsessed youth. Indeed, researchers find that millennials look at their phones more than 150 times a day; half of them check their devices in the middle of the night; a third glance at them immediately after waking up. And yet, when all screens are accounted for, it is in fact older folk who seem most addicted. According to Nielsen, a market-research firm, Americans aged 65 and over spend nearly ten hours a day consuming media on their televisions, computers and smartphones. That is 12% more than Americans aged 35 to 49, and a third more than those aged 18 to 34 (the youngest cohort for whom Nielsen has data).

American seniors “spend an average of seven hours and 30 minutes in front of the box, about as much as they did in 2015,” the report says. “They spend another two hours staring at their smartphones, a more than seven-fold increase from four years ago.”

Millennials have increased the time they spend on their mobile devices, but it’s been largely offset by their dwindling interest in TV. As for teenagers, a report from 2015 by Common Sense Media “found that American teens aged 13-18 spent about six hours and 40 minutes per day on screens: slightly more than Nielsen recorded for 18- to 34-year-olds that year, but less than older generations.”

FBI seeks to monitor Facebook, oversee mass social media data collection

The FBI is planning to aggressively harvest information from Facebook and Twitter, a move which is likely to cause a clash between the agency and social media platforms.

As reported by the Wall Street Journal, the FBI has recently sought proposals from third-party vendors for technological solutions able to harvest publicly-available information in bulk from Facebook, Twitter, and other social media outlets.

Law enforcement has requested the means to “obtain the full social media profile of persons-of-interest and their affiliation to any organization or groups,” to track users by neighborhood, and to run keyword searches, among other functions.

While the FBI believes that such tools can work in harmony with privacy safeguards and civil liberties, the mass collection of names, photos, and IDs — when combined with information from other sources — may do just the opposite.

Back in July, for example, there was a public outcry after it was discovered that the FBI and Immigration and Customs Enforcement (ICE) were plundering databases belonging to the DMV for surveillance and investigative purposes.

Facebook Paid Contractors to Transcribe Users’ Audio Chats

Facebook Inc. has been paying hundreds of outside contractors to transcribe clips of audio from users of its services, according to people with knowledge of the work.

The work has rattled the contract employees, who are not told where the audio was recorded or how it was obtained — only to transcribe it, said the people, who requested anonymity for fear of losing their jobs. They’re hearing Facebook users’ conversations, sometimes with vulgar content, but do not know why Facebook needs them transcribed, the people said.

Facebook confirmed that it had been transcribing users’ audio and said it will no longer do so, following scrutiny of similar practices at other companies. “Much like Apple and Google, we paused human review of audio more than a week ago,” the company said Tuesday. The company said the affected users had chosen an option in Facebook’s Messenger app to have their voice chats transcribed. The contractors were checking whether Facebook’s artificial intelligence correctly interpreted the messages, which were anonymized.

Big tech companies including Amazon.com Inc. and Apple Inc. have come under fire for collecting audio snippets from consumer computing devices and subjecting those clips to human review, a practice that critics say invades privacy. Bloomberg first reported in April that Amazon had a team of thousands of workers around the world listening to Alexa audio requests with the goal of improving the software, and that similar human review was used for Apple’s Siri and Alphabet Inc.’s Google Assistant. Apple and Google have since said they no longer engage in the practice and Amazon said it will let users opt out of human review.

The social networking giant, which just completed a $5 billion settlement with the U.S. Federal Trade Commission after a probe of its privacy practices, has long denied that it collects audio from users to inform ads or help determine what people see in their news feeds. Chief Executive Officer Mark Zuckerberg denied the idea directly in Congressional testimony.

In follow-up answers for Congress, the company said it “only accesses users’ microphone if the user has given our app permission and if they are actively using a specific feature that requires audio (like voice messaging features).” The Menlo Park, California-based company doesn’t address what happens to the audio afterward.

Pentagon testing mass surveillance balloons across the US

The US military is conducting wide-area surveillance tests across six midwest states using experimental high-altitude balloons, documents filed with the Federal Communications Commission (FCC) reveal.

Up to 25 unmanned solar-powered balloons are being launched from rural South Dakota and drifting 250 miles through an area spanning portions of Minnesota, Iowa, Wisconsin and Missouri, before concluding in central Illinois.

Travelling in the stratosphere at altitudes of up to 65,000ft, the balloons are intended to “provide a persistent surveillance system to locate and deter narcotic trafficking and homeland security threats”, according to a filing made on behalf of the Sierra Nevada Corporation, an aerospace and defence company.

The balloons are carrying hi-tech radars designed to simultaneously track many individual vehicles day or night, through any kind of weather.

A rival balloon operator, World View, recently announced that it had carried out multi-week test missions in which its own stratospheric balloons were able to hover over a five-mile-diameter area for six and a half hours, and over larger areas for days at a time.

Ryan Hartman, World View’s CEO, said the company had also completed a dozen surveillance test missions for a customer it would not name, capturing data he would not specify.

“Obviously, there are laws to protect people’s privacy and we are respectful of all those laws,” Hartman said. “We also understand the importance of operating in an ethical way as it relates to further protecting people’s privacy.”

You’re very easy to track down, even when your data has been anonymized

The most common way public agencies protect our identities is anonymization. This involves stripping out obviously identifiable things such as names, phone numbers, and email addresses. Data sets are also altered to be less precise, columns in spreadsheets are removed, and “noise” is introduced to the data. Privacy policies reassure us that this means there’s no risk we could be tracked down in the database. However, a new study in Nature Communications suggests this is far from the case. Researchers from Imperial College London and the University of Louvain have created a machine-learning model that estimates exactly how easy individuals are to reidentify from an anonymized data set. Using the researchers’ online tool, you can check your own score by entering your zip code, gender, and date of birth.

On average, in the U.S., those three attributes are enough to correctly locate someone in an “anonymized” database 81% of the time. Given 15 demographic attributes of someone living in Massachusetts, there’s a 99.98% chance you could find that person in any anonymized database. The tool was created by assembling a database of 210 different data sets from five sources, including the U.S. Census. The researchers fed this data into a machine-learning model, which learned which combinations of attributes are closer to unique and which are less so, and then assigned a probability of correct identification.
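The core intuition is easy to demonstrate. The sketch below is a deliberately simplified stand-in, not the study’s actual method — the researchers fit a generative copula-based model to extrapolate from a sample to the whole population — and just measures what fraction of records in a toy data set are pinned down uniquely by a handful of quasi-identifiers:

```python
# Minimal sketch: empirical uniqueness of quasi-identifier combinations.
# Toy data invented for illustration; not the Nature Communications model.
import pandas as pd

def uniqueness(df: pd.DataFrame, attrs: list[str]) -> float:
    """Fraction of records uniquely identified by the given attributes."""
    sizes = df.groupby(attrs).size()   # records per attribute combination
    return (sizes == 1).sum() / len(df)

people = pd.DataFrame({
    "zip":    ["60601", "60601", "60602", "60602", "60603"],
    "gender": ["F", "M", "F", "F", "M"],
    "dob":    ["1980-01-01", "1980-01-01", "1990-05-05",
               "1990-05-05", "1975-07-07"],
})

print(uniqueness(people, ["zip"]))                   # 0.2 — zip alone is coarse
print(uniqueness(people, ["zip", "gender", "dob"]))  # 0.6 — three attributes
```

Even in this tiny example, adding gender and date of birth to a zip code triples the share of uniquely identifiable records; at population scale, that compounding effect is what drives the 81% and 99.98% figures above.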

How America’s Tech Giants Are Helping Build China’s Surveillance State

The OpenPower Foundation — a nonprofit led by Google and IBM executives with the aim of trying to “drive innovation” — has set up a collaboration between IBM, Chinese company Semptian, and U.S. chip manufacturer Xilinx. Together, they have worked to advance a breed of microprocessors that enable computers to analyze vast amounts of data more efficiently. Shenzhen-based Semptian is using the devices to enhance the capabilities of internet surveillance and censorship technology it provides to human rights-abusing security agencies in China, according to sources and documents. A company employee said that its technology is being used to covertly monitor the internet activity of 200 million people…

Semptian presents itself publicly as a “big data” analysis company that works with internet providers and educational institutes. However, a substantial portion of the Chinese firm’s business is in fact generated through a front company named iNext, which sells the internet surveillance and censorship tools to governments. iNext operates out of the same offices in China as Semptian, with both companies on the eighth floor of a tower in Shenzhen’s busy Nanshan District. Semptian and iNext also share the same 200 employees and the same founder, Chen Longsen. [The company’s] Aegis equipment has been placed within China’s phone and internet networks, enabling the country’s government to secretly collect people’s email records, phone calls, text messages, cellphone locations, and web browsing histories, according to two sources familiar with Semptian’s work.

Promotional documents obtained from the company promise “location information for everyone in the country.” One company representative even told the Intercept they were processing “thousands of terabits per second,” and — not knowing they were talking to a reporter — forwarded a 16-minute video detailing their technology. “If a government operative enters a person’s cellphone number, Aegis can show where the device has been over a given period of time: the last three days, the last week, the last month, or longer,” the Intercept reports.

Can Google Ads Change Your Beliefs?

There are advertisers in the digital marketing industry who want to find out how effective this new form of social engineering is. One of those advertisers is Google. Redirect Method was a Google-incubated project that used redirect ads to deradicalize would-be extremists. In the first eight weeks of 2016, some 320,000 people — all of whom were believed to harbor sympathy toward the Islamic State — clicked on ads designed to reflect an interest in extremist content. Instead of arriving at a page that supported their views, Islamic State sympathizers who clicked the ads found themselves directed to a playlist of videos debunking the terror group’s recruitment narratives. Most of the visitors stuck around. Together, they watched more than half a million minutes of video.

After the ISIS campaign ended, Google left behind a blueprint.

The blueprint shows, step by step, how you can create your own redirect ads to sway any belief or opinion — held by any Google user, anywhere in the world — of your choice. You don’t have to be a marketer with years of experience to do this. You just need to follow the instructions and put up a credit card (a few hundred bucks will suffice). Recently, I followed the blueprint and created a redirect campaign of my own… The goal of my first redirect campaign was to sway the ideology of suicidal people.

Nearly one in three apparently suicidal searchers who viewed his ad then dialed his hotline number — which then forwarded the calls to the National Suicide Prevention Lifeline. But he expressed surprise that Google “let me run the ads with no issue… I didn’t need qualifications to be a conduit of peoples’ fates.” He later tried creating another campaign for prospective school shooters, “but the conversion rates were low. These prospective shooters were reluctant to speak with someone.”

Yet one study found that more than 50% of people using Google couldn’t distinguish between ads and “organic” results on Google.

Amazon Alexa Keeps Your Data With No Expiration Date, and Shares It Too

If you have hangups about Amazon and privacy on its smart assistant, Alexa, you’re not alone. Even after Amazon sent answers to a US senator who had questions about how the tech giant retains voice data and transcripts, the lawmaker remains concerned about Alexa’s privacy practices. From a report:

Sen. Chris Coons, a Democrat from Delaware, sent a letter to Amazon CEO Jeff Bezos in May, demanding answers on Alexa and how long it kept voice recordings and transcripts, as well as what the data gets used for. The letter came after CNET’s report that Amazon kept transcripts of interactions with Alexa, even after people deleted the voice recordings. The deadline for answers was June 30, and Amazon’s vice president of public policy, Brian Huseman, sent a response on June 28. In the letter, Huseman tells Coons that Amazon keeps transcripts and voice recordings indefinitely, and only removes them if they’re manually deleted by users. Huseman also noted that Amazon had an “ongoing effort to ensure those transcripts do not remain in any of Alexa’s other storage systems.” But there are still records from some conversations with Alexa that Amazon won’t delete, even if people remove the audio, the letter revealed.

A Researcher Attempted To Opt Out of Facial Recognition at the Airport — It Wasn’t Easy

The announcement came as we began to board. Last month, I was at Detroit’s Metro Airport for a connecting flight to Southeast Asia. I listened as a Delta Air Lines staff member informed passengers that the boarding process would use facial recognition instead of passport scanners. As a privacy-conscious person, I was uncomfortable boarding this way. I also knew I could opt out. Presumably, most of my fellow fliers did not: I didn’t hear a single announcement alerting passengers how to avoid the face scanners.

To figure out how to do so, I had to leave the boarding line, speak with a Delta representative at the information desk, get back in line, then request a passport scan when it was my turn to board. Federal agencies and airlines claim that facial recognition is an opt-out system, but my recent experience suggests they are incentivizing travelers to have their faces scanned — and disincentivizing them from sidestepping the tech — by not clearly communicating alternative options. Last year, a Delta customer service representative reported that only 2 percent of customers opt out of facial recognition. It’s easy to see why.

Smartphones and Fitness Trackers Are Being Used To Gauge Employee Performance

The passive system incorporates an app known as PhoneAgent, which was developed by Prof. Andrew Campbell at New Hampshire’s Dartmouth College. Using the smartphone’s own sensors, that app continuously monitors factors such as the worker’s phone usage, physical activity level, geographical location, and the ambient light levels of their environment. PhoneAgent is also Bluetooth-linked to a fitness bracelet worn by the employee, which transmits data including their heart functions, sleep quality, stress levels, and calorie consumption. Additionally, Bluetooth locational beacons in the person’s home and workplace monitor how much time they spend at each place, and how often they leave their workstation.

All of the phone, bracelet and beacon data is transmitted to a cloud-based server, where it’s processed by machine-learning algorithms that were “trained” on the habits of people already known to be high- or low-level performers. When tested on 750 workers across the U.S. over a one-year period, the system was reportedly able to distinguish between individuals’ performance levels (in a variety of industries) with an accuracy of 80 percent. The researchers expect that figure to rise as the system is developed further.
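In outline, that is a standard supervised-learning setup. The sketch below is a hypothetical illustration, not the Dartmouth team’s code — the feature names, the random-forest choice, and the data are all assumptions — showing how per-worker sensor aggregates might be fed to a classifier trained on known high and low performers:

```python
# Hypothetical sketch of the described pipeline; feature names, model
# choice, and data are all invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Toy stand-in for per-worker aggregates of phone, bracelet, and beacon
# data: [phone_unlocks/day, active_minutes, sleep_quality, pct_time_at_desk]
X = rng.normal(size=(750, 4))
y = rng.integers(0, 2, size=750)  # 1 = labeled high performer, 0 = low

clf = RandomForestClassifier(n_estimators=200, random_state=0)
# On real, informative features the study reports ~80% accuracy; on this
# random toy data the cross-validated score hovers around chance (~0.5).
print(cross_val_score(clf, X, y, cv=5).mean())
```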