Facebook Contractors Categorize Your Private Posts To Train AI

Facebook uses thousands of third-party staffers around the world to look at Facebook and Instagram posts to help train its AI and to inform new products. But because the contractors see users’ public and private posts, some view the practice as a violation of privacy.

According to Reuters, as many as 260 contract workers in Hyderabad, India, have spent more than a year labeling millions of Facebook posts dating back to 2014. They assess each post’s subject, the occasion and the author’s intent, and Facebook told Reuters it uses that information to develop new features and, potentially, to increase usage and ad revenue. Around the globe, Facebook runs as many as 200 similar content-labeling projects, many of which are used to train the company’s AI.

The contractors working in Hyderabad told Reuters they see everything from text-based status updates to videos, photos and Stories across Facebook and Instagram, including those that are shared privately. And even as Facebook pivots to its “the future is private” vision, one Facebook employee told Reuters he can’t imagine the practice going away: it’s a core part of training AI and developing the company’s products.

‘The goal is to automate us’: welcome to the age of surveillance capitalism

The behaviour of the digital giants looks rather different from the roseate hallucinations of Wired magazine. What one sees instead is a colonising ruthlessness of which John D Rockefeller would have been proud. First of all there was the arrogant appropriation of users’ behavioural data – viewed as a free resource, there for the taking. Then the use of patented methods to extract or infer data even when users had explicitly denied permission, followed by the use of technologies that were opaque by design and fostered user ignorance.

And, of course, there is also the fact that the entire project was conducted in what was effectively lawless – or at any rate law-free – territory. Thus Google decided that it would digitise and store every book ever printed, regardless of copyright issues. Or that it would photograph every street and house on the planet without asking anyone’s permission. Facebook launched its infamous “beacons”, which reported a user’s online activities and published them to others’ news feeds without the knowledge of the user. And so on, in accordance with the disrupter’s mantra that “it is easier to ask for forgiveness than for permission”.

The combination of state surveillance and its capitalist counterpart means that digital technology is separating the citizens in all societies into two groups: the watchers (invisible, unknown and unaccountable) and the watched. This has profound consequences for democracy because asymmetry of knowledge translates into asymmetries of power.

Dutch Government Report Says Microsoft Office Telemetry Collection Breaks EU GDPR Laws

Microsoft broke Euro privacy rules by carrying out the “large scale and covert” gathering of private data through its Office apps, according to a report commissioned by the Dutch government.

The report found that Microsoft was collecting telemetry and other content from its Office applications, including email subject lines and snippets of sentences where the translation or spellcheck features were used, and secretly storing the data on systems in the United States.

Those actions break Europe’s new GDPR privacy safeguards, it is claimed, and may put Microsoft on the hook for potentially tens of millions of dollars in fines. The Dutch authorities are working with the corporation to fix the situation, and are using the threat of a fine as a stick to make it happen.

The investigation was prompted by the fact that Microsoft does not publicly reveal what information it gathers on users, and provides no option for turning off the diagnostic and telemetry data that its Office software sends back to the company to monitor how well the software is functioning and to identify any issues.

Only 22% of Americans Now Trust Facebook’s Handling of Personal Info

Facebook is the least trustworthy of all major tech companies when it comes to safeguarding user data, according to a new national poll conducted for Fortune, highlighting the major challenges the company faces following a series of recent privacy blunders. Only 22% of Americans said that they trust Facebook with their personal information, far less than Amazon (49%), Google (41%), Microsoft (40%), and Apple (39%).

In question after question, respondents ranked the company last in terms of leadership, ethics, trust, and image… Public mistrust extended to Zuckerberg, Facebook’s public face during its privacy crisis, who once said that Facebook has “a responsibility to protect your information. If we can’t, we don’t deserve it.” The company subsequently fell victim to a hack but continued operating as usual, including debuting a video-conferencing device intended for people’s living rooms and kitchens, one that extends Facebook’s reach into areas beyond personal computers and smartphones. Only 59% of respondents said they were “at least somewhat confident” in Zuckerberg’s leadership on the ethical use of data and privacy, ranking him behind the other four tech CEOs surveyed…

As for Facebook, the social networking giant may have a difficult time regaining public trust because of its repeated problems. Consumers are more likely to forgive a company if they believe a problem was an aberration rather than a systemic failure by its leadership, Harris Poll CEO John Gerzema said.

The article concludes that “For now, the public isn’t in a forgiving mood when it comes to Facebook and Zuckerberg.”

Google Exposed Private Data of Hundreds of Thousands of Google+ Users and Then Opted Not To Disclose The Breach

Google exposed the private data of hundreds of thousands of users of the Google+ social network and then opted not to disclose the issue this past spring, in part because of fears that doing so would draw regulatory scrutiny and cause reputational damage. As part of its response to the incident, the Alphabet unit plans to announce a sweeping set of data privacy measures that include permanently shutting down all consumer functionality of Google+, according to people briefed on the matter. The move effectively puts the final nail in the coffin of a product that was launched in 2011 to challenge Facebook and is widely seen as one of Google’s biggest failures.

A software glitch in the social site gave outside developers potential access to private Google+ profile data between 2015 and March 2018, when internal investigators discovered and fixed the issue, according to the documents and people briefed on the incident. A memo prepared by Google’s legal and policy staff, shared with senior executives and reviewed by the Journal, warned that disclosing the incident would likely trigger “immediate regulatory interest” and invite comparisons to Facebook’s leak of user information to the data firm Cambridge Analytica.

Instagram is testing the ability to share your precise location history with Facebook

Just weeks after Instagram’s co-founders left the company, it has emerged that Instagram is testing a feature that would allow it to share your location data with Facebook, even when you’re not using the app.

Instagram is not the only service Facebook has sought to share data with. Back in 2016, the company announced that it would share user data between WhatsApp and Facebook in order to offer better friend suggestions. The practice was later halted in the European Union thanks to its GDPR legislation, and WhatsApp’s CEO and co-founder later left the company over data privacy concerns.

Facebook is also reportedly testing a map view for seeing friends’ locations, similar to what Snapchat already offers. Instagram’s data sharing could provide additional data points to power this functionality, while giving Facebook more data with which to better target its ads.

Proposed Toronto development from Google’s Sidewalk Labs sparks concerns over data

Heated streets will melt ice and snow on contact. Sensors will monitor traffic and protect pedestrians. Driverless shuttles will carry people to their doors.

A unit of Google’s parent company Alphabet is proposing to turn a rundown part of Toronto’s waterfront into what may be the most wired community in history, aiming to “fundamentally redefine what urban life can be.”

Dan Doctoroff, the CEO of Sidewalk Labs, envisions features like pavement that lights up to warn pedestrians of approaching streetcars. Flexible heated enclosures — described as “raincoats” for buildings — will be deployed based on weather data during Toronto’s bitter winters. Robotic waste-sorting systems will detect when a garbage bin is full and remove it before raccoons descend.

“Those are great uses of data that can improve the quality of life of people,” he said. “That’s what we want to do.”

But some Canadians are rethinking the privacy implications of giving one of the most data-hungry companies on the planet the means to wire up everything from street lights to pavement.

The concerns have intensified following a series of privacy scandals at Facebook and Google. A recent Associated Press investigation found that many Google services on iPhones and Android devices store location-tracking data even if you use privacy settings that are supposed to turn them off.

Adam Vaughan, the federal lawmaker whose district includes the development, said debate about big data and urban infrastructure is coming to cities across the world and he would rather have Toronto at the forefront of discussion.

“Google is ahead of governments globally and locally. That’s a cause for concern but it’s also an opportunity,” Vaughan said.

GCHQ mass surveillance violated human rights, court rules

GCHQ’s methods in carrying out bulk interception of online communications violated privacy and failed to provide sufficient surveillance safeguards, the European court of human rights (ECHR) has ruled in a test case judgment.

But the court found that GCHQ’s regime for sharing sensitive digital intelligence with foreign governments was not illegal.

It is the first major challenge to the legality of UK intelligence agencies intercepting private communications in bulk, following Edward Snowden’s whistleblowing revelations. The long-awaited ruling is one of the most comprehensive assessments by the ECHR of the legality of the interception operations run by UK intelligence agencies.

The case was brought by a coalition of 14 human rights groups, privacy organisations and journalists, including Amnesty International, Liberty, Privacy International and Big Brother Watch. In a statement published on Amnesty’s website, Lucy Claridge, Amnesty International’s Strategic Litigation Director, said today’s ruling “represents a significant step forward in the protection of privacy and freedom of expression worldwide. It sends a strong message to the UK Government that its use of extensive surveillance powers is abusive and runs against the very principles that it claims to be defending.” She added: “This is particularly important because of the threat that Government surveillance poses to those who work in human rights and investigative journalism, people who often risk their own lives to speak out. Three years ago, this same case forced the UK Government to admit GCHQ had been spying on Amnesty — a clear sign that our work and the people we work alongside had been put at risk.”

The judges considered three aspects of digital surveillance: bulk interception of communications, intelligence sharing and obtaining of communications data from communications service providers. By a majority of five to two votes, the Strasbourg judges found that GCHQ’s bulk interception regime violated article 8 of the European convention on human rights, which guarantees privacy, because there were said to be insufficient safeguards, and rules governing the selection of “related communications data” were deemed to be inadequate.

Banks and Retailers Are Tracking How You Type, Swipe and Tap

When you’re browsing a website and the mouse cursor disappears, it might be a computer glitch — or it might be a deliberate test to find out who you are.

The way you press, scroll and type on a phone screen or keyboard can be as unique as your fingerprints or facial features. To fight fraud, a growing number of banks and merchants are tracking visitors’ physical movements as they use websites and apps.

The data collection is invisible to those being watched. Using sensors in your phone or code on websites, companies can gather thousands of data points, known as “behavioral biometrics.”
 


A phone’s touchscreen sensors can track where and how you swipe your device to help determine who you are.

The angle at which you hold your device is one of the many biometric markers that can be measured.

Behavioral monitoring software churns through thousands of elements to calculate a probability-based guess about whether a person is who they claim. Two major advances have fed its growing use: the availability of cheap computing power and the sophisticated array of sensors now built into most smartphones.
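
To make the mechanism concrete, here is a minimal sketch of one such behavioural signal, keystroke dynamics: dwell times (how long each key is held) and flight times (the gap between releasing one key and pressing the next), compared against a stored profile with a crude distance score. This is an illustration only, not BioCatch’s or any vendor’s actual algorithm; the event format, profile values and threshold are invented for the example.

```python
# Illustrative sketch of keystroke-dynamics matching (not any vendor's real
# algorithm). Timestamps are in milliseconds.

from statistics import mean

def timing_features(events):
    """events: list of (key, press_ms, release_ms) tuples in typing order."""
    dwells = [release - press for _, press, release in events]   # key hold times
    flights = [events[i + 1][1] - events[i][2]                   # gaps between keys
               for i in range(len(events) - 1)]
    return {"avg_dwell": mean(dwells), "avg_flight": mean(flights)}

def similarity(profile, sample):
    """Smaller is more similar; a real system would use far richer models."""
    return (abs(profile["avg_dwell"] - sample["avg_dwell"])
            + abs(profile["avg_flight"] - sample["avg_flight"]))

# Hypothetical data: the enrolled user's historical profile vs. a new session.
enrolled = {"avg_dwell": 95.0, "avg_flight": 140.0}
session = timing_features([("p", 0, 90), ("a", 230, 330), ("s", 470, 560), ("s", 700, 795)])

THRESHOLD = 40.0  # arbitrary cut-off for this toy example
print("match" if similarity(enrolled, session) < THRESHOLD else "flag for review")
```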

The system’s unobtrusiveness is part of its appeal, Mr. Hanley said. Traditional physical biometrics, like fingerprints or irises, require special scanning hardware for authentication. But behavioral traits can be captured in the background, without customers doing anything to sign up.

BioCatch occasionally tries to elicit a reaction. It can speed up the selection wheel you use to enter data like dates and times on your phone, or make your mouse cursor disappear for a fraction of a second.

“Everyone reacts a little differently to that,” said Frances Zelazny, BioCatch’s chief strategy and marketing officer. “Some people move the mouse side to side; some people move it up and down. Some bang on the keyboard.”

Because your reaction is so individual, it’s hard for a fraudulent user to fake. And because customers never know the monitoring technology is there, it doesn’t impose the kind of visible, and irritating, roadblocks that typically accompany security tests. You don’t need to press your thumb on your phone’s fingerprint reader or type in an authentication code.
 


Biometric software can also determine the pressure you tend to apply to your phone when you tap and type.

“We don’t have to sit people down in a room and get them to type under perfect laboratory conditions,” said Neil Costigan, the chief executive of BehavioSec, a Palo Alto, Calif., company that makes software used by many Nordic banks. “You just watch them, silently, while they go about their normal account activities.”

Why the Facebook ‘scandal’ impacts you more than you think

It’s not just the data you choose to share.

By now we all know the story: Facebook allowed apps on its social media platform that enabled a shady outfit called Cambridge Analytica to scrape the profiles of 87 million users, in order to serve up targeted ads to benefit the Trump election campaign in 2016. More than 300,000 Australian users of Facebook were caught up in the data harvesting.

But serving up ads in a foreign election campaign is not the whole story.  Facebook, and other companies involved in data mining, are invading our privacy and harming us economically and socially, in ways that are only just starting to become clear.

And it’s not just the data you choose to share. The information you post is only the tip of the iceberg of data that Facebook has collected about you.

Every time you go online you leave a trail of digital breadcrumbs.  Facebook has been busily sweeping up those breadcrumbs, and using them to categorise and profile you.  Facebook obviously knows when you click on a Facebook ‘like’ button; but also, unless a web developer has gone out of their way to find tools to block them (as we have done for our Salinger Privacy blog), Facebook knows every time you simply look at a website that has a Facebook ‘like’ button somewhere on it.
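
The mechanics behind that tracking are mundane: embedding a ‘like’ button (or any third-party widget or tracking pixel) makes the visitor’s browser fetch a resource from the third party’s servers, and that request can carry the address of the page being viewed in its Referer header along with any cookie previously set for that third party. The toy server below stands in for the widget host; it is an illustrative sketch, not Facebook’s actual infrastructure, and simply logs those two headers to show how much a widget provider can learn from a page embed.

```python
# Toy stand-in for a third-party widget host (illustrative only).
# Every page that embeds a resource from this server reveals, per visit,
# which page was being viewed (Referer) and who was viewing it (cookie).

from http.server import BaseHTTPRequestHandler, HTTPServer

class WidgetHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        page_visited = self.headers.get("Referer", "(no referer)")
        visitor_id = self.headers.get("Cookie", "(no cookie)")
        print(f"visit logged: page={page_visited} visitor={visitor_id}")

        # Serve a trivial response; set a tracking cookie if the browser has none.
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        if visitor_id == "(no cookie)":
            self.send_header("Set-Cookie", "uid=visitor-12345")  # hypothetical ID
        self.end_headers()
        self.wfile.write(b"widget")

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), WidgetHandler).serve_forever()
```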

So if you only post or ‘like’ stories about inspirational mountain climbers and funny cat videos, but also do things online that you don’t share with your family, friends or work colleagues (like looking at stories about abortion or dealing with infidelity, Googling how to manage anxiety or erectile dysfunction, whingeing about your employer in a chatroom, or spending hours reviewing dating profiles, gambling or shopping obsessively for shoes)  — Facebook has you pegged anyway.

Plus, Facebook obtains data from other sources which know about your offline purchases, to build an even richer picture of who you really are.  And of course, Facebook may have access to your address book, your location history, the contents of your private messages, and depending on your brand of phone, possibly even a history of your phone calls and text messages.

All that information is used to draw inferences and assumptions about your preferences, and predict your likely behaviour.  The results are then used to categorise, profile and ultimately target you, in a process usually described as ‘online behavioural advertising’.

It’s not ‘just ads’

The objective of online behavioural advertising is to predict your purchasing interests and drive a purchase decision.  So far, the same as any other advertising.  But online, the implications for us as individuals are much greater.

Facebook’s promise to advertisers is that it can show their ad to exactly who the advertiser wants, and exclude everybody else.

However, by allowing exclusion, the platform also allows discrimination.  Facebook has been caught allowing advertisers to target — and exclude — people on the basis of their ‘ethnic affinity’, amongst other social, demographic, racial and religious characteristics.  So a landlord with an ad for rental housing could prevent people profiled as ‘single mothers’ from ever seeing their ad.  An employer could prevent people identifying as Jewish from seeing a job ad.  A bank could prevent people categorised as African Americans from seeing an ad for a home loan.

Existing patterns of social exclusion, economic inequality and discrimination are further entrenched by micro-targeted advertising, which is hidden from public view and regulatory scrutiny.

Predictive analytics can narrow or alter your life choices

Once we move beyond straight-up advertising and into predictive analytics, the impact on individual autonomy becomes more acute.  Big Data feeds machine learning, which finds patterns in the data, from which new rules (algorithms) are designed.  Algorithms predict how a person will behave, and suggest how they should be treated.
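
As a toy illustration of that pipeline, and nothing more, the sketch below fits a model to historical behaviour and then scores a new person, turning a pattern found in data into a “rule” about how that person should be treated. The features, figures and decision threshold are all invented for the example.

```python
# Toy version of the Big Data -> pattern -> rule pipeline (illustrative only).
# Features and labels are invented; real systems use thousands of signals.

from sklearn.linear_model import LogisticRegression

# Historical records: [late_payments, purchases_per_month], label 1 = defaulted.
X = [[0, 12], [1, 9], [5, 2], [4, 3], [0, 15], [6, 1], [2, 8], [7, 2]]
y = [0, 0, 1, 1, 0, 1, 0, 1]

model = LogisticRegression().fit(X, y)        # the machine learning finds a pattern

applicant = [[3, 4]]                          # a new person, described only by data
risk = model.predict_proba(applicant)[0][1]   # predicted probability of default

# The prediction becomes a rule about how the person is treated.
decision = "standard offer" if risk < 0.5 else "decline or charge more"
print(f"predicted risk={risk:.2f} -> {decision}")
```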

Algorithms can lead to price discrimination, like surge pricing based on Uber knowing how much phone battery life you have left.  Or market exclusion, like Woolworths only offering car insurance to customers it has decided are low risk, based on an assessment of the groceries they buy.

Banks have been predicting the risk of a borrower defaulting on a loan for decades, but now algorithms are also used to determine who to hire, predict when a customer is pregnant, and deliver targeted search results to influence how you vote.

Algorithms are also being used to predict which students are at risk of failure, which prisoners are at risk of re-offending, and who is at risk of suicide, and then to launch interventions accordingly. However, even leaving aside the accuracy of those predictions, interventions are not necessarily well-intentioned. It was revealed last year that Australian Facebook executives were touting to advertisers their ability to target psychologically vulnerable teenagers.

Automated decision-making diminishes our autonomy, by narrowing or altering our market and life choices, in ways that are not clear to us.  People already in a position of economic or social disadvantage face the additional challenge of trying to disprove or beat an invisible algorithm.

In a predictive and pre-emptive world, empathy, forgiveness, rehabilitation, redemption, individual dignity, autonomy and free will are programmed out of our society.

Fiddling with users’ privacy settings on Facebook won’t fix anything.  If we want our lives to be ruled by human values and individual dignity, instead of by machines fed on questionable data, we need robust, enforced and globally effective privacy laws.

A new European privacy law commences later this month.  The obligations include that businesses and governments must offer understandable explanations of how their algorithms work, and allow people to seek human review of automated decision-making.  This is a step in the right direction, which Australia, the US and the rest of the world should follow.

Google hasn’t stopped reading your e-mails

If you’re a Gmail user, your messages and emails likely aren’t as private as you’d think. Google reads each and every one (even if you definitely don’t), scanning your painfully long email chains and vacation responders in order to collect more data on you. Google uses the data gleaned from your messages in order to inform a whole host of other products and services, NBC News reported Thursday.

Though Google announced last July that it would stop using consumer Gmail content for ad personalization, the language permitting it to do so is still included in its current privacy policy, and it without a doubt still scans users’ emails for other purposes. Aaron Stein, a Google spokesperson, told NBC that Google also automatically extracts keyword data from users’ Gmail accounts, which is then fed into machine learning programs and other products within the Google family. Stein told NBC that Google also “may analyze [email] content to customize search results, better detect spam and malware,” a practice the company first announced back in 2012.

“We collect information about the services that you use and how you use them…” says Google’s privacy policy. “This includes information like your usage data and preferences, Gmail messages, G+ profile, photos, videos, browsing history, map searches, docs, or other Google-hosted content. Our automated systems analyze this information as it is sent and received and when it is stored.”

While Google doesn’t sell this information to third parties, it has used it to power its own advertising network and inform search results, among other things. And this is far from a closely guarded secret. The company has included disclosures relating to these practices in its privacy policy since at least 2012: “When you share information with us, for example by creating a Google Account, we can make those services even better – to show you more relevant search results and ads…,” says Google’s March 2012 privacy policy.

You may be sick of worrying about online privacy, but ‘surveillance apathy’ is also a problem

Siobhan Lyons, Scholar in Media and Cultural Studies, Macquarie University, writes in The Conversation:

We all seem worried about privacy. But it’s not only privacy itself we should be concerned about: our attitudes towards privacy also matter.

When we stop caring about our digital privacy, we witness surveillance apathy.

And it’s something that may be particularly significant for marginalised communities, who feel they hold no power to navigate or negotiate fair use of digital technologies.

In the wake of Edward Snowden’s 2013 NSA leaks, we are more aware of the machinations of online companies such as Facebook and Google. Yet research shows some of us are apathetic when it comes to online surveillance.

Privacy and surveillance

Attitudes to privacy and surveillance in Australia are complex.

According to a major 2017 privacy survey, around 70% of us are more concerned about privacy than we were five years ago.

And yet we still increasingly embrace online activities. A 2017 report on social media conducted by search marketing firm Sensis showed that almost 80% of internet users in Australia now have a social media profile, an increase of around ten points from 2016. The data also showed that Australians are on their accounts more frequently than ever before.

Also, most Australians appear not to be concerned about recently proposed implementation of facial recognition technology. Only around one in three (32% of 1,486) respondents to a Roy Morgan study expressed worries about having their faces available on a mass database.

A recent ANU poll revealed a similar sentiment, with recent data retention laws supported by two thirds of Australians.

So while we’re aware of the issues with surveillance, we aren’t necessarily doing anything about it, or we’re prepared to make compromises when we perceive our safety is at stake.

Across the world, attitudes to surveillance vary. Around half of Americans polled in 2013 found mass surveillance acceptable. France, Britain and the Philippines appeared more tolerant of mass surveillance compared to Sweden, Spain, and Germany, according to 2015 Amnesty International data.

Apathy and marginalisation

In 2015, philosopher Slavoj Žižek proclaimed that he did not care about surveillance (admittedly though suggesting that “perhaps here I preach arrogance”).

This position cannot be assumed by all members of society. Australian academic Kate Crawford argues the impact of data mining and surveillance is more significant for marginalised communities, including people of different races, genders and socioeconomic backgrounds. American academics Shoshana Magnet and Kelley Gates agree, writing:

[…] new surveillance technologies are regularly tested on marginalised communities that are unable to resist their intrusion.

A 2015 White House report found that big data can be used to perpetuate price discrimination among people of different backgrounds. It showed how data surveillance “could be used to hide more explicit forms of discrimination”.

According to Ira Rubinstein, a senior fellow at New York University’s Information Law Institute, ignorance and cynicism are often behind surveillance apathy. Users are either ignorant of the complex infrastructure of surveillance, or they believe they are simply unable to avoid it.

As the White House report stated, consumers “have very little knowledge” about how data is used in conjunction with differential pricing.

So in contrast to the oppressive panopticon (a circular prison with a central watchtower) envisioned by philosopher Jeremy Bentham, we have what Siva Vaidhyanathan calls the “cryptopticon”. The cryptopticon is “not supposed to be intrusive or obvious. Its scale, its ubiquity, even its very existence, are supposed to go unnoticed”.

But Melanie Taylor, lead artist of the computer game Orwell (which puts players in the role of a surveillance agent), noted that many simply remain indifferent despite heightened awareness:

That’s the really scary part: that Snowden revealed all this, and maybe nobody really cared.

The Facebook trap

Surveillance apathy can be linked to people’s dependence on “the system”. As one of my media students pointed out, no matter how much awareness users have regarding their social media surveillance, invariably people will continue using these platforms. This is because they are convenient, practical, and “we are creatures of habit”.

As University of Melbourne scholar Suelette Dreyfus noted in a Four Corners report on Facebook:

Facebook has very cleverly figured out how to wrap itself around our lives. It’s the family photo album. It’s your messaging to your friends. It’s your daily diary. It’s your contact list.

This, along with the complex algorithms Facebook and Google use to collect data and produce “filter bubbles” or “you loops”, is another issue.

Protecting privacy

While some people are attempting to delete themselves from the network, others have come up with ways to avoid being tracked online.

The DuckDuckGo search engine and the Tor Browser allow users to browse without being tracked. Lightbeam, meanwhile, allows users to see how their information is being tracked by third-party companies. And MIT devised a system called Immersion to show people the metadata of their emails.

Surveillance apathy is more disconcerting than surveillance itself. Our very attitudes about privacy will inform the structure of surveillance itself, so caring about it is paramount.

How Facebook Figures Out Everyone You’ve Ever Met

From Slashdot:

“I deleted Facebook after it recommended as People You May Know a man who was defense counsel on one of my cases. We had only communicated through my work email, which is not connected to my Facebook, which convinced me Facebook was scanning my work email,” an attorney told Gizmodo. Kashmir Hill, a reporter at the news outlet, who recently documented how Facebook figured out a connection between her and a family member she did not know existed, shares several more instances others have reported and explains how Facebook gathers information. She reports:

Behind the Facebook profile you’ve built for yourself is another one, a shadow profile, built from the inboxes and smartphones of other Facebook users. Contact information you’ve never given the network gets associated with your account, making it easier for Facebook to more completely map your social connections. Because shadow-profile connections happen inside Facebook’s algorithmic black box, people can’t see how deep the data-mining of their lives truly is, until an uncanny recommendation pops up. Facebook isn’t scanning the work email of the attorney above. But it likely has her work email address on file, even if she never gave it to Facebook herself. If anyone who has the lawyer’s address in their contacts has chosen to share it with Facebook, the company can link her to anyone else who has it, such as the defense counsel in one of her cases. Facebook will not confirm how it makes specific People You May Know connections, and a Facebook spokesperson suggested that there could be other plausible explanations for most of those examples — “mutual friendships,” or people being “in the same city/network.” The spokesperson did say that of the stories on the list, the lawyer was the likeliest case for a shadow-profile connection. Handing over address books is one of the first steps Facebook asks people to take when they initially sign up, so that they can “Find Friends.”

The problem with all this, Hill writes, is that Facebook doesn’t explicitly disclose the scale at which it uses the contact information it gleans from a user’s address book. Furthermore, most people are not aware that Facebook is using contact information taken from their phones for these purposes.
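
Facebook has not published how People You May Know actually works, but the basic contact-matching idea Hill describes can be sketched in a few lines: if several users upload address books containing the same email address or phone number, the service can link the owner of that detail to each uploader, whether or not the owner ever supplied it herself. The sketch below is a hypothetical illustration with made-up names and addresses, not Facebook’s system.

```python
# Illustrative contact-matching sketch (not Facebook's actual system).
# Each uploaded address book links its owner to everyone whose contact
# details it contains -- including people who never shared those details.

from collections import defaultdict

uploaded_address_books = {          # hypothetical uploads
    "alice":  {"lawyer@workmail.example", "bob@mail.example"},
    "bob":    {"lawyer@workmail.example", "alice@mail.example"},
    "lawyer": set(),                 # the lawyer uploaded nothing herself
}

# Invert: which uploaders hold each contact detail?
holders = defaultdict(set)
for uploader, contacts in uploaded_address_books.items():
    for contact in contacts:
        holders[contact].add(uploader)

# If a contact detail can be matched to an account ("shadow" data),
# everyone holding it becomes a candidate connection for that account.
contact_to_account = {"lawyer@workmail.example": "lawyer"}   # assumed match
suggestions = defaultdict(set)
for contact, uploaders in holders.items():
    account = contact_to_account.get(contact)
    if account:
        suggestions[account].update(uploaders - {account})
        for uploader in uploaders:
            suggestions[uploader].add(account)

print(dict(suggestions))   # the lawyer is suggested to alice and bob, and vice versa
```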

PrivacyTools

privacytools.io provides knowledge and tools to protect your privacy against global mass surveillance.

Panopticlick

Electronic Frontier Foundation’s Browser Privacy Tool checks if websites may be able to track you, even if you’ve limited or disabled cookies. Panopticlick tests your browser to see how unique it is based on the information it will share with sites it visits.
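
The idea behind the tool is simple: even without cookies, the attributes a browser volunteers (user agent, screen size, time zone, installed fonts and so on) combine into a signature that is often unique. The sketch below, using made-up attribute values and population frequencies, hashes a set of such attributes into a fingerprint and estimates how identifying each attribute is in bits of surprisal, which is roughly the measure Panopticlick reports. It is an illustration of the concept, not EFF’s code.

```python
# Illustrative browser-fingerprinting sketch (not EFF's actual code).
# Attribute values and population frequencies below are made up.

import hashlib
import math

browser_attributes = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) Firefox/60.0",
    "screen": "1920x1080x24",
    "timezone": "UTC+10",
    "fonts": "Arial,DejaVu Sans,Liberation Serif",
}

# Fraction of browsers sharing each observed value (hypothetical numbers).
observed_frequency = {"user_agent": 0.002, "screen": 0.08, "timezone": 0.05, "fonts": 0.01}

# A stable hash of all attributes acts as the fingerprint.
fingerprint = hashlib.sha256(
    "|".join(f"{k}={v}" for k, v in sorted(browser_attributes.items())).encode()
).hexdigest()

# Surprisal in bits: rarer values identify you more precisely.
bits = {name: -math.log2(freq) for name, freq in observed_frequency.items()}

print("fingerprint:", fingerprint[:16], "...")
for name, b in bits.items():
    print(f"{name}: ~{b:.1f} bits of identifying information")
print(f"combined (if independent): ~{sum(bits.values()):.1f} bits")
```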

PRISM Break

Encrypt your communications and end your reliance on proprietary services.

EFF Surveillance Self-Defence

The Electronic Frontier Foundation’s guide to defending yourself and your friends from mass surveillance by using secure technology and developing awareness practices.

Across the United States, police officers abuse confidential databases

“Police officers across the country misuse confidential law enforcement databases to get information on romantic partners, business associates, neighbors, journalists and others for reasons that have nothing to do with daily police work, an Associated Press investigation has found.
[…]In the most egregious cases, officers have used information to stalk or harass, or have tampered with or sold records they obtained.
[…]Unspecified discipline was imposed in more than 90 instances reviewed by AP. In many other cases, it wasn’t clear from the records if punishment was given at all. The number of violations was surely far higher since records provided were spotty at best, and many cases go unnoticed.

Among those punished: an Ohio officer who pleaded guilty to stalking an ex-girlfriend and who looked up information on her; a Michigan officer who looked up home addresses of women he found attractive; and two Miami-Dade officers who ran checks on a journalist after he aired unflattering stories about the department.

“It’s personal. It’s your address. It’s all your information, it’s your Social Security number, it’s everything about you,” said Alexis Dekany, the Ohio woman whose ex-boyfriend, a former Akron officer, pleaded guilty last year to stalking her. “And when they use it for ill purposes to commit crimes against you — to stalk you, to follow you, to harass you … it just becomes so dangerous.”

The misuse represents only a tiny fraction of the millions of daily database queries run legitimately during traffic stops, criminal investigations and routine police encounters. But the worst violations profoundly abuse systems that supply vital information on criminal suspects and law-abiding citizens alike. The unauthorized searches demonstrate how even old-fashioned policing tools are ripe for abuse, at a time when privacy concerns about law enforcement have focused mostly on more modern electronic technologies.”

Steven Rambam at HOPE XI, 2016

“First came the assault on privacy. Name, address, telephone, DOB, SSN, physical description, friends, family, likes, dislikes, habits, hobbies, beliefs, religion, sexual orientation, finances, every granular detail of a person’s life, all logged, indexed, analyzed and cross-referenced. Then came the gathering of location and communication data. Cell phones, apps, metro cards, license plate readers and toll tags, credit card use, IP addresses and authenticated logins, tower info, router proximity, networked “things” everywhere reporting on activity and location, astoundingly accurate facial recognition mated with analytics and “gigapixel” cameras and, worst of all, mindlessly self-contributed posts, tweets, and “check-ins,” all constantly reporting a subject’s location 24-7-365, to such a degree of accuracy that “predictive profiling” knows where you will likely be next Thursday afternoon. Today we are experiencing constant efforts to shred anonymity. Forensic linguistics, browser fingerprinting, lifestyle and behavior analysis, metadata of all types, HTML5, IPv6, and daily emerging “advances” in surveillance technologies – some seemingly science fiction but real – are combining to make constant, mobile identification and absolute loss of anonymity inevitable. And, now, predictably, the final efforts to homogenize: the “siloing” and Balkanization of the Internet. As Internet use becomes more and more self-restricted to a few large providers, as users increasingly never leave the single ecosystem of a Facebook or a Google, as the massive firehose of information on the Internet is “curated” and “managed” by persons who believe that they know best what news and opinions you should have available to read, see, and believe, the bias of a few will eventually determine what you believe. What is propaganda? What is truth? You simply won’t know. In a tradition dating back to the first HOPE conference, for three full hours Steven Rambam will detail the latest trends in privacy invasion and will demonstrate cutting-edge anonymity-shredding surveillance technologies. Drones will fly, a “privacy victim” will undergo digital proctology, a Q&A period will be provided, and fun will be had by all.”

“Faceless” recognition can identify you even when you hide your face

“With widespread adoption among law enforcement, advertisers, and even churches, face recognition has undoubtedly become one of the biggest threats to privacy out there.

By itself, the ability to instantly identify anyone just by seeing their face already creates massive power imbalances, with serious implications for free speech and political protest.”

Microsoft pitches technology that can read facial expressions at political rallies.

“But more recently, researchers have demonstrated that even when faces are blurred or otherwise obscured, algorithms can be trained to identify people by matching previously-observed patterns around their head and body.

In a new paper uploaded to the ArXiv pre-print server, researchers at the Max Planck Institute in Saarbrücken, Germany, demonstrate a method of identifying individuals even when most of their photos are untagged or obscured. The researchers’ system, which they call the “Faceless Recognition System,” trains a neural network on a set of photos containing both obscured and visible faces, then uses that knowledge to predict the identity of obscured faces by looking for similarities in the area around a person’s head and body.

[…]

“In the past, Facebook has shown its face recognition algorithms can predict the identity of users when they obscure their face with 83% accuracy, using cues such as their stance and body type. But the researchers say their system is the first to do so using a trainable system that uses a full range of body cues surrounding blurred and blacked-out faces.”
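
The paper’s system is more elaborate, but the general recipe described above (train an image classifier on crops of the region around the head and body using photos where the identity is known, then apply it to photos where the face is hidden) can be sketched roughly as follows. This is a generic PyTorch outline that assumes head-and-body crops have already been extracted into per-identity folders; it is not the Max Planck group’s code.

```python
# Rough outline of identity recognition from head-and-body crops
# (generic PyTorch sketch, not the "Faceless Recognition System" itself).
# Assumes crops are pre-extracted into crops/train/<identity>/*.jpg folders.

import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("crops/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Standard CNN backbone; the final layer predicts one of the known identities.
model = models.resnet18()
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):                      # small number of epochs for the sketch
    for images, identities in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), identities)
        loss.backward()
        optimizer.step()

# At inference time the same crop-and-classify step is applied to photos
# where the face itself is blurred or blacked out.
```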