Resources

Facebook increases so-called “deletion” pending time from 14 days to 30

Facebook has increased the pending time to process a deletion request from 14 days to 30.

Is there a reason to believe that Facebook’s also making this change because more people are deleting their accounts? The movement [sic] has certainly gotten a lot of attention in recent months, with WhatsApp cofounder Brian Acton encouraging his nearly 43,000 Twitter followers in March to delete Facebook.

But Facebook’s last earnings report doesn’t give much indication of an exodus — the number of daily active users stayed flat in the U.S. and Canada and even increased in Asia. The number of DAUs in Europe dropped by 4 million, but Facebook executives indicated that they believed that was mostly due to GDPR. Facebook followers will have to wait until the company’s next earnings report on October 30 to see if the dip in Europe DAUs is part of a continuing trend.

For those who want to delete Facebook and are willing to wait out 30 days, here’s a step-by-step guide on how to do so.

Facebook Is Giving Advertisers Access To Your Shadow Contact Information

Kashmir Hill, reporting for Gizmodo:

Last week, I ran an ad on Facebook targeted at a computer science professor named Alan Mislove. Mislove studies how privacy works on social networks and had a theory that Facebook is letting advertisers reach users with contact information collected in surprising ways. I was helping him test the theory by targeting him in a way Facebook had previously told me wouldn’t work. I directed the ad to display to a Facebook account connected to the landline number for Alan Mislove’s office, a number Mislove has never provided to Facebook. He saw the ad within hours.

One of the many ways that ads get in front of your eyeballs on Facebook and Instagram is that the social networking giant lets an advertiser upload a list of phone numbers or email addresses it has on file; it will then put an ad in front of accounts associated with that contact information. A clothing retailer can put an ad for a dress in the Instagram feeds of women who have purchased from them before, a politician can place Facebook ads in front of anyone on his mailing list, or a casino can offer deals to the email addresses of people suspected of having a gambling addiction. Facebook calls this a “custom audience.” You might assume that you could go to your Facebook profile and look at your “contact and basic info” page to see what email addresses and phone numbers are associated with your account, and thus what advertisers can use to target you. But as is so often the case with this highly efficient data-miner posing as a way to keep in contact with your friends, it’s going about it in a less transparent and more invasive way.
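The matching step described above is not done on raw contact data: Facebook’s advertiser guidance has advertisers normalize and SHA-256-hash emails and phone numbers before upload, and matching happens against hashes of the contact info Facebook holds. Here is a minimal sketch of that idea — the function names and sample addresses are illustrative, not Facebook’s actual code:

```python
import hashlib

def normalize_email(email: str) -> str:
    # Custom-audience guidance calls for trimming whitespace and
    # lowercasing identifiers before hashing.
    return email.strip().lower()

def hash_identifier(value: str) -> str:
    # Identifiers are SHA-256 hashed before upload.
    return hashlib.sha256(value.encode("utf-8")).hexdigest()

# An advertiser's uploaded list (sample data, not real addresses):
uploaded = {
    hash_identifier(normalize_email(e))
    for e in ["  Alice@Example.com ", "bob@example.com"]
}

def matches(account_identifiers, audience):
    # Sketch of the platform side: an account falls into the custom
    # audience if any of its hashed contact identifiers appears in
    # the uploaded set.
    return any(
        hash_identifier(normalize_email(i)) in audience
        for i in account_identifiers
    )
```

The point of Hill’s piece is orthogonal to the hashing: whatever identifier Facebook has associated with an account — even a landline number the user never provided — becomes matchable this way.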

… Giridhari Venkatadri, Piotr Sapiezynski, and Alan Mislove of Northeastern University, along with Elena Lucherini of Princeton University, did a series of tests that involved handing contact information over to Facebook for a group of test accounts in different ways and then seeing whether that information could be used by an advertiser. They came up with a novel way to detect whether that information became available to advertisers by looking at the stats provided by Facebook about the size of an audience after contact information is uploaded. They go into this at greater length and technical detail in their paper [PDF]. They found that when a user gives Facebook a phone number for two-factor authentication or in order to receive alerts about new log-ins to a user’s account, that phone number became targetable by an advertiser within a couple of weeks.
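The researchers’ detection trick reduces to a simple idea: hand Facebook one new piece of contact information for a test account, then watch the advertiser-facing audience-size estimate over time. A toy sketch of that logic follows — the actual paper has to account for Facebook rounding and perturbing its size estimates, so this helper is purely illustrative:

```python
def days_until_targetable(observations, baseline_size):
    """observations: list of (days_since_upload, reported_audience_size)
    tuples for an audience built from the test identifier.
    Returns the first day the estimate exceeds the baseline -- i.e.
    the day the identifier became targetable -- or None if it never did.
    Hypothetical helper, not the paper's code."""
    for day, size in observations:
        if size > baseline_size:
            return day
    return None

# A 2FA phone number showing up as targetable after about two weeks,
# consistent with the article's finding:
observed = [(1, 1000), (7, 1000), (15, 1001)]
```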

Officially, Facebook denies the existence of shadow profiles. In a hearing with the House Energy & Commerce Committee earlier this year, when New Mexico Representative Ben Lujan asked Facebook CEO Mark Zuckerberg if he was aware of the so-called practice of building “shadow profiles”, Zuckerberg denied knowledge of it.

India’s Biometric Database Is Creating A Perfect Surveillance State — And U.S. Tech Companies Are On Board

Big U.S. technology companies are involved in the construction of one of the most intrusive citizen surveillance programs in history. For the past nine years, India has been building the world’s biggest biometric database by collecting the fingerprints, iris scans and photos of nearly 1.3 billion people. For U.S. tech companies like Microsoft, Amazon and Facebook, the project, called Aadhaar (which means “proof” or “basis” in Hindi), could be a gold mine. The CEO of Microsoft has repeatedly praised the project, and local media have carried frequent reports on consultations between the Indian government and senior executives from companies like Apple and Google (in addition to South Korean-based Samsung) on how to make tech products Aadhaar-enabled. But when reporters from HuffPost and HuffPost India asked these companies in recent weeks to confirm they were integrating Aadhaar into their products, only one company — Google — gave a definitive response.

That’s because Aadhaar has become deeply controversial, and the subject of a major Supreme Court of India case that will decide the future of the program as early as this month. Launched nine years ago as a simple and revolutionary way to streamline access to welfare programs for India’s poor, the database has become Indians’ gateway to nearly any type of service — from food stamps to a passport or a cell phone connection. Practical errors in the system have caused millions of poor Indians to lose out on aid. And the exponential growth of the project has sparked concerns among security researchers and academics that Aadhaar is the first step toward a surveillance society to rival China’s.

Social Media Manipulation Rising Globally, New Oxford Report Warns

A new report from Oxford University found that manipulation of public opinion over social media platforms is growing at a large scale, despite efforts to combat it. “Around the world, government agencies and political parties are exploiting social media platforms to spread junk news and disinformation, exercise censorship and control, and undermine trust in media, public institutions and science.”

“The number of countries where formally organized social media manipulation occurs has greatly increased, from 28 to 48 countries globally,” says Samantha Bradshaw, co-author of the report. “The majority of growth comes from political parties who spread disinformation and junk news around election periods. There are more political parties learning from the strategies deployed during Brexit and the U.S. 2016 Presidential election: more campaigns are using bots, junk news, and disinformation to polarize and manipulate voters.”

This is despite governments in many democracies introducing new legislation designed to combat fake news on the internet. “The problem with this is that these ‘task forces’ to combat fake news are being used as a new tool to legitimize censorship in authoritarian regimes,” says Professor Phil Howard, co-author and lead researcher on the OII’s Computational Propaganda project. “At best, these types of task forces are creating counter-narratives and building tools for citizen awareness and fact-checking.” Another challenge is the evolution of the mediums individuals use to share news and information. “There is evidence that disinformation campaigns are moving on to chat applications and alternative platforms,” says Bradshaw. “This is becoming increasingly common in the Global South, where large public groups on chat applications are more popular.”

Facebook is not alone in making everyone’s data available for whatever purpose

Most companies that trade in the sale and manipulation of personal information are private and beholden to few rules other than the bare minimum of those they establish themselves, to avoid scrutiny and be able to say “we told you so” if an angry individual ever comes calling. Even if a consumer is aware their data is being passed around, their ability to control it once it’s out there is virtually nil: if they request it be deleted from one data broker, it can simply be bought back from one of several gigantic firms that have been storing it, too.

It is an open question what the actual effect of Cambridge Analytica’s work on the presidential election was, and what the outcome might have been without its influence (most references to its “psychographic” profiling in The New York Times’ story are appropriately skeptical). It would be hard to say without a lot more cooperation from the company and Facebook itself. But the leak by one of its researchers is an incredibly rare glimpse into a fairly routine process in an industry that is so staggeringly enormous and influential, not just in politics but in our personal, day-to-day existence, that it’s difficult to believe that it is anything but a mistake. But it isn’t, and wasn’t, a mistake. It is how things happened and are still happening every day.

Digital ads are starting to feel psychic

It seems like everyone these days has had a paranoiac moment where a website advertises something to you that you recently purchased or were gifted without a digital trail. According to a new website called New Organs, which collects first-hand accounts of these moments, “the feeling of being listened to is among the most common experiences, along with seeing the same ads on different websites, and being tracked via geo-location,” reports The Outline. The website was created by Tega Brain and Sam Lavigne, two Brooklyn-based artists whose work explores the intersections of technology and society…

Facebook is working on technology that allows users to type straight from their thoughts

Facebook is working on technology that allows users to type straight from their thoughts without having to lift a finger to work the keyboard. Regina Dugan, a former director of DARPA and the ex-head of Google’s experimental ATAP research group, said that the brain-computer interface had the capacity to revolutionize how human beings use and interact with technology. Currently, such brain-computer interface technology exists only in medical research, but Facebook’s Building 8 team is committed to bringing it to reality.

How Fracking Companies Use Facebook Surveillance to Ban Protest

Facebook is being used by oil and gas companies to clamp down on protest. Three companies are currently seeking injunctions against protesters: British chemical giant INEOS, which has the largest number of shale gas drilling licenses in the UK; and small UK outfits UK Oil and Gas (UKOG), and Europa Oil and Gas. Among the thousands of pages of documents submitted to British courts by these companies are hundreds of Facebook and Twitter posts from anti-fracking protesters and campaign groups, uncovered by Motherboard in partnership with investigative journalists at DeSmog UK. They show how fracking companies are using social media surveillance carried out by a private firm to strengthen their cases in court, using personal information to discredit activists and justify banning their protests.

Included in the evidence supplied by the oil and gas companies to the courts are many personal or seemingly irrelevant campaigner posts. Some are from conversations on Facebook groups dedicated to particular protests or camps, while others have been captured from individuals’ own profile pages. For instance, a picture of a mother with her baby at a protest was submitted as part of the Europa Oil and Gas case. Another screenshot of a post in the Europa bundle shows a hand-written note from one of the protesters’ mothers accompanying a care package with hand-knitted socks that was sent to an anti-fracking camp. One post included in the UKOG hearing bundle shows two protesters sharing a pint in the sun — not at a protest camp, nor shared on any of the campaign pages’ Facebook groups. A screenshot from INEOS’s hearing bundle shows posts from a protester to his own Facebook wall regarding completely unrelated issues such as prescription drugs, and a generic moan about his manager.

It is not always clear how such posts are being used against these activists except to portray them in a bad light, and a judge could disregard them as irrelevant to the case. But their often personal nature raises questions about how these companies were scrutinising the private lives of campaigners to justify shutting down their protests.

In 2011, the UK government ordered a public inquiry led by Lord Justice Leveson into the culture, practices and ethics of the British press after a leading tabloid newspaper was convicted of phone hacking. One of the activists subject to surveillance, Jon O’Houston, who has been part of the Broadford Bridge Protection Camp, said he felt the monitoring of campaigners was equivalent to the phone hacking that prompted that inquiry.

“What’s said in the groups is generally taken either out of context or cherry-picked”, O’Houston told Motherboard. “When taken out of context, you can make anything look bad or good.”

Despite his posts being used to strengthen the case for injunctions against protesters, he said he wouldn’t necessarily change his behaviour on social media.

“I don’t think I’d ever change the way we operate our groups. There’s too much information there already. If someone wants to go back five years and have a look at what was going on in these groups five years ago, they could do that,” he said.

“It would be very difficult if we stopped using Facebook as a platform,” he added. “We would lose so much of that important stuff. In a way, it’s got us trapped.”

Facebook, Google, and Microsoft Use Design to Trick You Into Handing Over Your Data, New Report Warns

A study from the Norwegian Consumer Council dug into the underhanded tactics used by Microsoft, Facebook, and Google to collect user data. “The findings include privacy intrusive default settings, misleading wording, giving users an illusion of control, hiding away privacy-friendly choices, take-it-or-leave-it choices, and choice architectures where choosing the privacy friendly option requires more effort for the users,” states the report, which includes images and examples of confusing design choices and strangely worded statements involving the collection and use of personal data.

Google makes opting out of personalized ads more of a chore than it needs to be and uses multiple pages of text, unclear design language, and, as described by the report, “hidden defaults” to push users toward the company’s desired action. “If the user tried to turn the setting off, a popup window appeared explaining what happens if Ads Personalization is turned off, and asked users to reaffirm their choice,” the report explained. “There was no explanation about the possible benefits of turning off Ads Personalization, or negative sides of leaving it turned on.” Those who wish to completely avoid personalized ads must traverse multiple menus, making that “I agree” option seem like the lesser of two evils.

In Windows 10, if a user wants to opt out of “tailored experiences with diagnostic data,” they have to click a dimmed lightbulb, while the symbol for opting in is a brightly shining bulb, says the report.

Another example has to do with Facebook. The social media site makes the “Agree and continue” option much more appealing and less intimidating than the grey “Manage Data Settings” option. The report says the company-suggested option is the easiest to use. “This ‘easy road’ consisted of four clicks to get through the process, which entailed accepting personalized ads from third parties and the use of face recognition. In contrast, users who wanted to limit data collection and use had to go through 13 clicks.”

Facebook gave firms broad access to data on users, friends

Facebook reportedly formed data-sharing partnerships with dozens of device makers, including Apple and Samsung, giving them access to information on users, as well as on users’ friends.

The New York Times revealed the extent of the partnerships on Sunday, shedding new light on the social media giant’s behavior related to customer data following a scandal involving the political consulting firm Cambridge Analytica.

The Times found that the company made at least 60 such deals over the past decade, many of which are still in effect, allowing the other companies access to personal data of Facebook users and their friends.

The partnerships may have also violated a 2011 Federal Trade Commission (FTC) consent decree, according to the Times, which Facebook officials denied.

The report comes as Facebook is under scrutiny for its handling of private data after it was revealed that Cambridge Analytica accessed millions of users’ private information.

The partnerships allowed companies like Apple, Blackberry and Amazon to offer users Facebook features, like the ability to post photos, directly from a device without using the Facebook app.

The Times found that the partnerships allowed outside companies to access personal user data like relationship status, religious and political affiliations, work history and birthdays, as well as the information of users’ Facebook friends, even if the friends had blocked Facebook from sharing their information with third parties.

Facebook officials told the Times in interviews that the data-sharing partnerships were different from app developers’ access to Facebook users, and that the device makers are considered “extensions” of the social network.

But security experts and former Facebook engineers expressed concerns that the partnerships offered companies practically unfettered access to hundreds of thousands of Facebook users without their knowledge.

“It’s like having door locks installed, only to find out that the locksmith also gave keys to all of his friends so they can come in and rifle through your stuff without having to ask you for permission,” said Ashkan Soltani, a former FTC chief technologist, according to the Times.

Facebook began ending the partnerships in recent months, but the Times reported that many are still in effect.

Screen watching at all-time high

With Netflix and Amazon Prime, Facebook Video and YouTube, it’s tempting to imagine that the tech industry destroyed TV. The world is more than 25 years into the web era, after all, more than half of American households have had home Internet for 15 years, and the current smartphone paradigm began more than a decade ago. But no. Americans still watch an absolutely astounding amount of traditional television.

In fact, television viewing didn’t peak until 2009-2010, when the average American household watched 8 hours and 55 minutes of TV per day. And the ’00s saw the greatest growth in TV viewing time of any decade since Nielsen began keeping track in 1949-1950: Americans watched 1 hour and 23 minutes more television at the end of the decade than at the beginning. Run the numbers and you’ll find that 32 percent of the increase in viewing time from the birth of television to its peak occurred in the first years of the 21st century.

Over the last 8 years, all the new, non-TV things — Facebook, phones, YouTube, Netflix — have only cut about an hour per day from the dizzying amount of TV that the average household watches. Americans are still watching more than 7 hours and 50 minutes per household per day.

Facebook accused of conducting mass surveillance through its apps

Facebook used its apps to gather information about users and their friends, including some who had not signed up to the social network, reading their text messages, tracking their locations and accessing photos on their phones, a court case in California alleges.

The claims of what would amount to mass surveillance are part of a lawsuit brought against the company by the former startup Six4Three, listed in legal documents filed at the superior court in San Mateo as part of a court case that has been ongoing for more than two years.

It alleges that Facebook used a range of methods, some adapted to the different phones that users carried, to collect information it could use for commercial purposes.

“Facebook continued to explore and implement ways to track users’ location, to track and read their texts, to access and record their microphones on their phones, to track and monitor their usage of competitive apps on their phones, and to track and monitor their calls,” one court document says.

But all details about the alleged mass surveillance scheme have been redacted at Facebook’s request in Six4Three’s most recent filings. Facebook claims these are confidential business matters.

Other alleged projects included one to remotely activate Bluetooth, allowing the company to pinpoint a user’s location without them explicitly agreeing to it. Another involved the development of privacy settings with an early end date that was not flagged to users, letting them expire without notice, the court documents claim.

Facebook admitted recently that it had collected call and text message data from users, but said it only did so with prior consent. However, the Guardian has reported that it logged some messages without explicitly notifying users. The company could not see text messages from iPhone users but could access any photos taken on a phone or stored in the built-in “camera roll” archive system, the court case alleged. Facebook has not disclosed how the photos were analysed.

Facebook has not fully disclosed the manner in which it pre-processes photos on the iOS camera roll, meaning if a user has any Facebook app installed on their iPhone, then Facebook accesses and analyses the photos the user takes and/or stores on the iPhone, the complainant alleges.

Why the Facebook ‘scandal’ impacts you more than you think

It’s not just the data you choose to share.

By now we all know the story: Facebook allowed apps on its social media platform which enabled a shady outfit called Cambridge Analytica to scrape the profiles of 87 million users, in order to serve up targeted ads to benefit the Trump election campaign in 2016.  More than 300,000 Australian users of Facebook were caught up in the data harvesting.

But serving up ads in a foreign election campaign is not the whole story.  Facebook, and other companies involved in data mining, are invading our privacy and harming us economically and socially, in ways that are only just starting to become clear.

And it’s not just the data you choose to share. The information you post is not the whole story.  It’s only the tip of the iceberg of data that Facebook has collected about you.

Every time you go online you leave a trail of digital breadcrumbs.  Facebook has been busily sweeping up those breadcrumbs, and using them to categorise and profile you.  Facebook obviously knows when you click on a Facebook ‘like’ button; but also, unless a web developer has gone out of their way to find tools to block them (as we have done for our Salinger Privacy blog), Facebook knows every time you simply look at a website that has a Facebook ‘like’ button somewhere on it.

So if you only post or ‘like’ stories about inspirational mountain climbers and funny cat videos, but also do things online that you don’t share with your family, friends or work colleagues (like looking at stories about abortion or dealing with infidelity, Googling how to manage anxiety or erectile dysfunction, whingeing about your employer in a chatroom, or spending hours reviewing dating profiles, gambling or shopping obsessively for shoes)  — Facebook has you pegged anyway.

Plus, Facebook obtains data from other sources which know about your offline purchases, to build an even richer picture of who you really are.  And of course, Facebook may have access to your address book, your location history, the contents of your private messages, and depending on your brand of phone, possibly even a history of your phone calls and text messages.

All that information is used to draw inferences and assumptions about your preferences, and predict your likely behaviour.  The results are then used to categorise, profile and ultimately target you, in a process usually described as ‘online behavioural advertising’.

It’s not ‘just ads’

The objective of online behavioural advertising is to predict your purchasing interests and drive a purchase decision.  So far, the same as any other advertising.  But online, the implications for us as individuals are much greater.

Facebook’s promise to advertisers is that it can show their ad to exactly who the advertiser wants, and exclude everybody else.

However, by allowing exclusion, the platform also allows discrimination.  Facebook has been caught allowing advertisers to target — and exclude — people on the basis of their ‘ethnic affinity’, amongst other social, demographic, racial and religious characteristics.  So a landlord with an ad for rental housing could prevent people profiled as ‘single mothers’ from ever seeing their ad.  An employer could prevent people identifying as Jewish from seeing a job ad.  A bank could prevent people categorised as African Americans from seeing an ad for a home loan.

Existing patterns of social exclusion, economic inequality and discrimination are further entrenched by micro-targeted advertising, which is hidden from public view and regulatory scrutiny.


Predictive analytics can narrow or alter your life choices

Once we move beyond straight-up advertising and into predictive analytics, the impact on individual autonomy becomes more acute.  Big Data feeds machine learning, which finds patterns in the data, from which new rules (algorithms) are designed.  Algorithms predict how a person will behave, and suggest how they should be treated.

Algorithms can lead to price discrimination, like surge pricing based on Uber knowing how much phone battery life you have left.  Or market exclusion, like Woolworths only offering car insurance to customers it has decided are low risk, based on an assessment of the groceries they buy.

Banks have been predicting the risk of a borrower defaulting on a loan for decades, but now algorithms are also used to determine who to hire, predict when a customer is pregnant, and deliver targeted search results to influence how you vote.

Algorithms are also being used to predict which students are at risk of failure, which prisoners are at risk of re-offending, and who is at risk of suicide, and then to launch interventions accordingly.  However, even leaving aside the accuracy of those predictions, interventions are not necessarily well-intentioned.  It was revealed last year that Australian Facebook executives were touting to advertisers their ability to target psychologically vulnerable teenagers.

Automated decision-making diminishes our autonomy, by narrowing or altering our market and life choices, in ways that are not clear to us.  People already in a position of economic or social disadvantage face the additional challenge of trying to disprove or beat an invisible algorithm.

In a predictive and pre-emptive world, empathy, forgiveness, rehabilitation, redemption, individual dignity, autonomy and free will are programmed out of our society.

Fiddling with users’ privacy settings on Facebook won’t fix anything.  If we want our lives to be ruled by human values and individual dignity, instead of by machines fed on questionable data, we need robust, enforced and globally effective privacy laws.

A new European privacy law commences later this month.  The obligations include that businesses and governments must offer understandable explanations of how their algorithms work, and allow people to seek human review of automated decision-making.  This is a step in the right direction, which Australia, the US and the rest of the world should follow.

Facebook silently enables facial recognition abilities for users outside EU and Canada

Facebook is now informing users around the world that it’s rolling out facial recognition features. In December, we reported the features would be coming to the platform; that rollout finally appears to have begun. It should be noted that users in the European Union and Canada will not be notified because laws restrict this type of activity in those areas.

With the new tools, you’ll be able to find photos that you’re in but haven’t been tagged in; they’ll help you protect yourself against strangers using your photo; and Facebook will be able to tell people with visual impairments who’s in their photos and videos. Facebook notes that the feature is enabled by default but can be switched off at any time; additionally, the firm says it may add new capabilities later.

While Facebook may want its users to “feel confident” uploading pictures online, it will likely give many other users the heebie-jeebies when they think of the colossal database of faces that Facebook has and what it could do with all that data. Even non-users should be cautious about which photos they appear in if they don’t want to be caught up in Facebook’s web of data.

Is social media causing childhood depression?

Rangan Chatterjee is a GP and says he has seen plenty of evidence of the link between mental ill-health in youngsters and their use of social media.

One 16-year-old boy was referred to him after he self-harmed and ended up in A&E.

“The first thought was to put him on anti-depressants but I chatted to him and it sounded like his use of social media was having a negative impact on his health.”

So Dr Chatterjee suggested a simple solution – the teenager should attempt to wean himself off social media, starting by avoiding it for the hour before he went to bed and, over the course of a few weeks, extending this to two hours at night and two in the morning.

“He reported a significant improvement in his wellbeing and, after six months, I had a letter from his mother saying he was happier at school and integrated into the local community.”

That and similar cases have led him to question the role social media plays in the lives of young people.

“Social media is having a negative impact on mental health,” he said. “I do think it is a big problem and that we need some rules. How do we educate society to use technology so it helps us rather than harms us?”

A 2017 study by The Royal Society of Public Health asked 1,500 young people aged 11-25 to track their moods while using the five most popular social media sites.

It suggested Snapchat and Instagram were the most likely to inspire feelings of inadequacy and anxiety. YouTube had the most positive influence.

Seven in 10 said Instagram made them feel worse about body image and half of 14-24-year-olds reported Instagram and Facebook exacerbated feelings of anxiety. Two-thirds said Facebook made cyber-bullying worse.

Consultant psychiatrist Louise Theodosiou says one of the clearest indications children are spending too long on their phones is their behaviour during a session with a psychiatrist.

“Two or three years ago, it was very unusual for a child to answer their phone or text during an appointment. But now it is common,” said the Royal Manchester Children’s Hospital doctor.

She has seen a rise in cases where social media is a contributing factor in teenage depression, anxiety and other mental health issues. These problems are often complex and wide-ranging – from excessive use of gaming or social media sites to feelings of inadequacy brought on by a constant bombardment of social media images of other people’s lives, to cyber-bullying.

Often such children will refuse to travel to psychiatrist appointments, so a range of professionals have to make home visits to deal with the issue. It can take months to persuade them to leave their bedrooms.

“These kids are living in a fictional world, sometimes to the detriment of their physical health. They might have physical ill-health, like toothache, but they are still not wanting to leave their virtual worlds,” she said.

Dr Theodosiou has seen first-hand how difficult it can be for parents. She has heard of some parents sleeping with the home router to make sure their children cannot connect to the wi-fi in the middle of the night.

Even for those children whose social media use may be judged normal, there are still dangers in the way the internet has become a conduit into the lives of friends and celebrities.

794

Google and Facebook are watching our every move online

You may know that hidden trackers lurk on most websites you visit, soaking up your personal information. What you may not realize, though, is that 76 percent of websites now contain hidden Google trackers, and 24 percent have hidden Facebook trackers, according to the Princeton Web Transparency & Accountability Project. The next highest is Twitter with 12 percent. It is likely that Google or Facebook is watching you on many sites you visit, in addition to tracking you when you use their products. As a result, these two companies have amassed huge data profiles on each person, which can include your interests, purchases, search, browsing and location history, and much more. They then make your sensitive data profile available for invasive targeted advertising that can follow you around the Internet.

So how do we move forward from here? Don’t be fooled by claims of self-regulation, as any useful long-term reforms of Google and Facebook’s data privacy practices fundamentally oppose their core business models: hyper-targeted advertising based on more and more intrusive personal surveillance. Change must come from the outside. Unfortunately, we’ve seen relatively little from Washington. Congress and federal agencies need to take a fresh look at what can be done to curb these data monopolies. They first need to demand more algorithmic and privacy policy transparency, so people can truly understand the extent of how their personal information is being collected, processed and used by these companies. Only then can informed consent be possible. They also need to legislate that people own their own data, enabling real opt-outs. Finally, they need to restrict how data can be combined including being more aggressive at blocking acquisitions that further consolidate data power, which will pave the way for more competition in digital advertising. Until we see such meaningful changes, consumers should vote with their feet.

845

Facebook Really Wants You to Come Back

The social network is getting aggressive with people who don’t log in often, working to keep up its engagement numbers.

It’s been about a year since Rishi Gorantala deleted the Facebook app from his phone, and the company has only gotten more aggressive in its emails to win him back. The social network started out by alerting him every few days about friends that had posted photos or made comments—each time inviting him to click a link and view the activity on Facebook. He rarely did.

Then, about once a week in September, he started to get prompts from a Facebook security customer-service address. “It looks like you’re having trouble logging into Facebook,” the emails would say. “Just click the button below and we’ll log you in. If you weren’t trying to log in, let us know.” He wasn’t trying. But he doesn’t think anybody else was, either.

“The content of mail they send is essentially trying to trick you,” said Gorantala, 35, who lives in Chile. “Like someone tried to access my account so I should go and log in.”

875

Facebook should be ‘regulated like cigarette industry’, says tech CEO

Facebook should be regulated like a cigarette company, because of the addictive and harmful properties of social media, according to Salesforce chief executive Marc Benioff.

Last week, venture capitalist Roger McNamee – an early investor in Facebook – wrote a Guardian column warning that the company would have to “address the harm the platform has caused through addiction and exploitation by bad actors”.

“I was once Mark Zuckerberg’s mentor, but I have not been able to speak to him about this. Unfortunately, all the internet platforms are deflecting criticism and leaving their users in peril,” McNamee wrote.

Earlier, Sean Parker, Facebook’s first president, had described the business practice of social media firms as “a social-validation feedback loop … exactly the kind of thing that a hacker like myself would come up with, because you’re exploiting a vulnerability in human psychology”. Parker now describes himself as “something of a conscientious objector” to social media.

As part of its attempt to win back control of the narrative, Facebook has announced it will begin taking into account how trusted a publisher is as part of its News Feed algorithm. The company’s metric for determining trust, however, is a simple two-question survey, leading some to question how effective it can be.

802

How Facebook’s Political Unit Enables the Dark Art of Digital Propaganda

Under fire for Facebook Inc.’s role as a platform for political propaganda, co-founder Mark Zuckerberg has punched back, saying his mission is above partisanship. “We hope to give all people a voice and create a platform for all ideas,” Zuckerberg wrote in September after President Donald Trump accused Facebook of bias. Zuckerberg’s social network is a politically agnostic tool for its more than 2 billion users, he has said. But Facebook, it turns out, is no bystander in global politics. What he hasn’t said is that his company actively works with political parties and leaders including those who use the platform to stifle opposition — sometimes with the aid of “troll armies” that spread misinformation and extremist ideologies.

The initiative is run by a little-known Facebook global government and politics team that’s neutral in that it works with nearly anyone seeking or securing power. The unit is led from Washington by Katie Harbath, a former Republican digital strategist who worked on former New York Mayor Rudy Giuliani’s 2008 presidential campaign. Since Facebook hired Harbath three years later, her team has traveled the globe helping political clients use the company’s powerful digital tools. In some of the world’s biggest democracies — from India and Brazil to Germany and the U.K. — the unit’s employees have become de facto campaign workers. And once a candidate is elected, the company in some instances goes on to train government employees or provide technical assistance for live streams at official state events.

954

Renée DiResta: The Lunatics are Running the Asylum

511