Resources

The Rise of Techno-authoritarianism

In the behavior of tech companies, The Atlantic’s executive editor warns us about “a clear and coherent ideology that is seldom called out for what it is: authoritarian technocracy. As the most powerful companies in Silicon Valley have matured, this ideology has only grown stronger, more self-righteous, more delusional, and — in the face of rising criticism — more aggrieved.”

The new technocrats are ostentatious in their use of language that appeals to Enlightenment values — reason, progress, freedom — but in fact they are leading an antidemocratic, illiberal movement. Many of them profess unconditional support for free speech, but are vindictive toward those who say things that do not flatter them. They tend to hold eccentric beliefs…. above all, that their power should be unconstrained. The systems they’ve built or are building — to rewire communications, remake human social networks, insinuate artificial intelligence into daily life, and more — impose these beliefs on the population, which is neither consulted nor, usually, meaningfully informed. All this, and they still attempt to perpetuate the absurd myth that they are the swashbuckling underdogs.

The article calls out Marc Andreessen’s Techno-Optimist Manifesto for saying “We believe in adventure… rebelling against the status quo, mapping uncharted territory, conquering dragons, and bringing home the spoils for our community…” (The Atlantic concludes Andreessen’s position “serves only to absolve him and the other Silicon Valley giants of any moral or civic duty to do anything but make new things that will enrich them, without consideration of the social costs, or of history.”)

The article notes that Andreessen “also identifies a list of enemies and ‘zombie ideas’ that he calls upon his followers to defeat, among them ‘institutions’ and ‘tradition.’” But The Atlantic makes a broader critique not just of Andreessen but of other Silicon Valley elites. “The world that they have brought into being over the past two decades is unquestionably a world of reckless social engineering, without consequence for its architects, who foist their own abstract theories and luxury beliefs on all of us…”

None of this happens without the underlying technocratic philosophy of inevitability — that is, the idea that if you can build something new, you must. “In a properly functioning world, I think this should be a project of governments,” [Sam] Altman told The Atlantic’s Ross Andersen last year, referring to OpenAI’s attempts to develop artificial general intelligence. But Altman was going to keep building it himself anyway. Or, as Zuckerberg put it to The New Yorker many years ago: “Isn’t it, like, inevitable that there would be a huge social network of people? … If we didn’t do this someone else would have done it.”

The article includes this damning chat log from a 2004 conversation Zuckerberg had with a friend:

Zuckerberg: If you ever need info about anyone at Harvard.
Zuckerberg: Just ask.
Zuckerberg: I have over 4,000 emails, pictures, addresses, SNS
Friend: What? How’d you manage that one?
Zuckerberg: People just submitted it.
Zuckerberg: I don’t know why.
Zuckerberg: They “trust me”
Zuckerberg: Dumb fucks.

But the article also reminds us that in Facebook’s early days, “Zuckerberg listed ‘revolutions’ among his interests.”

The main dangers of authoritarian technocracy are not at this point political, at least not in the traditional sense. Still, a select few already have authoritarian control, more or less, to establish the digital world’s rules and cultural norms, which can be as potent as political power…

[I]n recent years, it has become clear that regulation is needed, not least because the rise of technocracy proves that Silicon Valley’s leaders simply will not act in the public’s best interest. Much should be done to protect children from the hazards of social media, and to break up monopolies and oligopolies that damage society, and more. At the same time, I believe that regulation alone will not be enough to meaningfully address the cultural rot that the new technocrats are spreading…. We do not have to live in the world the new technocrats are designing for us. We do not have to acquiesce to their growing project of dehumanization and data mining. Each of us has agency.

No more “build it because we can.” No more algorithmic feedbags. No more infrastructure designed to make the people less powerful and the powerful more controlling. Every day we vote with our attention; it is precious, and desperately wanted by those who will use it against us for their own profit and political goals. Don’t let them.

The article specifically recommends “challenging existing norms about the use of apps and YouTube in classrooms, the ubiquity of smartphones in adolescent hands, and widespread disregard for individual privacy. People who believe that we all deserve better will need to step up to lead such efforts.”

“Universities should reclaim their proper standing as leaders in developing world-changing technologies for the good of humankind. (Harvard, Stanford, and MIT could invest in creating a consortium for such an effort — their endowments are worth roughly $110 billion combined.)”


The AI firm that conducted ‘state surveillance’ of social media posts

An industrial estate in Yorkshire is an unlikely location for a state surveillance operation. But these blank-looking warehouses are home to an artificial intelligence (AI) company used by the Government to monitor people’s posts on social media.

Logically has been paid more than £1.2 million of taxpayers’ money to analyse what the Government terms “disinformation” – false information deliberately seeded online – and “misinformation”, which is false information that has been spread inadvertently.

It does this by “ingesting” material from hundreds of thousands of media sources and “all public posts on major social media platforms”, using AI to identify those that are potentially problematic.

It also has a “partnership” with Facebook, which appears to grant Logically’s fact-checkers huge influence over the content other people see.

A joint press release issued in July 2021 suggests that Facebook will limit the reach of certain posts if Logically says they are untrue.

“When Logically rates a piece of content as false, Facebook will significantly reduce its distribution so that fewer people see it, apply a warning label to let people know that the content has been rated false, and notify people who try to share it,” states the press release.
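
That press release describes a three-step flow: demote the post, label it, and notify people who try to share it. Below is a minimal Python sketch of such a pipeline. Every name and number in it, from the Post class to the 0.2 demotion factor, is hypothetical; neither company has published its actual implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    text: str
    reach_multiplier: float = 1.0      # fraction of normal distribution
    warning_label: Optional[str] = None
    notify_sharers: bool = False

def apply_rating(post: Post, rating: str) -> Post:
    """Apply a third-party fact-checker's verdict to a post."""
    if rating == "false":
        post.reach_multiplier = 0.2    # hypothetical demotion factor
        post.warning_label = "Rated false by an independent fact-checker"
        post.notify_sharers = True     # people who try to share get a notice
    return post

demoted = apply_rating(Post("example claim"), "false")
print(demoted.reach_multiplier, demoted.warning_label)
```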


Facebook Has 3 Billion Users

Now, with the biggest thorn in its side — TikTok — facing heightened government scrutiny amid growing tensions between the U.S. and China, Facebook could, perhaps, position itself as a viable, domestic-bred alternative. There’s just one problem: young adults like Devin Walsh (an anecdote in the story) have moved on. […] Today, 3 billion people check Facebook each month. That’s more than a third of the world’s population. And 2 billion log in every day. Yet it still finds itself in a battle for relevancy, and its future, after two decades of existence. For younger generations — those who signed up in middle school, or those who are now in middle school — it’s decidedly not the place to be. Without this trend-setting demographic, Facebook, still the main source of revenue for parent company Meta, risks fading into the background — utilitarian but boring, like email.


Meta Sues Surveillance Company for Scraping Data With Fake Facebook Accounts

Meta has filed a legal complaint against a company for allegedly creating tens of thousands of fake Facebook accounts to scrape user data and provide surveillance services for clients. From a report:
The firm, Voyager Labs, bills itself as “a world leader in advanced AI-based investigation solutions.” What this means in practice is analyzing social media posts en masse in order to make claims about individuals. In 2021, for example, The Guardian reported how Voyager Labs sold its services to the Los Angeles Police Department, with the company claiming to predict which individuals were likely to commit crimes in the future.

Meta announced the legal action in a blog post on January 12th, claiming that Voyager Labs violated its terms of service. According to a legal filing issued on November 11th, Meta alleges that Voyager Labs created over 38,000 fake Facebook user accounts and used its surveillance software to gather data from Facebook and Instagram without authorization. Voyager Labs also collected data from sites including Twitter, YouTube, and Telegram.


Meta’s New Text-to-Video AI Generator is Like DALL-E for Video

A team of machine learning engineers from Facebook’s parent company Meta has unveiled a new system called Make-A-Video. As the name suggests, this AI model allows users to type in a rough description of a scene, and it will generate a short video matching their text. The videos are clearly artificial, with blurred subjects and distorted animation, but still represent a significant development in the field of AI content generation.

“Generative AI research is pushing creative expression forward by giving people tools to quickly and easily create new content,” said Meta in a blog post announcing the work. “With just a few words or lines of text, Make-A-Video can bring imagination to life and create one-of-a-kind videos full of vivid colors and landscapes.” In a Facebook post, Meta CEO Mark Zuckerberg described the work as “amazing progress,” adding: “It’s much harder to generate video than photos because beyond correctly generating each pixel, the system also has to predict how they’ll change over time.”


Facebook Misinformation Is Bad Enough, The Metaverse Will Be Worse

The Rand Corporation is an American nonprofit think tank. And veliath (Slashdot reader #5,435) spotted their recent warning about “a plausible scenario that could soon take place in the metaverse.”
A political candidate is giving a speech to millions of people. While each viewer thinks they are seeing the same version of the candidate, in virtual reality they are actually each seeing a slightly different version. For each and every viewer, the candidate’s face has been subtly modified to resemble the viewer…. The viewers are unaware of any manipulation of the image. Yet they are strongly influenced by it: Each member of the audience is more favorably disposed to the candidate than they would have been without any digital manipulation.

This is not speculation. It has long been known that mimicry can be exploited as a powerful tool for influence. A series of experiments by Stanford researchers has shown that slightly changing the features of an unfamiliar political figure to resemble each voter made people rate politicians more favorably. The experiments took pictures of study participants and real candidates in a mock-up of an election campaign. The pictures of each candidate were modified to resemble each participant. The studies found that even if 40 percent of the participant’s features were blended into the candidate’s face, the participants were entirely unaware the image had been manipulated.

In the metaverse, it’s easy to imagine this type of mimicry at a massive scale.

At the heart of all deception is emotional manipulation. Virtual reality environments, such as Facebook’s (now Meta’s) metaverse, will enable psychological and emotional manipulation of its users at a level unimaginable in today’s media…. We are not even close to being able to defend users against the threats posed by this coming new medium…. In VR, body language and nonverbal signals such as eye gaze, gestures, or facial expressions can be used to communicate intentions and emotions. Unlike verbal language, we often produce and perceive body language subconsciously….

We must not wait until these technologies are fully realized to consider appropriate guardrails for them. We can reap the benefits of the metaverse while minimizing its potential for great harm.

The researchers recommend developing technology to detect this kind of VR manipulation.

“Society did not start paying serious attention to classical social media — meaning Facebook, Twitter, and the like — until things got completely out of hand. Let us not make the same mistake as social media blossoms into the metaverse.”
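
To make the mimicry concrete: the manipulation in the Stanford experiments amounts to a weighted average of two face images. Below is a toy version using the Pillow library, mixing 40 percent of a viewer’s face into a candidate’s. Real systems would align facial landmarks first; this naive per-pixel blend, like the file names, is purely illustrative.

```python
from PIL import Image

# Load a candidate photo and a viewer photo (hypothetical file names),
# forcing them to the same size so they can be blended pixel by pixel.
candidate = Image.open("candidate.jpg").convert("RGB")
viewer = Image.open("viewer.jpg").convert("RGB").resize(candidate.size)

# Image.blend(a, b, alpha) computes a*(1-alpha) + b*alpha per pixel,
# so alpha=0.4 mixes 40% of the viewer's features into the candidate.
personalized = Image.blend(candidate, viewer, alpha=0.4)
personalized.save("personalized_candidate.jpg")
```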


Facebook Button is Disappearing From Websites as Consumers Demand Better Privacy

Other big brands, including Best Buy, Ford Motor, Pottery Barn, Nike, Patagonia, Match and Amazon’s video-streaming service Twitch, have removed the ability to sign on with Facebook. It’s a marked departure from just a few years ago, when the Facebook login was plastered all over the internet, often alongside buttons that let you sign in with Google, Twitter or LinkedIn. Jen Felch, Dell’s chief digital and chief information officer, said people stopped using social logins, for reasons that include concerns over security, privacy and data-sharing.
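
For background on what those buttons do: a “Log in with Facebook” button is typically an OAuth 2.0 redirect. The site sends the visitor to Facebook’s authorization dialog, and Facebook returns a code that the site exchanges for an access token, tying the visit to a real profile. A sketch of the first step follows; the app ID, redirect URI, and state value are placeholders.

```python
from urllib.parse import urlencode

params = {
    "client_id": "YOUR_APP_ID",                # issued by Facebook; placeholder
    "redirect_uri": "https://example.com/auth/callback",
    "state": "random-csrf-token",              # guards against CSRF
    "scope": "public_profile,email",           # data the site requests
}
# The visitor's browser is sent to this URL; Facebook redirects back
# to redirect_uri with a code the site trades for an access token.
login_url = "https://www.facebook.com/dialog/oauth?" + urlencode(params)
print(login_url)
```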


Facebook Engineers: We Have No Idea Where We Keep All Your Personal Data

In March, two veteran Facebook engineers found themselves grilled about the company’s sprawling data collection operations in a hearing for the ongoing lawsuit over the mishandling of private user information stemming from the Cambridge Analytica scandal.

The hearing, a transcript of which was recently unsealed, was aimed at resolving one crucial issue: What information, precisely, does Facebook store about us, and where is it? The engineers’ response will come as little relief to those concerned with the company’s stewardship of billions of digitized lives: They don’t know.

The dispute over where Facebook stores data arose when, as part of the litigation, now in its fourth year, the court ordered Facebook to turn over information it had collected about the suit’s plaintiffs. The company complied but provided data consisting mostly of material that any user could obtain through the company’s publicly accessible “Download Your Information” tool.

Facebook contended that any data not included in this set was outside the scope of the lawsuit, ignoring the vast quantities of information the company generates through inferences, outside partnerships, and other nonpublic analysis of our habits — parts of the social media site’s inner workings that are obscure to consumers. Briefly, what we think of as “Facebook” is in fact a composite of specialized programs that work together when we upload videos, share photos, or get targeted with advertising. The social network wanted to keep data storage in those nonconsumer parts of Facebook out of court.

In 2020, the judge disagreed with the company’s contention, ruling that Facebook’s initial disclosure had indeed been too sparse and that the company must reveal data obtained through its oceanic ability to surveil people across the internet and make monetizable predictions about their next moves.

Facebook’s stonewalling has been revealing on its own, providing variations on the same theme: It has amassed so much data on so many billions of people and organized it so confusingly that full transparency is impossible on a technical level.


Meta Sued For Violating Patient Privacy With Data Tracking Tool

Facebook’s parent company Meta and major US hospitals violated medical privacy laws with a tracking tool that sends health information to Facebook, two proposed class-action lawsuits allege. The lawsuits, filed in the Northern District of California in June and July, focus on the Meta Pixel tracking tool. The tool can be installed on websites to provide analytics on Facebook and Instagram ads. It also collects information about how people click around and input information into those websites.

An investigation by The Markup in early June found that 33 of the top 100 hospitals in the United States use the Meta Pixel on their websites. At seven hospitals, it was installed on password-protected patient portals. The investigation found that the tool was sending information about patient health conditions, doctor appointments, and medication allergies to Facebook.
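
Mechanically, a pixel of this kind is a snippet of JavaScript that fires a request back to Facebook’s servers on each page view, carrying a site identifier, an event name, and the page URL. The Python sketch below imitates the rough shape of such a request. The pixel ID and URL are invented, and the parameter names follow publicly observable pixel traffic rather than any documented guarantee.

```python
import requests

params = {
    "id": "123456789012345",   # hypothetical pixel ID identifying the site
    "ev": "PageView",          # event name; patient portals leaked richer events
    "dl": "https://hospital.example/portal/appointments",  # page being viewed
}
# The real pixel fires this as an image/script request from the browser,
# sending Facebook cookies along with it; this is only the rough shape.
requests.get("https://www.facebook.com/tr/", params=params, timeout=5)
```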


Facebook exec blames society for COVID misinformation

Longtime Facebook veteran Andrew Bosworth insists that political and COVID-19 misinformation are societal problems rather than issues that have been magnified by social networks.

Facebook and other social networks have played a significant role in vaccine hesitancy and the spread of political misinformation. “Individual humans are the ones who choose to believe or not believe a thing. They are the ones who choose to share or not share a thing,” Bosworth said in an interview with “Axios on HBO.” “I don’t feel comfortable at all saying they don’t have a voice because I don’t like what they said.” Bosworth has been leading Facebook’s hardware efforts, including those in virtual and augmented reality. Next year he will become CTO for Meta, Facebook’s parent company. Asked whether vaccine hesitancy would be the same with or without social media, Bosworth defended Facebook’s role in combatting COVID, noting that the company ran one of the largest information campaigns in the world to spread authoritative information.


How Facebook and Google Actually Fund the Creation of Misinformation

MIT’s Technology Review shares data from a Facebook-run tool called CrowdTangle. It shows that by 2018 in the nation of Myanmar (population: 53 million), “all the engagement had instead gone to fake news and clickbait websites.

“In a country where Facebook is synonymous with the internet, the low-grade content overwhelmed other information sources.”
[T]he sheer volume of fake news and clickbait acted like fuel on the flames of already dangerously high ethnic and religious tensions. It shifted public opinion and escalated the conflict, which ultimately led to the death of 10,000 Rohingya, by conservative estimates, and the displacement of 700,000 more. In 2018, a United Nations investigation determined that the violence against the Rohingya constituted a genocide and that Facebook had played a “determining role” in the atrocities. Months later, Facebook admitted it hadn’t done enough “to help prevent our platform from being used to foment division and incite offline violence.” Over the last few weeks, the revelations from the Facebook Papers, a collection of internal documents provided to Congress and a consortium of news organizations by whistleblower Frances Haugen, have reaffirmed what civil society groups have been saying for years: Facebook’s algorithmic amplification of inflammatory content, combined with its failure to prioritize content moderation outside the US and Europe, has fueled the spread of hate speech and misinformation, dangerously destabilizing countries around the world.

But there’s a crucial piece missing from the story. Facebook isn’t just amplifying misinformation.

The company is also funding it.

An MIT Technology Review investigation, based on expert interviews, data analyses, and documents that were not included in the Facebook Papers, has found that Facebook and Google are paying millions of ad dollars to bankroll clickbait actors, fueling the deterioration of information ecosystems around the world.

Facebook pays them for permission to open their content within Facebook’s app (where Facebook controls the advertising) rather than having users click through to the publisher’s own website, reports Technology Review:
Early on, Facebook performed little quality control on the types of publishers joining the program. The platform’s design also didn’t sufficiently penalize users for posting identical content across Facebook pages — in fact, it rewarded the behavior. Posting the same article on multiple pages could as much as double the number of users who clicked on it and generated ad revenue. Clickbait farms around the world seized on this flaw as a strategy — one they still use today… Clickbait actors cropped up in Myanmar overnight. With the right recipe for producing engaging and evocative content, they could generate thousands of U.S. dollars a month in ad revenue, or 10 times the average monthly salary — paid to them directly by Facebook. An internal company document, first reported by MIT Technology Review in October, shows that Facebook was aware of the problem as early as 2019… At one point, as many as 60% of the domains enrolled in Instant Articles were using the spammy writing tactics employed by clickbait farms, the report said…

75% of users who were exposed to clickbait content from farms run in Macedonia and Kosovo had never followed any of the pages. Facebook’s content-recommendation system had instead pushed it into their news feeds.
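
The incentive is simple arithmetic: the program paid out ad revenue on content opened inside Facebook, and duplicating an article across pages multiplied the clicks. A toy model follows. Every input is hypothetical, chosen only to show how the payout could plausibly reach thousands of dollars a month.

```python
# Hypothetical inputs: none of these figures come from the article.
clicks_per_post = 1_000        # clicks one page typically generates per post
pages = 10                     # the same article posted to 10 pages
posts_per_day = 10
revenue_per_1k_clicks = 1.00   # assumed payout rate, in USD

monthly_clicks = clicks_per_post * pages * posts_per_day * 30
monthly_revenue = monthly_clicks * revenue_per_1k_clicks / 1000
print(f"${monthly_revenue:,.0f} per month")   # $3,000 under these inputs
```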

Technology Review notes that Facebook now pays billions of dollars to the publishers in their program. It’s a long and detailed article, which ultimately concludes that the problem “is now happening on a global scale.”
Thousands of clickbait operations have sprung up, primarily in countries where Facebook’s payouts provide a larger and steadier source of income than other forms of available work. Some are teams of people while others are individuals, abetted by cheap automated tools that help them create and distribute articles at mass scale…

Google is also culpable. Its AdSense program fueled the Macedonia- and Kosovo-based farms that targeted American audiences in the lead-up to the 2016 presidential election. And it’s AdSense that is incentivizing new clickbait actors on YouTube to post outrageous content and viral misinformation.

Reached for comment, a Facebook spokesperson told Technology Review that the publication had misunderstood the issue, adding: “we’ve invested in building new expert-driven and scalable solutions to these complex issues for many years, and will continue doing so.”

Google’s spokesperson confirmed examples in the article violated their own policies and removed the content, adding “We work hard to protect viewers from clickbait or misleading content across our platforms and have invested heavily in systems that are designed to elevate authoritative information.”


During COP26, Facebook served ads with climate falsehoods, skepticism

Days after Facebook’s vice president of global affairs, Nick Clegg, touted the company’s efforts to combat climate misinformation in a blog post as the Glasgow summit began, conservative media network Newsmax ran an ad on Facebook (FB.O) that called man-made global warming a “hoax.” The ad, which had multiple versions, garnered more than 200,000 views.

In another, conservative commentator Candace Owens said, “apparently we’re just supposed to trust our new authoritarian government” on climate science, while a U.S. libertarian think-tank ran an ad on how “modern doomsayers” had been wrongly predicting climate crises for decades. Newsmax, Owens and the Daily Wire, which paid for the ad from Owens’s page, did not respond to requests for comment. Facebook, which recently changed its name to Meta, does not have a specific policy on climate misinformation in ads or unpaid posts. Alphabet’s Google said last month it would no longer allow ads that contradict scientific consensus on climate change on YouTube and its other services, though it would allow content that discusses false claims.


Facebook Has Trackers in 25% of Websites and 61% of the Most Popular Apps

Megan Borovicka forgot all about her Facebook account after 2013, reports the Washington Post. “But Facebook never forgot about her.”
The 42-year-old Oakland, California, lawyer never picked any “friends,” posted any status updates, liked any photos or even opened the Facebook app on her phone. Yet over the last decade, Facebook has used an invisible data vacuum to suction up very specific details about her life — from her brand of underwear to where she received her paycheck… It isn’t just the Facebook app that’s gobbling up your information. Facebook is so big, it has convinced millions of other businesses, apps and websites to also snoop on its behalf. Even when you’re not actively using Facebook. Even when you’re not online. Even, perhaps, if you’ve never had a Facebook account.

Here’s how it works: Facebook provides its business partners tracking software they embed in apps, websites and loyalty programs. Any business or group that needs to do digital advertising has little choice but to feed your activities into Facebook’s vacuum: your grocer, politicians and, yes, even the paywall page for this newspaper’s website. Behind the scenes, Facebook takes in this data and tries to match it up to your account. It sits under your name in a part of your profile your friends can’t see, but Facebook uses to shape your experience online. Among the 100 most popular smartphone apps, you can find Facebook software in 61 of them, app research firm Sensor Tower told me. Facebook also has trackers in about 25 percent of websites, according to privacy software maker Ghostery…

Facebook got a notice when I opened Hulu to watch TV. Facebook knew when I went shopping for paint, a rocking chair and fancy beans. Facebook learned I read the websites What To Expect, Lullaby Trust and Happiest Baby. Over two weeks, Facebook tracked me on at least 95 different apps, websites and businesses, and those are just the ones I know about. It was as if Facebook had hired a private eye to prepare a dossier about my life. Why does Facebook think that’s okay? The company emailed me answers about how its tracking technology works, but declined my requests to interview its chief privacy officer or other executives about its alleged monopoly….

Who in their right mind thought they were signing up for this much surveillance back when they first joined Facebook?

The article points out that in 2014 Facebook began allowing its advertisers to target users based on websites they’d visited…and now also gathers more data about users from other companies. And “While many companies were using browser cookies, which could be easily cleared or blocked, Facebook tied what it learned to real identities — the names on our Facebook profiles.” And beyond that, companies “can report other identifying information to Facebook like your email to help it figure out who you are… If you’ve never had a Facebook account at all? It may still be watching.”
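
One common mechanism behind that matching, and a plausible reading of “report other identifying information”: the business sends a hash of an identifier it already holds, such as a normalized email address, and the platform compares it against hashes of its own users’ contact details. The sketch below assumes SHA-256 hashing of lowercased, trimmed emails, a widely used convention for this kind of matching; the lookup logic is illustrative, not Facebook’s actual code.

```python
import hashlib

def normalize_and_hash(email: str) -> str:
    """Hash a normalized email so raw identifiers never change hands."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# The platform's side: an index of hashed account emails (hypothetical).
account_index = {normalize_and_hash("jane@example.com"): "profile-98765"}

# The advertiser's side: a purchase event tagged with a hashed email.
event_hash = normalize_and_hash("Jane@Example.com ")

# The match succeeds despite differences in case and whitespace.
print(account_index.get(event_hash))
```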

It’s a lucrative business, the Post points out. “In 2013, the average American’s data was worth about $19 per year in advertising sales to Facebook, according to its financial statements. In 2020, your data was worth $164 per year.”

What does Facebook know about your off-Facebook activity? You can find out at this URL.

If you just want to stop them from giving this information to advertisers, the right side of that page has an option to “Clear History — Disconnect off-Facebook activity history from your account.” But you then have to also click “More Options” and then “Manage Future Activity” to also stop them from later matching up more of your off-Facebook activity to your profile for advertisers.

If you try to select it, Facebook warns what you’ll be missing — that “Keeping your future off-Facebook activity saved with your account allows us to personalize your experience.” And proceeding anyway then generates a popup reminding you that “We’ll still receive activity from the businesses and organizations you visit. It may be used for measurement purposes and to make improvements to our ads systems, but it will be disconnected from your account.”

And apparently your activity on Oculus isn’t covered, and will still remain connected to your Facebook account.


Facebook is Ditching Plans To Make an Interface That Reads the Brain

The spring of 2017 may be remembered as the coming-out party for Big Tech’s campaign to get inside your head. That was when news broke of Elon Musk’s new brain-interface company, Neuralink, which is working on how to stitch thousands of electrodes into people’s brains. Days later, Facebook joined the quest when it announced that its secretive skunkworks, named Building 8, was attempting to build a headset or headband that would allow people to send text messages by thinking — tapping them out at 100 words per minute. The company’s goal was a hands-free interface anyone could use in virtual reality. “What if you could type directly from your brain?” asked Regina Dugan, a former DARPA officer who was then head of the Building 8 hardware division. “It sounds impossible, but it’s closer than you realize.”

Now the answer is in — and it’s not close at all. Four years after announcing a “crazy amazing” project to build a “silent speech” interface using optical technology to read thoughts, Facebook is shelving the project, saying consumer brain-reading still remains very far off. In a blog post, Facebook said it is discontinuing the project and will instead focus on an experimental wrist controller for virtual reality that reads muscle signals in the arm. “While we still believe in the long-term potential of head-mounted optical [brain-computer interface] technologies, we’ve decided to focus our immediate efforts on a different neural interface approach that has a nearer-term path to market,” the company said. Facebook’s brain-typing project had led it into uncharted territory — including funding brain surgeries at a California hospital and building prototype helmets that could shoot light through the skull — and into tough debates around whether tech companies should access private brain information. Ultimately, though, the company appears to have decided the research simply won’t lead to a product soon enough.


Facebook Used Facial Recognition Without Consent 200K Times, Says Watchdog

Another [$22,000] penalty was issued for illegally collecting social security numbers, not issuing notifications regarding personal information management changes, and other missteps. Facebook has been ordered to destroy facial information collected without consent or obtain consent, and was prohibited from processing identity numbers without legal basis. It was also ordered to destroy collected data and disclose contents related to foreign migration of personal information. Zuck’s brainchild was then told to make it easier for users to check legal notices regarding personal information. The fine is the second-largest ever issued by the organization, the largest ever also going to Facebook. In November 2020 the Social Network was fined [$5.7 million] for passing on personal data to other operators without user permission.

Netflix’s fine was a paltry [$188,000], with that sum imposed for collecting data from five million people without their consent, plus another [$2,700] for not disclosing international transfer of the data. Google got off the easiest, with just a “recommendation” to improve its personal data handling processes and make legal notices more precise. The PIPC (South Korea’s Personal Information Protection Commission) said it is not done investigating methods of collecting personal information from overseas businesses and will continue with a legal review.


What Happens When Big Tech’s Datacenters Come to Small Towns?

Few big tech companies that are building and hiring across America bring that wealth with them when they set up in new communities. Instead, they hire armies of low-paid contractors, many of whom are not guaranteed a job from one month to the next; some of the contracting companies have a history of alleged mistreatment of workers. Nor do local governments share in the companies’ wealth; instead, the tech giants negotiate deals — the details protected by non-disclosure agreements — that exempt them from paying taxes that would fund schools, roads and fire departments….

Globally, by the end of 2020, there were nearly 600 “hyperscale” data centers, where a single company runs thousands of servers and rents out cloud space to customers. That’s more than double the number from 2015. Amazon, Google and Microsoft account for more than half of those hyperscale centers, making data centers one more field dominated by America’s richest and biggest companies… Google in March said it was “investing in America” with a plan to spend $7 billion across 19 states to build more data centers and offices. Microsoft said in April that it plans to build 50 to 100 data centers each year for the foreseeable future. Amazon recently got approval to build 1.75 million square feet of data-center space in Northern Virginia, beyond the 50 data centers it already operates there. Facebook said this year it would spend billions to expand data centers in Iowa, Georgia and Utah; in March it said it was adding an 11th building to its largest data-center facility in rural Prineville, Oregon…

Facebook has spent more than $2 billion expanding its operations in Prineville, but because of the tax incentives it negotiated with local officials, the company paid a total of just $119,403.42 in taxes to Crook County last year, according to the County Assessor’s list of top taxpayers. That’s less than half the taxes paid by Brasada Ranch, a local resort. And according to the Oregon Bureau of Labor and Industries, the data center has been the subject of numerous labor complaints… “I’ve spent way too much of my life watching city councils say, ‘We need a big tech company to show that we’re future-focused,'” says Sebastian Moss, the editor of Data Center Dynamics, which tracks the industry. Towns will give away tax breaks worth hundreds of millions of dollars, his reporting has found, and then express gratitude toward tech companies that have donated a few thousand computers — worth a fraction of the tax breaks — to their cash-strapped school systems. “I sometimes wonder if they’re preying on desperation, going to places that are struggling.”

Communities give up more than tax breaks when they welcome tech companies. Data centers use huge amounts of water to cool computer equipment, yet they’re being built in the drought-stricken American West.

The article cites Bureau of Labor Statistics data showing that 373,300 Americans were working in data processing, hosting, and related services in June — up 52% from 10 years ago.


Facebook Tests Prompts That Ask Users If They’re Worried a Friend is ‘Becoming an Extremist’

Some Facebook users in the United States are being served a prompt that asks if they are worried that someone they know might be becoming an extremist. Others are being notified that they may have been exposed to extremist content.

It is all part of a test the social media company is running that stems from its Redirect Initiative, which aims to combat violent extremism, Andy Stone, a Facebook spokesperson, told CNN. Screen shots of the alerts surfaced on social media Thursday. “This test is part of our larger work to assess ways to provide resources and support to people on Facebook who may have engaged with or were exposed to extremist content, or may know someone who is at risk,” Stone said. “We are partnering with NGOs and academic experts in this space and hope to have more to share in the future,” Stone added. One of the alerts, a screen grab of which made the rounds on social media Thursday, asks users, “Are you concerned that someone you know is becoming an extremist?” “We care about preventing extremism on Facebook,” explained that alert, according to a screen grab posted on social media. “Others in your situation have received confidential support.”


The Facebook Loophole that Lets World Leaders Deceive and Harass Their Citizens

Facebook has repeatedly allowed world leaders and politicians to use its platform to deceive the public or harass opponents despite being alerted to evidence of the wrongdoing. The Guardian has seen extensive internal documentation showing how Facebook handled more than 30 cases across 25 countries of politically manipulative behavior that was proactively detected by company staff. The investigation shows how Facebook has allowed major abuses of its platform in poor, small and non-western countries in order to prioritize addressing abuses that attract media attention or affect the US and other wealthy countries. The company acted quickly to address political manipulation affecting countries such as the US, Taiwan, South Korea and Poland, while moving slowly or not at all on cases in Afghanistan, Iraq, Mongolia, Mexico and much of Latin America.


Australia Sues Facebook Over Its Use of Onavo To Snoop

Australia’s Competition and Consumer Commission (ACCC) is suing Facebook over its use, in 2016 and 2017, of the Onavo VPN app to spy on users for commercial purposes. From a report:
The ACCC’s case accuses Facebook of false, misleading or deceptive conduct toward thousands of Australian consumers, after it had promoted the Onavo Protect app — saying it would keep users’ personal activity data private, protected and secret and not use it for any other purpose, when it was being used to gather data to help Facebook’s business. “Through Onavo Protect, Facebook was collecting and using the very detailed and valuable personal activity data of thousands of Australian consumers for its own commercial purposes, which we believe is completely contrary to the promise of protection, secrecy and privacy that was central to Facebook’s promotion of this app,” said ACCC chair Rod Sims in a statement. “Consumers often use VPN services because they care about their online privacy, and that is what this Facebook product claimed to offer. In fact, Onavo Protect channelled significant volumes of their personal activity data straight back to Facebook.”
