Archives 2021

Mitto, Texting Provider for Google, Twitter, WhatsApp, Microsoft’s LinkedIn, Telegram, TikTok, Tencent and Alibaba, Ran a Secret Surveillance Operation

The co-founder of a company that has been trusted by technology giants including Google and Twitter to deliver sensitive passwords to millions of their customers also operated a service that ultimately helped governments secretly surveil and track mobile phones, Bloomberg reported Monday, citing former employees and clients. From the report:
Since it started in 2013, Mitto AG has established itself as a provider of automated text messages for such things as sales promotions, appointment reminders and security codes needed to log in to online accounts, telling customers that text messages are more likely to be read and engaged with than emails as part of their marketing efforts. Mitto, a closely held company with headquarters in Zug, Switzerland, has grown its business by establishing relationships with telecom operators in more than 100 countries. It has brokered deals that gave it the ability to deliver text messages to billions of phones in most corners of the world, including countries that are otherwise difficult for Western companies to penetrate, such as Iran and Afghanistan. Mitto has attracted major technology giants as customers, including Google, Twitter, WhatsApp, Microsoft’s LinkedIn and messaging app Telegram, in addition to China’s TikTok, Tencent and Alibaba, according to Mitto documents and former employees.

But a Bloomberg News investigation, carried out in collaboration with the London-based Bureau of Investigative Journalism, indicates that the company’s co-founder and chief operating officer, Ilja Gorelik, was also providing another service: selling access to Mitto’s networks to secretly locate people via their mobile phones. That Mitto’s networks were also being used for surveillance work wasn’t shared with the company’s technology clients or the mobile operators Mitto works with to spread its text messages and other communications, according to four former Mitto employees. The existence of the alternate service was known only to a small number of people within the company, these people said. Gorelik sold the service to surveillance-technology companies which in turn contracted with government agencies, according to the employees.

Is the Internet Changing the Way We Remember?

“A study in 2019 found that the spatial memory used for navigating through the world tends to be worse for people who’ve made extensive use of map apps and GPS devices…” reports NBC News.

But that’s just the beginning, according to Adrian Ward, who studies psychology at the University of Texas at Austin. NBC says Ward’s research suggests “People who lean on a search engine such as Google may get the right answers but they can also end up with a wrong idea of how strong their own memory is.”

In Ward’s research, published in October in the journal Proceedings of the National Academy of Sciences, he used a series of eight experiments to test how people used and thought about their own knowledge as they completed short general-knowledge quizzes. Some participants had access to Google while answering the questions — “What is the most widely spoken language in the world?” was one — while others did not. They also completed surveys. He found that people who used Google were more confident in their own ability to think and remember, and erroneously predicted that they would know significantly more in future quizzes without the help of the internet. Ward attributed that to Google’s design: simple and easy, less like a library and more like a “neural prosthetic” that simulates a search in a human brain.

“The speed makes it so you never understand what you don’t know,” Ward said.

The findings echo and build on earlier research, including a widely cited 2011 paper on the “Google effect”: a phenomenon in which people are less likely to remember information if they know they can find it later on the internet…. In a review of recent studies in the field, published in September, researchers at Duke University found that the “externalization” of memories into digital spheres “changes what people attend to and remember about their own experiences.” Digital media is new and different, they wrote, because of factors such as how easily images are edited or the huge number of memories at people’s fingertips.

Each photographic cue means another chance for a memory to be “updated,” maybe with a false impression, and each manipulation of a piece of social media content is a chance for distortion, wrote the researchers, doctoral student Emmaline Drew Eliseev and Elizabeth Marsh, a professor of psychology and neuroscience and director of a lab dedicated to studying memory.

Facebook exec blames society for COVID misinformation

Longtime Facebook veteran Andrew Bosworth insists that political and COVID-19 misinformation are societal problems rather than issues that have been magnified by social networks.

Facebook and other social networks have played a significant role in vaccine hesitancy and the spread of political misinformation. “Individual humans are the ones who choose to believe or not believe a thing. They are the ones who choose to share or not share a thing,” Bosworth said in an interview with “Axios on HBO.” “I don’t feel comfortable at all saying they don’t have a voice because I don’t like what they said.” Bosworth has been leading Facebook’s hardware efforts, including those in virtual and augmented reality. Next year he will become CTO for Meta, Facebook’s parent company. Asked whether vaccine hesitancy would be the same with or without social media, Bosworth defended Facebook’s role in combatting COVID, noting that the company ran one of the largest information campaigns in the world to spread authoritative information.

How Facebook and Google Actually Fund the Creation of Misinformation

MIT’s Technology Review shares data from a Facebook-run tool called CrowdTangle. It shows that by 2018 in the nation of Myanmar (population: 53 million), “all the engagement had instead gone to fake news and clickbait websites.”

“In a country where Facebook is synonymous with the internet, the low-grade content overwhelmed other information sources.”
[T]he sheer volume of fake news and clickbait acted like fuel on the flames of already dangerously high ethnic and religious tensions. It shifted public opinion and escalated the conflict, which ultimately led to the death of 10,000 Rohingya, by conservative estimates, and the displacement of 700,000 more. In 2018, a United Nations investigation determined that the violence against the Rohingya constituted a genocide and that Facebook had played a “determining role” in the atrocities. Months later, Facebook admitted it hadn’t done enough “to help prevent our platform from being used to foment division and incite offline violence.” Over the last few weeks, the revelations from the Facebook Papers, a collection of internal documents provided to Congress and a consortium of news organizations by whistleblower Frances Haugen, have reaffirmed what civil society groups have been saying for years: Facebook’s algorithmic amplification of inflammatory content, combined with its failure to prioritize content moderation outside the US and Europe, has fueled the spread of hate speech and misinformation, dangerously destabilizing countries around the world.

But there’s a crucial piece missing from the story. Facebook isn’t just amplifying misinformation.

The company is also funding it.

An MIT Technology Review investigation, based on expert interviews, data analyses, and documents that were not included in the Facebook Papers, has found that Facebook and Google are paying millions of ad dollars to bankroll clickbait actors, fueling the deterioration of information ecosystems around the world.

Facebook pays them for permission to open their content within Facebook’s app (where Facebook controls the advertising) rather than having users click through to the publisher’s own website, reports Technology Review:
Early on, Facebook performed little quality control on the types of publishers joining the program. The platform’s design also didn’t sufficiently penalize users for posting identical content across Facebook pages — in fact, it rewarded the behavior. Posting the same article on multiple pages could as much as double the number of users who clicked on it and generated ad revenue. Clickbait farms around the world seized on this flaw as a strategy — one they still use today… Clickbait actors cropped up in Myanmar overnight. With the right recipe for producing engaging and evocative content, they could generate thousands of U.S. dollars a month in ad revenue, or 10 times the average monthly salary — paid to them directly by Facebook. An internal company document, first reported by MIT Technology Review in October, shows that Facebook was aware of the problem as early as 2019… At one point, as many as 60% of the domains enrolled in Instant Articles were using the spammy writing tactics employed by clickbait farms, the report said…

75% of users who were exposed to clickbait content from farms run in Macedonia and Kosovo had never followed any of the pages. Facebook’s content-recommendation system had instead pushed it into their news feeds.

Technology Review notes that Facebook now pays billions of dollars to the publishers in their program. It’s a long and detailed article, which ultimately concludes that the problem “is now happening on a global scale.”
Thousands of clickbait operations have sprung up, primarily in countries where Facebook’s payouts provide a larger and steadier source of income than other forms of available work. Some are teams of people while others are individuals, abetted by cheap automated tools that help them create and distribute articles at mass scale…

Google is also culpable. Its AdSense program fueled the Macedonia- and Kosovo-based farms that targeted American audiences in the lead-up to the 2016 presidential election. And it’s AdSense that is incentivizing new clickbait actors on YouTube to post outrageous content and viral misinformation.

Reached for comment, a Facebook spokesperson told Technology Review that the reporters had misunderstood the issue, and said: “we’ve invested in building new expert-driven and scalable solutions to these complex issues for many years, and will continue doing so.”

A Google spokesperson confirmed that examples cited in the article violated the company’s policies and said the content had been removed, adding: “We work hard to protect viewers from clickbait or misleading content across our platforms and have invested heavily in systems that are designed to elevate authoritative information.”

Notifications Are Driving Us Crazy

We’re on alert overload. Stray comments and offhand requests once shouted across the office now blink and buzz at us from Microsoft Teams and Slack. Our communication has grown fragmented, spread across myriad apps we have to learn, conform to, remember to check.

Meanwhile, personal texts and social-media mentions have bled into the workday after all this time at home, adding another layer of distraction to our time on the clock. Why put your phone on silent if the boss isn’t hovering over you? Our culture has evolved to accommodate rapid communication, says Gloria Mark, a professor of informatics at the University of California, Irvine, and it can be mentally taxing. Many of us struggle to conjure up that brilliant thought that hit right before the notification burst in. “Your memory is just overflowing with information,” she says.

It doesn’t make for great circumstances for getting work done, but there are ways individuals, managers and organizations can contend with the onslaught. Dr. Mark’s research finds people switch screens an average of 566 times a day. Half the time we’re interrupted; the other half we pull ourselves away. Breaks — even mindless ones like scrolling Facebook — can be positive, replenishing our cognitive resources, Dr. Mark says. But when something external diverts our focus, it takes us an average of 25 minutes and 26 seconds to get back to our original task, she has found. (Folks often switch to different projects in between.) And it stresses us out. Research using heart monitors shows that the interval between people’s heartbeats becomes more regular when they’re interrupted, a sign they’re in fight-or-flight mode. The onus is on teams and organizations to create new norms, Dr. Mark says. If individuals just up and turn off their notifications, they’ll likely be penalized for missing information. Instead, managers should create quiet hours where people aren’t expected to respond. “It’s a matter of relearning how to work,” she says.

How China Uses Western Influencers As Pawns In Its Propaganda War

China is recruiting YouTubers to report on the country in a positive light and counter the West’s increasingly negative perceptions, the New York Times reports. “The videos have a casual, homespun feel. But on the other side of the camera often stands a large apparatus of government organizers, state-controlled news media and other official amplifiers — all part of the Chinese government’s widening attempts to spread pro-Beijing messages around the planet,” the report says. “State-run news outlets and local governments have organized and funded pro-Beijing influencers’ travel, according to government documents and the creators themselves. They have paid or offered to pay the creators. They have generated lucrative traffic for the influencers by sharing videos with millions of followers on YouTube, Twitter and Facebook.”

Typically, the Chinese government support comes in the form of free organized trips around China, particularly in Xinjiang. By showing the influencers a carefully sanitized image of life in the country, the authorities don’t need to worry about negative stories. They simply make it easy for the YouTubers to present images of jolly peasants and happy city-dwellers, because that’s all they are allowed to see. One of the authors of the New York Times piece, Paul Mozur, noted on Twitter another important way that the authorities are able to help their influencer guests. Once produced, the China-friendly videos are boosted massively by state media and diplomatic Facebook and Twitter accounts: “One video by Israeli influencer Raz Gal-Or portraying Xinjiang as ‘totally normal’ was shared by 35 government connected accounts with a total of 400 million followers. Many were Chinese embassy Facebook accounts, which posted about the video in numerous languages.”

A new report from the Australian Strategic Policy Institute, “Borrowing mouths to speak on Xinjiang,” has some more statistics on this practice: “Our data collection has found that, between January 2020 and August 2021, 156 Chinese state-controlled accounts on US-based social media platforms have published at least 546 Facebook posts, Twitter posts and shared articles from [China Global Television Network], Global Times, Xinhua or China Daily websites that have amplified Xinjiang-related social media content from 13 influencer accounts. More than 50% of that activity occurred on Facebook.” Mozur says that the use of Western influencers in this way also allows employees of Beijing-controlled media, like the journalist Li Jingjing, to present themselves as independent YouTubers. On Twitter, however, she is labeled as “China state-affiliated media.” The Australian Strategic Policy Institute sees this as part of a larger problem (pdf): “labelling schemes adopted by some video-sharing and social media platforms to identify state-affiliated accounts are inconsistently applied to media outlets and journalists working for those outlets. In addition, few platforms appear to have clear policies on content from online influencers or vloggers whose content may be facilitated by state-affiliated media, through sponsored trips, for example.”

According to Mozur, China’s state broadcaster is actively looking for more influencers, offering bonuses and publicity for those who sign up. In the US, China’s consulate general is paying $300,000 to a firm to recruit influencers for the Winter Olympics, ranging from Celebrity Influencers with millions of Instagram or TikTok followers, to Nano Influencers, with merely a few thousand. The ultimate goal of deploying these alternative voices is not to disprove negative stories appearing in Western media, but something arguably worse, as the New York Times report explains: “China is the new super-abuser that has arrived in global social media,” said Eric Liu, a former content moderator for Chinese social media. “The goal is not to win, but to cause chaos and suspicion until there is no real truth.”

During COP26, Facebook served ads with climate falsehoods, skepticism

Days after Facebook’s vice president of global affairs, Nick Clegg, touted the company’s efforts to combat climate misinformation in a blog post as the Glasgow summit began, the conservative media network Newsmax ran an ad on Facebook that called man-made global warming a “hoax.” The ad, which had multiple versions, garnered more than 200,000 views.

In another, conservative commentator Candace Owens said, “apparently we’re just supposed to trust our new authoritarian government” on climate science, while a U.S. libertarian think-tank ran an ad on how “modern doomsayers” had been wrongly predicting climate crises for decades. Newsmax, Owens and the Daily Wire, which paid for the ad from Owens’s page, did not respond to requests for comment. Facebook, which recently changed its name to Meta, does not have a specific policy on climate misinformation in ads or unpaid posts. Alphabet’s Google said last month it would no longer allow ads that contradict scientific consensus on climate change on YouTube and its other services, though it would allow content that discusses false claims.

Uber and Lyft mean more pollution

The efficiencies of ride hailing were supposed to all but end car ownership; instead vehicle sales are on the rise again this year, after a down year in 2020. There is also evidence that Uber and Lyft may actually spur an increase in car sales in cities where they begin operating.

Passengers have largely eschewed pooled rides and public transit in favor of private trips, leading to downtown bottlenecks in cities like San Francisco.

Drivers also need to circle around waiting for, or driving to fetch, their next passenger, a practice known as deadheading. Lyft and Uber estimated in 2019 that deadheading accounts for about 40 percent of rideshare miles driven in six American cities. Researchers at Carnegie Mellon estimated that this driving without a passenger leads to a roughly 20 percent overall increase in fuel consumption and greenhouse gas emissions compared to trips made by personal vehicles.

The researchers also found that switching from a private car to on-demand rides, like an Uber or Lyft, increased the external costs of a typical trip by 30 to 35 percent, or roughly 35 cents on average, because of the added congestion, collisions and noise from ridesharing services.

Wind and Solar provide 10% of Global Electricity while coal emissions jump to new highs

… wind and solar energy combined to generate more than 10 percent of the world’s electricity in 2021 — though coal-fired power plant generation and emissions jumped to new highs in the same period, too.

How Far Can You Go to Resist Filmers of a Viral Video?

Recently I saw eight seconds of video that capture this problem in its most extreme form. A boy and a girl, who appear to be of high school age, are walking into Panda Express when a third teenager with blond hair stops them in the doorway. He brings with him the energy of the hustler or the man-on-the-street interview host, and the couple are temporarily frozen, caught between suspicion and courtesy. It is a space where things could go either way. “Hey, hold on, excuse me — I have something really important to ask you,” the blond kid says to the girl. “The moment I saw you, my eyes were just — oh, my God, I love you, please could — bleagh!” The “bleagh” is the sound he makes when the other boy punches him in the face…

But perhaps what is most remarkable is the distinct moment of resignation that he and his girlfriend share when they realize what the blond kid is doing. Around the time he gets to “my eyes,” she turns away and steps inside, while Overalls Kid calmly sets his smoothie on the ground in preparation to jack his interlocutor in the mouth. The sound of the impact is meaty. The video ends with both of them stumbling out of the frame, Blond Kid reeling and Overalls Kid winding up for another blow. It’s an efficiently cut bit of action that rewards repeat viewings, but it left me with one question: How do we feel about that punch?

I think we can agree that a punch would not be justified if Blond Kid were professing his love sincerely. But he isn’t. He’s professing his love while an unidentified fourth party records the whole thing, presumably as part of the “hit on another guy’s girlfriend” internet challenge. In this context, he is using other people as props, a bad behavior that society should discourage. But what are we willing to condone in order to discourage it? Our collective culture has just begun to decide how we feel about this kind of activity, which has been invented by new technology and will only become more prevalent in the future.

The article ultimately argues that internet video apps belong to Generation Z “the way heroin belongs to junkies.” Seen from this perspective, Overalls Kid is part of a history of violent resistance to foreign influence that Americans will recognize in everything from the Boston Tea Party to Al Qaeda to the Ewoks.

“Our reams of fretting essays about how much the kids love phones tend to ignore who gave them phones in the first place.”

Facebook Has Trackers in 25% of Websites and 61% of the Most Popular Apps

Megan Borovicka forgot all about her Facebook account after 2013, reports the Washington Post. “But Facebook never forgot about her.”
The 42-year-old Oakland, California, lawyer never picked any “friends,” posted any status updates, liked any photos or even opened the Facebook app on her phone. Yet over the last decade, Facebook has used an invisible data vacuum to suction up very specific details about her life — from her brand of underwear to where she received her paycheck… It isn’t just the Facebook app that’s gobbling up your information. Facebook is so big, it has convinced millions of other businesses, apps and websites to also snoop on its behalf. Even when you’re not actively using Facebook. Even when you’re not online. Even, perhaps, if you’ve never had a Facebook account.

Here’s how it works: Facebook provides its business partners tracking software they embed in apps, websites and loyalty programs. Any business or group that needs to do digital advertising has little choice but to feed your activities into Facebook’s vacuum: your grocer, politicians and, yes, even the paywall page for this newspaper’s website. Behind the scenes, Facebook takes in this data and tries to match it up to your account. It sits under your name in a part of your profile your friends can’t see, but Facebook uses to shape your experience online. Among the 100 most popular smartphone apps, you can find Facebook software in 61 of them, app research firm Sensor Tower told me. Facebook also has trackers in about 25 percent of websites, according to privacy software maker Ghostery…

Facebook got a notice when I opened Hulu to watch TV. Facebook knew when I went shopping for paint, a rocking chair and fancy beans. Facebook learned I read the websites What To Expect, Lullaby Trust and Happiest Baby. Over two weeks, Facebook tracked me on at least 95 different apps, websites and businesses, and those are just the ones I know about. It was as if Facebook had hired a private eye to prepare a dossier about my life. Why does Facebook think that’s okay? The company emailed me answers about how its tracking technology works, but declined my requests to interview its chief privacy officer or other executives about its alleged monopoly….

Who in their right mind thought they were signing up for this much surveillance back when they first joined Facebook?

The article points out that in 2014 Facebook began allowing its advertisers to target users based on websites they’d visited…and now also gathers more data about users from other companies. And “While many companies were using browser cookies, which could be easily cleared or blocked, Facebook tied what it learned to real identities — the names on our Facebook profiles.” And beyond that, companies “can report other identifying information to Facebook like your email to help it figure out who you are… If you’ve never had a Facebook account at all? It may still be watching.”
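To make that matching step concrete, here is a minimal conceptual sketch in Python. Ad platforms, Facebook among them, commonly accept identifiers like emails in normalized, SHA-256-hashed form and match them against hashes of identifiers already tied to accounts; everything else below (the function names, the profile index, the example event) is invented for illustration and is not Facebook’s actual pipeline.

```python
import hashlib
from typing import Optional

def normalize_and_hash(email: str) -> str:
    """Trim, lowercase, then SHA-256: the usual normalization for hashed-email matching."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# Platform-side index mapping hashed identifiers to internal profile IDs.
# (Invented data: a real platform holds such mappings for its users.)
profile_index = {
    normalize_and_hash("jane.doe@example.com"): "profile-12345",
}

def record_offsite_event(hashed_email: str, event: str) -> Optional[str]:
    """Attach an off-platform event (a purchase, a page view) to a known profile."""
    profile_id = profile_index.get(hashed_email)
    if profile_id is not None:
        print(f"Logged {event!r} to {profile_id}")
    return profile_id

# A partner website reports that this visitor browsed paint products.
record_offsite_event(normalize_and_hash(" Jane.Doe@Example.COM "), "viewed: paint")
```

Because the hash of a normalized email is identical across every site that computes it, the same person can be recognized wherever they hand over that email, with or without browser cookies.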

It’s a lucrative business, the Post points out. “In 2013, the average American’s data was worth about $19 per year in advertising sales to Facebook, according to its financial statements. In 2020, your data was worth $164 per year.”

What does Facebook know about your off-Facebook activity? You can find out at this URL.

If you just want to stop Facebook from giving this information to advertisers, the right side of that page has an option to “Clear History — Disconnect off-Facebook activity history from your account.” But you then also have to click “More Options” and then “Manage Future Activity” to stop Facebook from later matching up more of your off-Facebook activity to your profile for advertisers.

If you try to select it, Facebook warns what you’ll be missing — that “Keeping your future off-Facebook activity saved with your account allows us to personalize your experience.” Proceeding anyway then generates a popup reminding you that “We’ll still receive activity from the businesses and organizations you visit. It may be used for measurement purposes and to make improvements to our ads systems, but it will be disconnected from your account.”

And apparently your activity on Oculus isn’t covered, and will still remain connected to your Facebook account.

10 US Government Agencies Plan Expanded Use of Facial Recognition

The Washington Post reports that the U.S. government “plans to expand its use of facial recognition to pursue criminals and scan for threats, an internal survey has found, even as concerns grow about the technology’s potential for contributing to improper surveillance and false arrests.”
Ten federal agencies — the departments of Agriculture, Commerce, Defense, Homeland Security, Health and Human Services, Interior, Justice, State, Treasury and Veterans Affairs — told the Government Accountability Office they intend to grow their facial recognition capabilities by 2023, the GAO said in a report posted to its website Tuesday. Most of the agencies use face-scanning technology so employees can unlock their phones and laptops or access buildings, though a growing number said they are using the software to track people and investigate crime. The Department of Agriculture, for instance, said it wants to use it to monitor live surveillance feeds at its facilities and send an alert if it spots any faces also found on a watch list…

The GAO said in June that 20 federal agencies have used either internally developed or privately run facial recognition software, even though 13 of those agencies said they did not “have awareness” of which private systems they used and had therefore “not fully assessed the potential risks … to privacy and accuracy.” In the current report, the GAO said several agencies, including the Justice Department, the Air Force and Immigration and Customs Enforcement, reported that they had used facial recognition software from Clearview AI, a firm that has faced lawsuits from privacy groups and legal demands from Google and Facebook after it copied billions of facial images from social media without their approval… Many federal agencies said they used the software by requesting that officials in state and local governments run searches on their own software and report the results. Many searches were routed through a nationwide network of “fusion centers,” which local police and federal investigators use to share information on potential threats or terrorist attacks…

U.S. Customs and Border Protection officials, who have called the technology “the way of the future,” said earlier this month that they had run facial recognition scans on more than 88 million travelers at airports, cruise ports and border crossings. The systems, the officials said, have detected 850 impostors since 2018 — or about 1 in every 103,000 faces scanned.
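At its core, the watch-list alerting the Department of Agriculture describes reduces to comparing face embeddings: a recognition model turns each face into a numeric vector, and a live face triggers an alert when it lands close enough to an enrolled one. Here is a toy sketch of just that comparison step, treating the model as a black box; the vectors, names and 0.8 threshold below are all invented, and real systems are proprietary.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two embeddings; 1.0 means identical direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical watch list: embeddings computed from enrolled photos.
watch_list = {
    "subject-001": np.array([0.11, 0.92, 0.31]),
    "subject-002": np.array([0.85, 0.10, 0.40]),
}

def check_face(live_embedding: np.ndarray, threshold: float = 0.8) -> list:
    """Return watch-list entries whose similarity to the live face exceeds the threshold."""
    return [(name, s) for name, ref in watch_list.items()
            if (s := cosine_similarity(live_embedding, ref)) >= threshold]

# Embedding extracted from a frame of a surveillance feed (also invented).
alerts = check_face(np.array([0.12, 0.88, 0.28]))
if alerts:
    print("ALERT:", alerts)
```

The threshold choice dominates such systems: set it too low and operators are flooded with false alerts; set it too high and genuine matches slip past.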

QR codes replace service staff as pandemic spurs automation in US

American workers in manufacturing plants and distribution centres have long worried that their employers would find ways to replace them with robots and artificial intelligence, but the Covid-19 crisis has brought that threat to service workers, too. Businesses are increasingly turning to automated tools for customer service tasks long done by low-wage staff. But rather than robots, it is the ubiquitous QR matrix bar codes that are replacing humans [Editor’s note: the link may be paywalled]. Financial Times:
Many restaurants have begun to experiment with QR codes and order management systems such as Toast that allow diners to order food to their table from their phones instead of with human servers. Grocery stores have increased their investments in self-checkout kiosks that replace human cashiers, and more convenience stores including Circle K are experimenting with the computer vision technology pioneered by Amazon Go to allow customers to make purchases without standing in a checkout line at all. The shifts mean that some of the 1.7m leisure and hospitality jobs and 270,000 retail jobs the US economy has lost since its February 2020 high are unlikely to return.
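Part of why table-side QR ordering spread so quickly is how little technology it requires: each table’s code just encodes an ordering URL carrying a table identifier, and the order-management system routes the order accordingly. A sketch using the third-party Python qrcode library; the domain and URL scheme are made up for illustration, and real systems such as Toast generate their own links.

```python
# pip install "qrcode[pil]"
import qrcode

# One QR code per table, each encoding the same menu URL with a
# different table parameter. (Hypothetical URL for illustration.)
for table in range(1, 6):
    url = f"https://order.example-restaurant.com/menu?table={table}"
    img = qrcode.make(url)            # build the QR code as a PIL image
    img.save(f"table-{table}.png")    # print the PNG and tape it to the table
```

A diner who scans the code lands on the menu with the table number already filled in, which is the entire interaction the human server used to provide.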

Seemingly Normal Lightning Cable Will Leak Everything You Type

It looks like a Lightning cable, it works like a Lightning cable, and I can use it to connect my keyboard to my Mac. But it is actually a malicious cable that can record everything I type, including passwords, and wirelessly send that data to a hacker who could be more than a mile away. This is the new version of a series of penetration testing tools made by the security researcher known as MG. MG previously demoed an earlier version of the cables for Motherboard at the DEF CON hacking conference in 2019. Shortly after that, MG said he had successfully moved the cables into mass production, and cybersecurity vendor Hak5 started selling the cables. But the more recent cables come in new physical variations, including Lightning to USB-C, and include more capabilities for hackers to play with.

“There were people who said that Type C cables were safe from this type of implant because there isn’t enough space. So, clearly, I had to prove that wrong. :),” MG told Motherboard in an online chat. The OMG Cables, as they’re called, work by creating a Wi-Fi hotspot of their own that a hacker can connect to from their own device. From there, an interface in an ordinary web browser lets the hacker start recording keystrokes. The malicious implant itself takes up around half the length of the plastic shell, MG said. MG said that the new cables now have geofencing features, where a user can trigger or block the device’s payloads based on the physical location of the cable. “It pairs well with the self-destruct feature if an OMG Cable leaves the scope of your engagement and you do not want your payloads leaking or being accidentally run against random computers,” he said. “We tested this out in downtown Oakland and were able to trigger payloads at over 1 mile,” he added. He said that the Type C cables allow the same sort of attacks to be carried out against smartphones and tablets. Other improvements include the ability to change keyboard mappings and to forge the identity of specific USB devices, such as pretending to be a device that leverages a particular vulnerability on a system.

Weapon That “Stops You From Talking”

The U.S. Navy has invented an electronic device designed to stop people from talking. Interesting Engineering reports:
A form of non-lethal weapon, the new electronic device effectively repeats a speaker’s own voice back at them, and only them, while they attempt to talk. It was developed and patented back in 2019 but has only recently been discovered, according to a report by New Scientist. The main idea of the weapon is to disorient a target so much that they will be unable to communicate effectively with other people.

Called acoustic hailing and disruption (AHAD), the weapon is able to record speech and instantly broadcast it at a target in milliseconds. Much like an annoying sibling, this action will disrupt the target’s concentration, and, in theory, discourage them from continuing to speak. As for the technical details of the device, a quick review of its patent is very interesting indeed. “According to an illustrative embodiment of the present disclosure, a target’s speech is directed back to them twice, once immediately and once after a short delay. This delay creates delayed auditory feedback (DAF), which alters the speaker’s normal perception of their own voice. In normal speech, a speaker hears their own words with a slight delay, and the body is accustomed to this feedback. By introducing another audio feedback source with a sufficiently long delay, the speaker’s concentration is disrupted and it becomes difficult to continue speaking.”
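The delayed-auditory-feedback effect in the patent excerpt is simple to prototype with a microphone and headphones: capture speech and replay it a fraction of a second later. Below is a toy sketch using the third-party sounddevice library; the 200 ms delay and other parameters are my own illustrative choices rather than the AHAD device’s specifications, and unlike the patent it omits the immediate playback, since a speaker wearing headphones already hears their own voice.

```python
# pip install sounddevice numpy  -- wear headphones to avoid a feedback loop
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 16_000                      # samples per second
DELAY_SECONDS = 0.2                       # long enough to disrupt fluent speech
delay_samples = int(SAMPLE_RATE * DELAY_SECONDS)

buffer = np.zeros((delay_samples, 1), dtype="float32")  # ring buffer of past audio
write_pos = 0

def callback(indata, outdata, frames, time, status):
    """Output the microphone signal delayed by exactly delay_samples samples."""
    global write_pos
    for i in range(frames):
        outdata[i] = buffer[write_pos]    # oldest sample: the delayed voice
        buffer[write_pos] = indata[i]     # overwrite with the current sample
        write_pos = (write_pos + 1) % delay_samples

with sd.Stream(samplerate=SAMPLE_RATE, channels=1, callback=callback):
    print("Speak; you will hear yourself 200 ms late. Ctrl+C to stop.")
    sd.sleep(10_000)                      # run for ten seconds
```

Try reading a sentence aloud while it runs: most people stumble within a few words, which is exactly the disruption the Navy’s device aims to produce at a distance.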

Investigation Reveals Widespread Cellphone Surveillance of the Innocent

Cellphones “can be transformed into surveillance devices,” writes the Guardian, reporting startling new details about which innocent people are still being surveilled (as part of a collaborative reporting project with 16 other media outlets led by the French nonprofit Forbidden Stories).

Long-time Slashdot reader shanen shared the newspaper’s critique of a “privatised government surveillance industry” that’s made NSO a billion-dollar company, thanks to its phone-penetrating spy software Pegasus:
[NSO] insists only carefully vetted government intelligence and law enforcement agencies can use Pegasus, and only to penetrate the phones of “legitimate criminal or terror group targets”. Yet in the coming days the Guardian will be revealing the identities of many innocent people who have been identified as candidates for possible surveillance by NSO clients in a massive leak of data… The presence of their names on this list indicates the lengths to which governments may go to spy on critics, rivals and opponents.

First we reveal how journalists across the world were selected as potential targets by these clients prior to a possible hack using NSO surveillance tools. Over the coming week we will be revealing the identities of more people whose phone numbers appear in the leak. They include lawyers, human rights defenders, religious figures, academics, businesspeople, diplomats, senior government officials and heads of state. Our reporting is rooted in the public interest. We believe the public should know that NSO’s technology is being abused by the governments who license and operate its spyware.

But we also believe it is in the public interest to reveal how governments look to spy on their citizens and how seemingly benign processes such as HLR lookups [which track the general locations of cellphone users] can be exploited in this environment.

It is not possible to know without forensic analysis whether the phone of someone whose number appears in the data was actually targeted by a government or whether it was successfully hacked with NSO’s spyware. But when our technical partner, Amnesty International’s Security Lab, conducted forensic analysis on dozens of iPhones that belonged to potential targets at the time they were selected, they found evidence of Pegasus activity in more than half.

The investigators say that potential targets included nearly 200 journalists around the world, including numerous reporters from CNN, the Associated Press, Voice of America, the New York Times, the Wall Street Journal, Bloomberg News, Le Monde in France, and even the editor of the Financial Times.

In addition, the investigators say they found evidence that Pegasus software had been installed on the phone of the fiancée of murdered Saudi journalist Jamal Khashoggi. NSO denied this to the Washington Post, insisting that it simply licenses its software to clients and that the company “has no insight” into those clients’ specific intelligence activities.

The Washington Post reports that Amnesty’s Security Lab found evidence of Pegasus attacks on 37 of 67 smartphones from the list which they tested. But beyond that “for the remaining 30, the tests were inconclusive, in several cases because the phones had been replaced. Fifteen of the phones were Android devices, none of which showed evidence of successful infection. However, unlike iPhones, Androids do not log the kinds of information required for Amnesty’s detective work.”

Familiar privacy measures like strong passwords and encryption offer little help against Pegasus, which can attack phones without any warning to users. It can read anything on a device that a user can, while also stealing photos, recordings, location records, communications, passwords, call logs and social media posts. Spyware also can activate cameras and microphones for real-time surveillance.

Edward Snowden Calls For Spyware Trade Ban Amid Pegasus Revelations

Snowden, who in 2013 blew the whistle on the secret mass surveillance programs of the US National Security Agency, described for-profit malware developers as “an industry that should not exist.” He made the comments in an interview with the Guardian after the first revelations from the Pegasus project, a journalistic investigation by a consortium of international media organizations into the NSO Group and its clients. […] Snowden said the consortium’s findings illustrated how commercial malware had made it possible for repressive regimes to place vastly more people under the most invasive types of surveillance. For traditional police operations to plant bugs or wiretap a suspect’s phone, law enforcement would need to “break into somebody’s house, or go to their car, or go to their office, and we’d like to think they’ll probably get a warrant,” he said. But commercial spyware made it cost-efficient for targeted surveillance against vastly more people. “If they can do the same thing from a distance, with little cost and no risk, they begin to do it all the time, against everyone who’s even marginally of interest,” he said. “If you don’t do anything to stop the sale of this technology, it’s not just going to be 50,000 targets. It’s going to be 50 million targets, and it’s going to happen much more quickly than any of us expect.”

Part of the problem arose from the fact that different people’s mobile phones were functionally identical to one another, he said. “When we’re talking about something like an iPhone, they’re all running the same software around the world. So if they find a way to hack one iPhone, they’ve found a way to hack all of them.” He compared companies commercializing vulnerabilities in widely used mobile phone models to an industry of “infectioneers” deliberately trying to develop new strains of disease. “It’s like an industry where the only thing they did was create custom variants of Covid to dodge vaccines,” he said. “Their only products are infection vectors. They’re not security products. They’re not providing any kind of protection, any kind of prophylactic. They don’t make vaccines — the only thing they sell is the virus.”

Snowden said commercial malware such as Pegasus was so powerful that ordinary people could in effect do nothing to stop it. Asked how people could protect themselves, he said: “What can people do to protect themselves from nuclear weapons? There are certain industries, certain sectors, from which there is no protection, and that’s why we try to limit the proliferation of these technologies. We don’t allow a commercial market in nuclear weapons.” He said the only viable solution to the threat of commercial malware was an international moratorium on its sale. “What the Pegasus project reveals is the NSO Group is really representative of a new malware market, where this is a for-profit business,” he said. “The only reason NSO is doing this is not to save the world, it’s to make money.” He said a global ban on the trade in infection vectors would prevent commercial abuse of vulnerabilities in mobile phones, while still allowing researchers to identify and fix them. “The solution here for ordinary people is to work collectively. This is not a problem that we want to try and solve individually, because it’s you versus a billion dollar company,” he said. “If you want to protect yourself you have to change the game, and the way we do that is by ending this trade.”

Facebook is Ditching Plans To Make an Interface That Reads the Brain

The spring of 2017 may be remembered as the coming-out party for Big Tech’s campaign to get inside your head. That was when news broke of Elon Musk’s new brain-interface company, Neuralink, which is working on how to stitch thousands of electrodes into people’s brains. Days later, Facebook joined the quest when it announced that its secretive skunkworks, named Building 8, was attempting to build a headset or headband that would allow people to send text messages by thinking — tapping them out at 100 words per minute. The company’s goal was a hands-free interface anyone could use in virtual reality. “What if you could type directly from your brain?” asked Regina Dugan, a former DARPA officer who was then head of the Building 8 hardware division. “It sounds impossible, but it’s closer than you realize.”

Now the answer is in — and it’s not close at all. Four years after announcing a “crazy amazing” project to build a “silent speech” interface using optical technology to read thoughts, Facebook is shelving the project, saying consumer brain-reading still remains very far off. In a blog post, Facebook said it is discontinuing the project and will instead focus on an experimental wrist controller for virtual reality that reads muscle signals in the arm. “While we still believe in the long-term potential of head-mounted optical [brain-computer interface] technologies, we’ve decided to focus our immediate efforts on a different neural interface approach that has a nearer-term path to market,” the company said. Facebook’s brain-typing project had led it into uncharted territory — including funding brain surgeries at a California hospital and building prototype helmets that could shoot light through the skull — and into tough debates around whether tech companies should access private brain information. Ultimately, though, the company appears to have decided the research simply won’t lead to a product soon enough.