Resources

During COP26, Facebook served ads with climate falsehoods, skepticism

Days after Facebook’s vice president of global affairs, Nick Clegg, touted the company’s efforts to combat climate misinformation in a blog post as the Glasgow summit began, conservative media network Newsmax ran an ad on Facebook (FB.O) that called man-made global warming a “hoax.” The ad, which had multiple versions, garnered more than 200,000 views.

In another, conservative commentator Candace Owens said, “apparently we’re just supposed to trust our new authoritarian government” on climate science, while a U.S. libertarian think-tank ran an ad on how “modern doomsayers” had been wrongly predicting climate crises for decades.

Newsmax, Owens and the Daily Wire, which paid for the ad from Owens’s page, did not respond to requests for comment.

Facebook, which recently changed its name to Meta, does not have a specific policy on climate misinformation in ads or unpaid posts. Alphabet’s Google said last month it would no longer allow ads that contradict scientific consensus on climate change on YouTube and its other services, though it would allow content that discusses false claims.

How Facebook and Google fund global misinformation

MIT’s Technology Review shares data from a Facebook-run tool called CrowdTangle. It shows that by 2018 in the nation of Myanmar (population: 53 million), “[a]ll the engagement had instead gone to fake news and clickbait websites.”

“In a country where Facebook is synonymous with the internet, the low-grade content overwhelmed other information sources.”
[T]he sheer volume of fake news and clickbait acted like fuel on the flames of already dangerously high ethnic and religious tensions. It shifted public opinion and escalated the conflict, which ultimately led to the death of 10,000 Rohingya, by conservative estimates, and the displacement of 700,000 more.

In 2018, a United Nations investigation determined that the violence against the Rohingya constituted a genocide and that Facebook had played a “determining role” in the atrocities. Months later, Facebook admitted it hadn’t done enough “to help prevent our platform from being used to foment division and incite offline violence.”

Over the last few weeks, the revelations from the Facebook Papers, a collection of internal documents provided to Congress and a consortium of news organizations by whistleblower Frances Haugen, have reaffirmed what civil society groups have been saying for years: Facebook’s algorithmic amplification of inflammatory content, combined with its failure to prioritize content moderation outside the US and Europe, has fueled the spread of hate speech and misinformation, dangerously destabilizing countries around the world.

But there’s a crucial piece missing from the story. Facebook isn’t just amplifying misinformation.

The company is also funding it.

An MIT Technology Review investigation, based on expert interviews, data analyses, and documents that were not included in the Facebook Papers, has found that Facebook and Google are paying millions of ad dollars to bankroll clickbait actors, fueling the deterioration of information ecosystems around the world.

Facebook pays them for permission to open their content within Facebook’s app (where Facebook controls the advertising) rather than having users click through to the publisher’s own website, reports Technology Review:
Early on, Facebook performed little quality control on the types of publishers joining the program. The platform’s design also didn’t sufficiently penalize users for posting identical content across Facebook pages — in fact, it rewarded the behavior. Posting the same article on multiple pages could as much as double the number of users who clicked on it and generated ad revenue. Clickbait farms around the world seized on this flaw as a strategy — one they still use today… Clickbait actors cropped up in Myanmar overnight. With the right recipe for producing engaging and evocative content, they could generate thousands of U.S. dollars a month in ad revenue, or 10 times the average monthly salary — paid to them directly by Facebook. An internal company document, first reported by MIT Technology Review in October, shows that Facebook was aware of the problem as early as 2019… At one point, as many as 60% of the domains enrolled in Instant Articles were using the spammy writing tactics employed by clickbait farms, the report said…

75% of users who were exposed to clickbait content from farms run in Macedonia and Kosovo had never followed any of the pages. Facebook’s content-recommendation system had instead pushed it into their news feeds.

Technology Review notes that Facebook now pays billions of dollars to the publishers in their program. It’s a long and detailed article, which ultimately concludes that the problem “is now happening on a global scale.”
Thousands of clickbait operations have sprung up, primarily in countries where Facebook’s payouts provide a larger and steadier source of income than other forms of available work. Some are teams of people while others are individuals, abetted by cheap automated tools that help them create and distribute articles at mass scale…

Google is also culpable. Its AdSense program fueled the Macedonia- and Kosovo-based farms that targeted American audiences in the lead-up to the 2016 presidential election. And it’s AdSense that is incentivizing new clickbait actors on YouTube to post outrageous content and viral misinformation.

Reached for comment, a Facebook spokesperson told Technology Review that the publication had misunderstood the issue. The spokesperson added: “we’ve invested in building new expert-driven and scalable solutions to these complex issues for many years, and will continue doing so.”

Google’s spokesperson confirmed that the examples cited in the article violated the company’s policies and said the content had been removed, adding: “We work hard to protect viewers from clickbait or misleading content across our platforms and have invested heavily in systems that are designed to elevate authoritative information.”

Uber and Lyft equals more pollution

The efficiencies of ride hailing were supposed to all but end car ownership; instead vehicle sales are on the rise again this year, after a down year in 2020. There is also evidence that Uber and Lyft may actually spur an increase in car sales in cities where they begin operating.

Passengers have largely eschewed pooled rides and public transit in favor of private trips, leading to downtown bottlenecks in cities like San Francisco.

Drivers circling while waiting for, or driving to fetch, their next passenger is known as deadheading. Lyft and Uber estimated in 2019 that deadheading accounts for about 40 percent of ride-share miles driven in six American cities. Researchers at Carnegie Mellon estimated that this driving without a passenger leads to a roughly 20 percent overall increase in fuel consumption and greenhouse gas emissions compared with trips made by personal vehicles.

The researchers also found that switching from a private car to on-demand rides, like an Uber or Lyft, increased the external costs of a typical trip by 30 to 35 percent, or roughly 35 cents on average, because of the added congestion, collisions and noise from ridesharing services.
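The arithmetic implied by these figures can be sketched quickly. The numbers below are the article’s; the breakdown is an illustrative simplification, and the Carnegie Mellon 20 percent estimate involves more factors than mileage alone.

```python
# Back-of-the-envelope arithmetic behind the deadheading and
# external-cost figures quoted above.

DEADHEAD_SHARE = 0.40  # Lyft/Uber's estimated fraction of no-passenger miles

def total_miles_per_passenger_mile(deadhead_share):
    """If 40% of all ride-share miles carry no passenger, each passenger
    mile requires 1 / (1 - 0.40), i.e. about 1.67 total miles driven."""
    return 1.0 / (1.0 - deadhead_share)

def implied_baseline_external_cost(added_cents=35, low=0.30, high=0.35):
    """A ~35-cent increase that represents a 30-35% rise implies a
    baseline external cost of roughly 100-117 cents per trip."""
    return added_cents / high, added_cents / low

print(total_miles_per_passenger_mile(DEADHEAD_SHARE))  # ~1.67
print(implied_baseline_external_cost())                # (~100.0, ~116.7)
```

So a 40 percent deadhead share means roughly two-thirds more miles driven per passenger mile, and the 30 to 35 percent external-cost increase puts the baseline external cost of a trip at roughly a dollar.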

Wind and Solar provide 10% of Global Electricity while coal emissions jump to new highs

… wind and solar energy combined to generate more than 10 percent of the world’s electricity in 2021 — though coal-fired power plant generation and emissions jumped to new highs in the same period, too.

How Far Can You Go to Resist Filmers of a Viral Video?

Recently I saw eight seconds of video that capture this problem in its most extreme form. A boy and a girl, who appear to be of high school age, are walking into Panda Express when a third teenager with blond hair stops them in the doorway. He brings with him the energy of the hustler or the man-on-the-street interview host, and the couple are temporarily frozen, caught between suspicion and courtesy. It is a space where things could go either way. “Hey, hold on, excuse me — I have something really important to ask you,” the blond kid says to the girl. “The moment I saw you, my eyes were just — oh, my God, I love you, please could — bleagh!” The “bleagh” is the sound he makes when the other boy punches him in the face…

But perhaps what is most remarkable is the distinct moment of resignation that he and his girlfriend share when they realize what the blond kid is doing. Around the time he gets to “my eyes,” she turns away and steps inside, while Overalls Kid calmly sets his smoothie on the ground in preparation to jack his interlocutor in the mouth. The sound of the impact is meaty. The video ends with both of them stumbling out of the frame, Blond Kid reeling and Overalls Kid winding up for another blow. It’s an efficiently cut bit of action that rewards repeat viewings, but it left me with one question: How do we feel about that punch?

I think we can agree that a punch would not be justified if Blond Kid were professing his love sincerely. But he isn’t. He’s professing his love while an unidentified fourth party records the whole thing, presumably as part of the “hit on another guy’s girlfriend” internet challenge. In this context, he is using other people as props, a bad behavior that society should discourage. But what are we willing to condone in order to discourage it? Our collective culture has just begun to decide how we feel about this kind of activity, which has been invented by new technology and will only become more prevalent in the future.

The article ultimately argues that internet video apps belong to Generation Z “the way heroin belongs to junkies.” Seen from this perspective, Overalls Kid is part of a history of violent resistance to foreign influence that Americans will recognize in everything from the Boston Tea Party to Al Qaeda to the Ewoks.

“Our reams of fretting essays about how much the kids love phones tend to ignore who gave them phones in the first place.”

Facebook Has Trackers in 25% of Websites and 61% of the Most Popular Apps

Megan Borovicka forgot all about her Facebook account after 2013, reports the Washington Post. “But Facebook never forgot about her.”
The 42-year-old Oakland, California, lawyer never picked any “friends,” posted any status updates, liked any photos or even opened the Facebook app on her phone. Yet over the last decade, Facebook has used an invisible data vacuum to suction up very specific details about her life — from her brand of underwear to where she received her paycheck… It isn’t just the Facebook app that’s gobbling up your information. Facebook is so big, it has convinced millions of other businesses, apps and websites to also snoop on its behalf. Even when you’re not actively using Facebook. Even when you’re not online. Even, perhaps, if you’ve never had a Facebook account.

Here’s how it works: Facebook provides its business partners tracking software they embed in apps, websites and loyalty programs. Any business or group that needs to do digital advertising has little choice but to feed your activities into Facebook’s vacuum: your grocer, politicians and, yes, even the paywall page for this newspaper’s website. Behind the scenes, Facebook takes in this data and tries to match it up to your account. It sits under your name in a part of your profile your friends can’t see, but Facebook uses to shape your experience online. Among the 100 most popular smartphone apps, you can find Facebook software in 61 of them, app research firm Sensor Tower told me. Facebook also has trackers in about 25 percent of websites, according to privacy software maker Ghostery…

Facebook got a notice when I opened Hulu to watch TV. Facebook knew when I went shopping for paint, a rocking chair and fancy beans. Facebook learned I read the websites What To Expect, Lullaby Trust and Happiest Baby. Over two weeks, Facebook tracked me on at least 95 different apps, websites and businesses, and those are just the ones I know about. It was as if Facebook had hired a private eye to prepare a dossier about my life. Why does Facebook think that’s okay? The company emailed me answers about how its tracking technology works, but declined my requests to interview its chief privacy officer or other executives about its alleged monopoly….

Who in their right mind thought they were signing up for this much surveillance back when they first joined Facebook?

The article points out that in 2014 Facebook began allowing its advertisers to target users based on websites they’d visited…and now also gathers more data about users from other companies. And “While many companies were using browser cookies, which could be easily cleared or blocked, Facebook tied what it learned to real identities — the names on our Facebook profiles.” And beyond that, companies “can report other identifying information to Facebook like your email to help it figure out who you are… If you’ve never had a Facebook account at all? It may still be watching.”
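That “other identifying information” handoff typically works through hashed match keys: the partner normalizes an email address and sends a one-way hash, which the platform compares against hashes of its own users’ emails. A minimal sketch of the normalization-and-hash step (the function name is illustrative, not Facebook’s actual API):

```python
import hashlib

def match_key(email: str) -> str:
    """Produce a SHA-256 match key from an email address.

    Ad platforms typically require identifiers to be normalized
    (trimmed, lowercased) before hashing, so the same person hashes
    to the same key no matter which partner submits the address."""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Two partners submitting differently formatted copies of one address
# still produce the same key, which is what makes cross-site matching work.
print(match_key(" Alice@Example.com ") == match_key("alice@example.com"))  # True
```

The hash keeps the raw email out of transit, but it does nothing to prevent matching, since the platform can hash the emails it already holds and compare.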

It’s a lucrative business, the Post points out. “In 2013, the average American’s data was worth about $19 per year in advertising sales to Facebook, according to its financial statements. In 2020, your data was worth $164 per year.”
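Those two figures imply roughly a ninefold increase over seven years, or about 36 percent compound annual growth; a quick check:

```python
# Implied growth of per-user ad value: $19 (2013) -> $164 (2020).
value_2013, value_2020, years = 19, 164, 7

multiple = value_2020 / value_2013           # ~8.6x
annual_growth = multiple ** (1 / years) - 1  # ~0.36, i.e. ~36% per year

print(round(multiple, 1), round(annual_growth * 100, 1))
```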

What does Facebook know about your off-Facebook activity? You can find out at this URL.

If you just want to stop them from giving this information to advertisers, the right side of that page has an option to “Clear History — Disconnect off-Facebook activity history from your account.” But you then have to also click “More Options” and then “Manage Future Activity” to also stop them from later matching up more of your off-Facebook activity to your profile for advertisers.

If you try to select it, Facebook warns what you’ll be missing — that “Keeping your future off-Facebook activity saved with your account allows us to personalize your experience.” And proceeding anyway then generates a popup reminding you that “We’ll still receive activity from the businesses and organizations you visit. It may be used for measurement purposes and to make improvements to our ads systems, but it will be disconnected from your account.”

And apparently your activity on Oculus isn’t covered, and will still remain connected to your Facebook account.

10 US Government Agencies Plan Expanded Use of Facial Recognition

The Washington Post reports that the U.S. government “plans to expand its use of facial recognition to pursue criminals and scan for threats, an internal survey has found, even as concerns grow about the technology’s potential for contributing to improper surveillance and false arrests.”
Ten federal agencies — the departments of Agriculture, Commerce, Defense, Homeland Security, Health and Human Services, Interior, Justice, State, Treasury and Veterans Affairs — told the Government Accountability Office they intend to grow their facial recognition capabilities by 2023, the GAO said in a report posted to its website Tuesday. Most of the agencies use face-scanning technology so employees can unlock their phones and laptops or access buildings, though a growing number said they are using the software to track people and investigate crime. The Department of Agriculture, for instance, said it wants to use it to monitor live surveillance feeds at its facilities and send an alert if it spots any faces also found on a watch list…

The GAO said in June that 20 federal agencies have used either internally developed or privately run facial recognition software, even though 13 of those agencies said they did not “have awareness” of which private systems they used and had therefore “not fully assessed the potential risks … to privacy and accuracy.” In the current report, the GAO said several agencies, including the Justice Department, the Air Force and Immigration and Customs Enforcement, reported that they had used facial recognition software from Clearview AI, a firm that has faced lawsuits from privacy groups and legal demands from Google and Facebook after it copied billions of facial images from social media without their approval… Many federal agencies said they used the software by requesting that officials in state and local governments run searches on their own software and report the results. Many searches were routed through a nationwide network of “fusion centers,” which local police and federal investigators use to share information on potential threats or terrorist attacks…

U.S. Customs and Border Protection officials, who have called the technology “the way of the future,” said earlier this month that they had run facial recognition scans on more than 88 million travelers at airports, cruise ports and border crossings. The systems, the officials said, have detected 850 impostors since 2018 — or about 1 in every 103,000 faces scanned.
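The quoted rate checks out against the raw figures:

```python
# Verify the "1 in every 103,000 faces scanned" figure.
scans = 88_000_000
impostors = 850

scans_per_hit = scans // impostors
print(scans_per_hit)  # 103529, i.e. about 1 in 103,000
```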

QR codes replace service staff as pandemic spurs automation in US

American workers in manufacturing plants and distribution centres have long worried that their employers would find ways to replace them with robots and artificial intelligence, but the Covid-19 crisis has brought that threat to service workers, too. Businesses are increasingly turning to automated tools for customer service tasks long done by low-wage staff. But rather than robots, it is the ubiquitous QR matrix bar codes that are replacing humans [Editor’s note: the link may be paywalled]. Financial Times:
Many restaurants have begun to experiment with QR codes and order management systems such as Toast that allow diners to order food to their table from their phones instead of with human servers. Grocery stores have increased their investments in self-checkout kiosks that replace human cashiers, and more convenience stores including Circle K are experimenting with the computer vision technology pioneered by Amazon Go to allow customers to make purchases without standing in a checkout line at all. The shifts mean that some of the 1.7m leisure and hospitality jobs and 270,000 retail jobs the US economy has lost since its February 2020 high are unlikely to return.
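Table-ordering systems like the ones described above generally work by encoding a per-table deep link in each QR code, so a scan tells the backend which restaurant and table placed the order. A sketch of that URL pattern (the parameter names here are illustrative, not Toast’s actual scheme):

```python
from urllib.parse import urlencode

def table_order_url(base: str, restaurant_id: str, table: int) -> str:
    """Build the kind of per-table URL a tabletop QR code encodes.

    The QR image itself would be rendered from this string by any QR
    library; the point is that the table's identity travels in the URL,
    replacing the server who would otherwise carry it to the kitchen."""
    return f"{base}/order?{urlencode({'restaurant': restaurant_id, 'table': table})}"

print(table_order_url("https://orders.example.com", "cafe-42", 7))
```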

Seemingly Normal Lightning Cable Will Leak Everything You Type

It looks like a Lightning cable, it works like a Lightning cable, and I can use it to connect my keyboard to my Mac. But it is actually a malicious cable that can record everything I type, including passwords, and wirelessly send that data to a hacker who could be more than a mile away. This is the new version of a series of penetration testing tools made by the security researcher known as MG. MG previously demoed an earlier version of the cables for Motherboard at the DEF CON hacking conference in 2019. Shortly after that, MG said he had successfully moved the cables into mass production, and cybersecurity vendor Hak5 started selling the cables. But the more recent cables come in new physical variations, including Lightning to USB-C, and include more capabilities for hackers to play with.

“There were people who said that Type C cables were safe from this type of implant because there isn’t enough space. So, clearly, I had to prove that wrong. :),” MG told Motherboard in an online chat. The OMG Cables, as they’re called, work by creating a Wi-Fi hotspot of their own that a hacker can connect to from another device. From there, an interface in an ordinary web browser lets the hacker start recording keystrokes. The malicious implant itself takes up around half the length of the plastic shell, MG said. MG said that the new cables now have geofencing features, where a user can trigger or block the device’s payloads based on the physical location of the cable. “It pairs well with the self-destruct feature if an OMG Cable leaves the scope of your engagement and you do not want your payloads leaking or being accidentally run against random computers,” he said. “We tested this out in downtown Oakland and were able to trigger payloads at over 1 mile,” he added. He said that the Type C cables allow the same sort of attacks to be carried out against smartphones and tablets. Various other improvements include the ability to change keyboard mappings and the ability to forge the identity of specific USB devices, such as pretending to be a device that leverages a particular vulnerability on a system.

Weapon That “Stops You From Talking”

The U.S. Navy has invented an electronic device designed to stop people from talking. Interesting Engineering reports:
A form of non-lethal weapon, the new electronic device effectively repeats a speaker’s own voice back at them, and only them, while they attempt to talk. It was developed and patented back in 2019 but has only recently been discovered, according to a report by the New Scientist. The main idea of the weapon is to disorientate a target so much that they will be unable to communicate effectively with other people.

Called acoustic hailing and disruption (AHAD), the weapon is able to record speech and instantly broadcast it at a target in milliseconds. Much like an annoying sibling, this action will disrupt the target’s concentration, and, in theory, discourage them from continuing to speak. As for the technical details of the device, a quick review of its patent is very interesting indeed. “According to an illustrative embodiment of the present disclosure, a target’s speech is directed back to them twice, once immediately and once after a short delay. This delay creates delayed auditory feedback (DAF), which alters the speaker’s normal perception of their own voice. In normal speech, a speaker hears their own words with a slight delay, and the body is accustomed to this feedback. By introducing another audio feedback source with a sufficiently long delay, the speaker’s concentration is disrupted and it becomes difficult to continue speaking.”
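The delayed-auditory-feedback effect the patent describes is easy to model on a sampled signal: mix the signal with a copy of itself shifted by a fixed delay. A minimal sketch, assuming a plain list of samples (a real implementation would operate on a live microphone stream; the 200 ms default is the delay range classically reported to disrupt fluent speech):

```python
def delayed_auditory_feedback(signal, sample_rate, delay_s=0.2, gain=1.0):
    """Mix a signal with a delayed copy of itself.

    The speaker hears their own words once immediately and once after
    delay_s seconds, which is the disruption mechanism AHAD relies on."""
    delay = int(delay_s * sample_rate)
    out = []
    for i, sample in enumerate(signal):
        echo = signal[i - delay] if i >= delay else 0.0
        out.append(sample + gain * echo)
    return out

# An impulse at t=0 reappears two samples later: 0.2 s delay at a
# (toy) 10 Hz sample rate is a 2-sample shift.
print(delayed_auditory_feedback([1.0, 0.0, 0.0, 0.0], sample_rate=10))
```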

Investigation Reveals Widespread Cellphone Surveillance of the Innocent

Cellphones “can be transformed into surveillance devices,” writes the Guardian, reporting startling new details about which innocent people are still being surveilled (as part of a collaborative reporting project with 16 other media outlets led by the French nonprofit Forbidden Stories).

Long-time Slashdot reader shanen shared the newspaper’s critique of a “privatised government surveillance industry” that’s made NSO a billion-dollar company, thanks to its phone-penetrating spy software Pegasus:
[NSO] insists only carefully vetted government intelligence and law enforcement agencies can use Pegasus, and only to penetrate the phones of “legitimate criminal or terror group targets”. Yet in the coming days the Guardian will be revealing the identities of many innocent people who have been identified as candidates for possible surveillance by NSO clients in a massive leak of data… The presence of their names on this list indicates the lengths to which governments may go to spy on critics, rivals and opponents.

First we reveal how journalists across the world were selected as potential targets by these clients prior to a possible hack using NSO surveillance tools. Over the coming week we will be revealing the identities of more people whose phone numbers appear in the leak. They include lawyers, human rights defenders, religious figures, academics, businesspeople, diplomats, senior government officials and heads of state. Our reporting is rooted in the public interest. We believe the public should know that NSO’s technology is being abused by the governments who license and operate its spyware.

But we also believe it is in the public interest to reveal how governments look to spy on their citizens and how seemingly benign processes such as HLR lookups [which track the general locations of cellphone users] can be exploited in this environment.

It is not possible to know without forensic analysis whether the phone of someone whose number appears in the data was actually targeted by a government or whether it was successfully hacked with NSO’s spyware. But when our technical partner, Amnesty International’s Security Lab, conducted forensic analysis on dozens of iPhones that belonged to potential targets at the time they were selected, they found evidence of Pegasus activity in more than half.

The investigators say that potential targets included nearly 200 journalists around the world, including numerous reporters from CNN, the Associated Press, Voice of America, the New York Times, the Wall Street Journal, Bloomberg News, Le Monde in France, and even the editor of the Financial Times.

In addition, the investigators say they found evidence the Pegasus software had been installed on the phone of the fiancée of murdered Saudi journalist Jamal Khashoggi. NSO denied this to the Washington Post, insisting that it simply licenses its software to clients and that the company “has no insight” into those clients’ specific intelligence activities.

The Washington Post reports that Amnesty’s Security Lab found evidence of Pegasus attacks on 37 of 67 smartphones from the list which they tested. But beyond that “for the remaining 30, the tests were inconclusive, in several cases because the phones had been replaced. Fifteen of the phones were Android devices, none of which showed evidence of successful infection. However, unlike iPhones, Androids do not log the kinds of information required for Amnesty’s detective work.”

Familiar privacy measures like strong passwords and encryption offer little help against Pegasus, which can attack phones without any warning to users. It can read anything on a device that a user can, while also stealing photos, recordings, location records, communications, passwords, call logs and social media posts. Spyware also can activate cameras and microphones for real-time surveillance.

Edward Snowden Calls For Spyware Trade Ban Amid Pegasus Revelations

Snowden, who in 2013 blew the whistle on the secret mass surveillance programs of the US National Security Agency, described for-profit malware developers as “an industry that should not exist.” He made the comments in an interview with the Guardian after the first revelations from the Pegasus project, a journalistic investigation by a consortium of international media organizations into the NSO Group and its clients. […] Snowden said the consortium’s findings illustrated how commercial malware had made it possible for repressive regimes to place vastly more people under the most invasive types of surveillance. For traditional police operations to plant bugs or wiretap a suspect’s phone, law enforcement would need to “break into somebody’s house, or go to their car, or go to their office, and we’d like to think they’ll probably get a warrant,” he said. But commercial spyware made it cost-efficient for targeted surveillance against vastly more people. “If they can do the same thing from a distance, with little cost and no risk, they begin to do it all the time, against everyone who’s even marginally of interest,” he said. “If you don’t do anything to stop the sale of this technology, it’s not just going to be 50,000 targets. It’s going to be 50 million targets, and it’s going to happen much more quickly than any of us expect.”

Part of the problem arose from the fact that different people’s mobile phones were functionally identical to one another, he said. “When we’re talking about something like an iPhone, they’re all running the same software around the world. So if they find a way to hack one iPhone, they’ve found a way to hack all of them.” He compared companies commercializing vulnerabilities in widely used mobile phone models to an industry of “infectioneers” deliberately trying to develop new strains of disease. “It’s like an industry where the only thing they did was create custom variants of Covid to dodge vaccines,” he said. “Their only products are infection vectors. They’re not security products. They’re not providing any kind of protection, any kind of prophylactic. They don’t make vaccines — the only thing they sell is the virus.”

Snowden said commercial malware such as Pegasus was so powerful that ordinary people could in effect do nothing to stop it. Asked how people could protect themselves, he said: “What can people do to protect themselves from nuclear weapons? There are certain industries, certain sectors, from which there is no protection, and that’s why we try to limit the proliferation of these technologies. We don’t allow a commercial market in nuclear weapons.” He said the only viable solution to the threat of commercial malware was an international moratorium on its sale. “What the Pegasus project reveals is the NSO Group is really representative of a new malware market, where this is a for-profit business,” he said. “The only reason NSO is doing this is not to save the world, it’s to make money.” He said a global ban on the trade in infection vectors would prevent commercial abuse of vulnerabilities in mobile phones, while still allowing researchers to identify and fix them. “The solution here for ordinary people is to work collectively. This is not a problem that we want to try and solve individually, because it’s you versus a billion dollar company,” he said. “If you want to protect yourself you have to change the game, and the way we do that is by ending this trade.”

Facebook is Ditching Plans To Make an Interface That Reads the Brain

The spring of 2017 may be remembered as the coming-out party for Big Tech’s campaign to get inside your head. That was when news broke of Elon Musk’s new brain-interface company, Neuralink, which is working on how to stitch thousands of electrodes into people’s brains. Days later, Facebook joined the quest when it announced that its secretive skunkworks, named Building 8, was attempting to build a headset or headband that would allow people to send text messages by thinking — tapping them out at 100 words per minute. The company’s goal was a hands-free interface anyone could use in virtual reality. “What if you could type directly from your brain?” asked Regina Dugan, a former DARPA officer who was then head of the Building 8 hardware division. “It sounds impossible, but it’s closer than you realize.”

Now the answer is in — and it’s not close at all. Four years after announcing a “crazy amazing” project to build a “silent speech” interface using optical technology to read thoughts, Facebook is shelving the project, saying consumer brain-reading still remains very far off. In a blog post, Facebook said it is discontinuing the project and will instead focus on an experimental wrist controller for virtual reality that reads muscle signals in the arm. “While we still believe in the long-term potential of head-mounted optical [brain-computer interface] technologies, we’ve decided to focus our immediate efforts on a different neural interface approach that has a nearer-term path to market,” the company said. Facebook’s brain-typing project had led it into uncharted territory — including funding brain surgeries at a California hospital and building prototype helmets that could shoot light through the skull — and into tough debates around whether tech companies should access private brain information. Ultimately, though, the company appears to have decided the research simply won’t lead to a product soon enough.

Man Wrongfully Arrested By Facial Recognition Tells Congress His Story

Michigan resident Robert Williams testified about being wrongfully arrested by Detroit Police in an effort to urge Congress to pass legislation against the use of facial recognition technology. Williams’ testimony was part of a hearing held by the House of Representatives’ subcommittee on crime, terrorism, and homeland security, which dealt with how law enforcement uses the highly controversial surveillance technology. Congress recently introduced the Facial Recognition and Biometric Technology Moratorium Act, which would indefinitely ban its use by law enforcement. Williams was wrongfully arrested in 2020 for felony larceny after the Detroit Police Department’s facial recognition software misidentified him based on a grainy image from surveillance footage. He was then picked from a photo lineup by the store security guard, who wasn’t actually present for the incident. According to his testimony, Williams was detained for thirty hours and was not given any food or water. […]

Research has repeatedly shown that facial recognition technology is fundamentally biased against women and people of color, leading to errors like this. Privacy advocates have argued that, even when the technology works properly, facial recognition systems disproportionately target communities of color, creating further pretext for police intervention. […] “Large scale adoption of this technology would inject further inequity into a system at a time when we should be moving to make the criminal justice system more equitable,” Representative Sheila Jackson Lee (TX-18) said during the hearing. The subcommittee also referenced a recent study from the U.S. Government Accountability Office that reported that 20 federal agencies used facial recognition software last year. Six federal agencies, including the FBI and the U.S. Postal Service, reported using it during the 2020 Black Lives Matter protests that followed the police murder of George Floyd.

Robert Williams is just one of many people impacted by this technology’s errors and biases. Williams is now represented by the ACLU and is suing the Detroit Police Department for damages and policy changes to prohibit the use of facial recognition technology. So far, the technology has been banned statewide in Vermont and Virginia, as well as in 20 cities across the US. “Mr. Williams deserved better from the law enforcement agencies entrusted to use a technology that we all know is less accurate when applied to citizens who look like him,” House Judiciary Committee Chairman Jerrold Nadler (D-NY) said in his testimony.

Handwriting Is Better Than Typing When Learning a New Language, Study Finds

Researchers tasked 42 adult volunteers with learning the Arabic alphabet from scratch: some through writing it out on paper, some through typing it out on a keyboard, and some through watching and responding to video instructions. Those in the handwriting group not only learned the unfamiliar letters more quickly, but they were also better able to apply their new knowledge in other areas — by using the letters to make new words and to recognize words they hadn’t seen before, for example. While writing, typing, and visual learning were effective at teaching participants to recognize Arabic letters — learners made very few mistakes after six exercise sessions — on average, the writing group needed fewer sessions to get to a good standard.

Researchers then tested the groups to see how the learning could be generalized. In every follow-up test, using skills they hadn’t been trained on, the writing group performed the best: naming letters, writing letters, spelling words, and reading words. The research shows that the benefits of teaching through handwriting go beyond better penmanship: There are also advantages in other areas of language learning. It seems as though the knowledge gets more firmly embedded through writing.

Amazon Wants To Monitor You In Your Sleep, For Your Benefit

The Federal Communications Commission on Friday granted Amazon approval to use a radar sensor to sense motion and “enable contactless sleep tracing functionalities.” Amazon on June 22 asked the FCC, which regulates airwave uses, for permission to market a device that uses radar. The technology captures movement in three dimensions, enabling a user to control its features through simple gestures and movements, the company said in a filing. The capability, according to Amazon, could help people “with mobility, speech, or tactile impairments,” and it could monitor sleep with a high degree of precision.

“The use of Radar Sensors in sleep tracking could improve awareness and management of sleep hygiene, which in turn could produce significant health benefits for many Americans,” Amazon said in its filing. “Radar Sensors will allow consumers to recognize potential sleep issues.” Amazon didn’t immediately respond to a request for comment. The company didn’t fully describe the device in its filing, but did say it would not be a mobile device. The FCC earlier granted similar permission to Google for radar to enable touchless control of Pixel smartphones, the agency said in its letter approving Amazon’s request.

Uber and Lyft Can’t Find Drivers Because Gig Work Sucks

You may have noticed recently that an Uber ride is more expensive than it used to be. As ride-hail companies Uber and Lyft hike prices to record heights during the COVID-19 pandemic, much commentary has settled on explaining this as a consequence of a “labor shortage” largely motivated by a lack of proper financial incentives. Drivers, the story goes, saw the new cash bonuses offered by companies to lure workers back as insufficient. Some, perhaps, decided they were not worth the risk of getting infected with COVID-19 or one of its emerging variants, while other analyses suggested drivers were content with living on stimulus funds rather than money from driving. At the same time, the firms began curtailing subsidies that kept prices low enough to attract riders and work towards monopoly. Together, this has left us with a sudden and massive spike in ride-hail prices; Gridwise, a ride-hail driver assistance app, estimated that Uber has increased its prices by 79 percent since the second quarter of 2019.

While Uber and Lyft are reportedly thinking about offering new perks such as education, career, and expense programs, analysts admit these don’t strike at core problems with the gig economy that were driving workers away before COVID-19 hit and are making it difficult to attract them now. In conversations with Motherboard, former and current ride-hail drivers pointed to a major factor for not returning: how horrible it is to work for Uber and Lyft. For some workers, this realization came long before the pandemic reared its head, and for others, the crisis hammered it home. Motherboard has changed some drivers’ names or granted them anonymity out of their fear of retaliation.
“If I kept driving, something was going to break,” said Maurice, a former driver in New York who spent four years working for Uber and Lyft before the pandemic. “I already go nights without eating or sleeping. My back hurt, my joints hurt, my neck hurt, I felt like a donkey. Like a slave driving all the time.”

“I’ve been driving for six years. Uber has taken at least 10,000 pounds in commission from me each year! They take 20 percent of my earnings, then offer me 200 pounds,” Ramana Prai, a London-based Uber driver, told Motherboard. “I don’t understand how they can take 60,000 pounds from me, then offer nothing when I’m in need. How can I provide for my partner and two kids with this? My employer has let me down.”
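The driver’s figures are internally consistent: if Uber keeps 20 percent of fares and that works out to roughly 10,000 pounds a year, his gross fares would be about 50,000 pounds annually. A quick back-of-the-envelope check (using the driver’s own round numbers, which are estimates rather than audited data):

```python
# Sanity check of the London driver's stated figures:
# a 20% commission amounting to ~10,000 GBP/year, over six years.
COMMISSION_RATE = 0.20
commission_per_year = 10_000  # GBP, the driver's estimate
years = 6

# Gross fares implied by the commission amount.
gross_fares_per_year = commission_per_year / COMMISSION_RATE

# Total commission over the six years he has driven.
total_commission = commission_per_year * years

# What remains before fuel, insurance, and vehicle costs.
driver_take_per_year = gross_fares_per_year - commission_per_year

print(f"Implied gross fares: {gross_fares_per_year:,.0f} GBP/year")
print(f"Commission over {years} years: {total_commission:,.0f} GBP")
print(f"Driver's pre-expense earnings: {driver_take_per_year:,.0f} GBP/year")
```

The 60,000-pound total the driver cites is exactly six years of that 10,000-pound annual commission, and note that the remaining 40,000 pounds a year is before the driver pays for fuel, insurance, and the vehicle itself.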

“I woke up every day asking how long I could keep it up, I just didn’t feel like a person,” Yona, who worked for Lyft in California for the past six years until the pandemic, told Motherboard. “I got two kids, my mother, my sister, I couldn’t see them. And I was doing all this for them but I could barely support them, barely supported myself.”

“I was making even less than my sister and I was probably less safe too,” Yona’s sister, Destiny, told Motherboard. “She got out back in the spring, I hopped on and was coming back negative some days. I tried UberEats and DoorDash to see if that was any better, but stopped after a friend was almost robbed on a delivery. Okay, so the options are get covid or get robbed, then guess what: I’m doing none of them.”

Motherboard argues that the degrading working conditions, as well as the poor pay, “are structurally necessary for ride-hail companies. They were necessary to attract and retain customers with artificially low prices, to burn through drivers at high rates that frustrate labor organizing, and bolster the narrative of gig work as temporary, transient, and convenient. It’s no wonder, then, that drivers aren’t coming back.”

Chinese Facial Recognition Company With Silicon Valley Funding

DeepGlint, also known as Beijing Geling Shentong Information Technology Co., Ltd., is a facial recognition company with deep ties to Chinese police surveillance, and funding from US-based Sequoia Capital. Today the Commerce Department added it to its Entity List, which restricts US companies from doing business with listed firms without a special license. Sequoia did not immediately respond to a request for comment. DeepGlint co-founded a facial recognition lab in 2018 with Chinese authorities in Urumqi, the capital of Xinjiang, according to the South China Morning Post. It has also gained international bragging rights through the US National Institute of Standards and Technology’s (NIST) Face Recognition Vendor Test. DeepGlint claimed top accuracy in the test as of January 2021, giving it a potent marketing tool in the security and surveillance industry. While DeepGlint has been accepted for a public offering on Shanghai’s STAR stock exchange, the firm hasn’t seen the commercial success of other AI startups in the country, explained Jeffrey Ding in his ChinAI newsletter last month. Since the firm is so heavily invested in government work, it has to follow slow government procurement cycles and is unlikely to score huge infrastructure projects, Ding writes.

Volkswagen, BMW Fined $1 Billion For Colluding To Make Dirtier Cars

Volkswagen, Audi, Porsche, BMW, and Mercedes-Benz parent company Daimler spent years illegally colluding to slow the deployment of cleaner emissions technology, says the European Union, which is dishing out fines as a result. From a report:
The EU’s executive branch hit the Volkswagen Group (which owns Audi and Porsche) and BMW with a collective $1 billion fine on Thursday for their role in the scheme. Volkswagen Group must pay $595 million, while BMW will pay $442 million. Daimler, however, evaded an $861 million fine of its own because the automaker revealed the collusion to the regulators.

The scheme described by EU authorities is separate from the Volkswagen Group’s massive Dieselgate scandal, in which the company installed software on its diesel vehicles that helped fool environmental regulators into believing they were compliant when, in reality, they were polluting far more than the legal limit. Dieselgate ultimately led to nearly $40 billion in fines, buybacks, and legal fees for the Volkswagen Group. Daimler also installed software on some of its diesel vehicles to cheat emissions tests and has paid billions of dollars in fines. BMW was careful to point out Thursday that, unlike the other companies it was caught colluding with, it had not cheated emissions testing.

Despite the Hype, iPhone Security No Match For NSO Spyware

The text delivered last month to the iPhone 11 of Claude Mangin, the French wife of a political activist jailed in Morocco, made no sound. It produced no image. It offered no warning of any kind as an iMessage from somebody she didn’t know delivered malware directly onto her phone — and past Apple’s security systems. Once inside, the spyware, produced by Israel’s NSO Group and licensed to one of its government clients, went to work, according to a forensic examination of her device by Amnesty International’s Security Lab. It found that between October and June, her phone was hacked multiple times with Pegasus, NSO’s signature surveillance tool, during a time when she was in France. The examination was unable to reveal what was collected. But the potential was vast: Pegasus can collect emails, call records, social media posts, user passwords, contact lists, pictures, videos, sound recordings and browsing histories, according to security researchers and NSO marketing materials.

The spyware can activate cameras or microphones to capture fresh images and recordings. It can listen to calls and voice mails. It can collect location logs of where a user has been and also determine where that user is now, along with data indicating whether the person is stationary or, if moving, in which direction. And all of this can happen without a user even touching her phone or knowing she has received a mysterious message from an unfamiliar person — in Mangin’s case, a Gmail user going by the name “linakeller2203.” These kinds of “zero-click” attacks, as they are called within the surveillance industry, can work on even the newest generations of iPhones, after years of effort in which Apple attempted to close the door against unauthorized surveillance — and built marketing campaigns on assertions that it offers better privacy and security than rivals.

[…] Researchers have documented iPhone infections with Pegasus dozens of times in recent years, challenging Apple’s reputation for superior security when compared with its leading rivals, which run Android operating systems by Google. The months-long investigation by The Post and its partners found more evidence to fuel that debate. Amnesty’s Security Lab examined 67 smartphones whose numbers were on the Forbidden Stories list and found forensic evidence of Pegasus infections or attempts at infections in 37. Of those, 34 were iPhones — 23 that showed signs of a successful Pegasus infection and 11 that showed signs of attempted infection.
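The forensic counts in that paragraph reduce to a few simple proportions. A small sketch of the arithmetic (all numbers taken directly from the reported Amnesty Security Lab results above):

```python
# Amnesty Security Lab forensic results, as reported by The Post and partners.
examined = 67        # phones from the Forbidden Stories list that were examined
with_evidence = 37   # phones showing Pegasus infections or attempted infections
iphones = 34         # of those 37, how many were iPhones
iphone_success = 23  # iPhones with signs of a successful infection
iphone_attempt = 11  # iPhones with signs of an attempted infection

# The iPhone subtotals should add up to the iPhone total.
assert iphone_success + iphone_attempt == iphones

share_with_evidence = with_evidence / examined  # ~55% of examined phones
iphone_share = iphones / with_evidence          # ~92% of affected devices

print(f"{share_with_evidence:.0%} of examined phones showed Pegasus activity")
print(f"{iphone_share:.0%} of affected devices were iPhones")
```

The iPhone-heavy split partly reflects who was examined, not a measured infection rate across platforms, so these proportions say more about the targets in this sample than about iPhone versus Android security in general.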
