Resources

Facebook Knows It Encourages Division. Top Executives Nixed Solutions.

A Facebook team had a blunt message for senior executives. The company’s algorithms weren’t bringing people together. They were driving people apart. “Our algorithms exploit the human brain’s attraction to divisiveness,” read a slide from a 2018 presentation. “If left unchecked,” it warned, Facebook would feed users “more and more divisive content in an effort to gain user attention & increase time on the platform.” That presentation went to the heart of a question dogging Facebook almost since its founding: Does its platform aggravate polarization and tribal behavior? The answer it found, in some cases, was yes.

Facebook had kicked off an internal effort to understand how its platform shaped user behavior and how the company might address potential harms. Chief Executive Mark Zuckerberg had in public and private expressed concern about “sensationalism and polarization.” But in the end, Facebook’s interest was fleeting. Mr. Zuckerberg and other senior executives largely shelved the basic research, according to previously unreported internal documents and people familiar with the effort, and weakened or blocked efforts to apply its conclusions to Facebook products. Facebook policy chief Joel Kaplan, who played a central role in vetting proposed changes, argued at the time that efforts to make conversations on the platform more civil were “paternalistic,” said people familiar with his comments.

How Google Ruined the Internet

Remember that story about the Polish dentist who pulled out all of her ex-boyfriend’s teeth in an act of revenge? It was complete and utter bullshit. 100% fabricated. No one knows who wrote it. Nevertheless, it was picked up by Fox News, the Los Angeles Times and many other publishers. That was eight years ago, yet when I search now for “dentist pulled ex boyfriends teeth,” I get a featured snippet that quotes ABC News’ original, uncorrected story. Who invented the fidget spinner? Ask Google Assistant and it will tell you that Catherine Hettinger did: a conclusion based on poorly reported stories from The Guardian, The New York Times and other major news outlets. Bloomberg’s Joshua Brustein clearly demonstrated that Ms. Hettinger did not invent the low-friction toy. Nevertheless, ask Google Assistant “who really invented the fidget spinner?” and you’ll get the same answer: Catherine Hettinger.

In 1998, the velocity of information was slow and the cost of publishing it was high (even on the web). Google leveraged those realities to make the best information retrieval system in the world. Today, information is free, plentiful and fast moving; somewhat by design, Google has become a card catalog that is constantly being reordered by an angry, misinformed mob. The web was supposed to forcefully challenge our opinions and push back, like a personal trainer who doesn’t care how tired you say you are. Instead, Google has become like the pampering robots in WALL-E, giving us what we want at the expense of what we need. But, it’s not our bodies that are turning into mush: It’s our minds.

Ad Firms Are Exploring Deepfaked Commercials

“With the pandemic having shut down production, companies are asking ad agencies to create commercials made up of digitally altered footage,” reports the New York Times, citing a State Farm commercial that aired during an ESPN documentary starring the anchor of “SportsCenter,” Kenny Mayne:

The producers made the commercial by layering video of Mr. Mayne’s 60-year-old mouth onto footage of his 38-year-old face. To many viewers, the stunt provided a welcome moment of levity in depressing times. Others were made uneasy by the smoothness of the patch, describing it as a type of deepfake. “We tried to make the joke clear enough so that we weren’t tricking anyone,” said Carrie Brzezinski-Hsu, the head of ESPN CreativeWorks, which created the commercial with the ad agencies Optimum Sports and Translation.

Ms. Brzezinski-Hsu said manipulated footage was likely to appear in future ESPN ads. And executives at several major advertising agencies said they had discussed making similar commercials with their clients in recent weeks. “We’re so restricted in how we can generate content,” said Kerry Hill, the production director for the ad agency FCB in North America. “Anything that can be computer generated is something we’re going to explore.”

Husani Oakley, the chief technology officer of the ad firm Deutsch, said digitally altered ads should somehow clue viewers into the fact that what they are seeing is not completely real. “The technology is here, and it’s only going to get better and better, and we have to get used to it,” he added. “We’re exploring ways to have fun with it.”

As Coronavirus Surveillance Escalates, Personal Privacy Plummets

Tracking entire populations to combat the pandemic now could open the doors to more invasive forms of government snooping later.

In South Korea, government agencies are harnessing surveillance-camera footage, smartphone location data and credit card purchase records to help trace the recent movements of coronavirus patients and establish virus transmission chains. In Lombardy, Italy, the authorities are analyzing location data transmitted by citizens’ mobile phones to determine how many people are obeying a government lockdown order and the typical distances they move every day. About 40 percent are moving around “too much,” an official recently said. In Israel, the country’s internal security agency is poised to start using a cache of mobile phone location data — originally intended for counterterrorism operations — to try to pinpoint citizens who may have been exposed to the virus.

As countries around the world race to contain the pandemic, many are deploying digital surveillance tools as a means to exert social control, even turning security agency technologies on their own civilians. Health and law enforcement authorities are understandably eager to employ every tool at their disposal to try to hinder the virus — even as the surveillance efforts threaten to alter the precarious balance between public safety and personal privacy on a global scale. Yet ratcheting up surveillance to combat the pandemic now could permanently open the doors to more invasive forms of snooping later. It is a lesson Americans learned after the terrorist attacks of Sept. 11, 2001, civil liberties experts say. Nearly two decades later, law enforcement agencies have access to higher-powered surveillance systems, like fine-grained location tracking and facial recognition — technologies that may be repurposed to further political agendas like anti-immigration policies. Civil liberties experts warn that the public has little recourse to challenge these digital exercises of state power.

In Fast-Moving Pandemic, Sources of Falsehoods Spread by Text, Email, WhatsApp and TikTok

Misleading text messages claiming that President Trump was going to announce a national quarantine buzzed into cellphones across the country over the weekend, underscoring how rapidly false claims are spreading — and how often it is happening beyond the familiar misinformation vehicles of Facebook and Twitter. The false texts spread so widely that on Sunday night the White House’s National Security Council, fearing the texts were an attempt to spook the stock market as it opened Monday, decided to directly debunk the misleading claims in a Twitter post: “Text message rumors of a national #quarantine are FAKE. There is no national lockdown.” But by then the messages already had spread widely, as had similar ones both in the United States and Europe in recent days. Text messages, encrypted communication apps such as WhatsApp and some social media platforms have carried similarly alarming misinformation, much of it with the apparent goal of spurring people to overrun stores to buy basic items ahead of a new wave of government restrictions.

The one claiming that Trump was going to impose a national quarantine included the advice: “Stock up on whatever you guys need to make sure you have a two week supply of everything. Please forward to your network.” In fact, authorities have warned against aggressive buying that could disrupt supply chains and fuel panic. Trump addressed the misleading text messages at an afternoon news conference Monday, saying, “It could be that you have some foreign groups that are playing games.” On the possibility of a national quarantine, Trump said: “We haven’t determined to do that at all. … Hopefully we won’t have to.”

Surveillance Company Banjo Used a Secret Company and Fake Apps To Scrape Social Media

Banjo, an artificial intelligence firm that works with police, used a shadow company to create an array of Android and iOS apps that looked innocuous but were specifically designed to secretly scrape social media. The news signifies an abuse of data by a government contractor, with Banjo going far beyond what companies that scrape social networks usually do. Banjo created a secret company named Pink Unicorn Labs, according to three former Banjo employees, with two of them adding that the company developed the apps. This was done to avoid detection by social networks, two of the former employees said.

Three of the apps created by Pink Unicorn Labs were called “One Direction Fan App,” “EDM Fan App,” and “Formula Racing App.” Motherboard found these three apps on archive sites and downloaded and analyzed them, as did an independent expert. The apps — which appear to have been originally compiled in 2015 and were on the Play Store until 2016 according to Google — outwardly had no connection to Banjo, but an analysis of their code indicates connections to the company. This aspect of Banjo’s operation has some similarities with the Cambridge Analytica scandal, with multiple sources comparing the two incidents. […] The company has not publicly explained how it specifically scrapes social media apps. Motherboard found the apps developed by Pink Unicorn Labs included code mentioning signing into Facebook, Twitter, Instagram, Russian social media app VK, FourSquare, Google Plus, and Chinese social network Sina Weibo.
The apps could have scraped social media “by sending the saved login token to a server for Banjo to use later, or by using the app itself to scrape information,” reports Motherboard, noting that it’s not entirely clear which method Banjo used. “Motherboard found that the apps when opened made web requests to the domain ‘pulapi.com,’ likely referring to Pink Unicorn Labs, but the site that would provide a response to the app is currently down.”

Last weekend, Motherboard reported that Banjo signed a $20.7 million contract with Utah in 2019 that granted the company access to the state’s traffic, CCTV, and public safety cameras. “Banjo promises to combine that input with a range of other data such as satellites and social media posts to create a system that it claims alerts law enforcement of crimes or events in real-time.”

In Coronavirus Fight, China Gives Citizens a Color Code, With Red Flags

A new system uses software to dictate quarantines — and appears to send personal data to police, in a troubling precedent for automated social control. As China encourages people to return to work despite the coronavirus outbreak, it has begun a bold mass experiment in using data to regulate citizens’ lives — by requiring them to use software on their smartphones that dictates whether they should be quarantined or allowed into subways, malls and other public spaces. But a New York Times analysis of the software’s code found that the system does more than decide in real time whether someone poses a contagion risk. It also appears to share information with the police, setting a template for new forms of automated social control that could persist long after the epidemic subsides.

The Alipay Health Code, as China’s official news media has called the system, was first introduced in the eastern city of Hangzhou — a project by the local government with the help of Ant Financial, a sister company of the e-commerce giant Alibaba. People in China sign up through Ant’s popular wallet app, Alipay, and are assigned a color code — green, yellow or red — that indicates their health status. The system is already in use in 200 cities and is being rolled out nationwide, Ant says. Neither the company nor Chinese officials have explained in detail how the system classifies people. That has caused fear and bewilderment among those who are ordered to isolate themselves and have no idea why.

Airbnb ‘Surveillance Bugs To Make Sure Guests Behave’

“So this is creepy,” writes a Forbes cybersecurity reporter, saying Airbnb “has put aside the stories of hosts secretly spying on guests” to promote a new line of devices Forbes calls “surveillance bugs to make sure guests behave.”

“… we’re hurtling toward a world where almost everything we own is monitoring us in some way, and I’m not sure that’s actually going to be a safer world.”

First Use of Deepfakes In an Indian Election Campaign

The Delhi Bharatiya Janata Party (BJP) has partnered with political communications firm The Ideaz Factory to create “positive campaigns” using deepfakes to reach different linguistic voter bases, reports Nilesh Christopher via Motherboard. It marks the debut of deepfakes in election campaigns in India.

On February 7, a day ahead of the Legislative Assembly elections in Delhi, two videos of the Bharatiya Janata Party (BJP) President Manoj Tiwari criticizing the incumbent Delhi government of Arvind Kejriwal went viral on WhatsApp. While one video had Tiwari speak in English, the other was him speaking in the Hindi dialect of Haryanvi. “[Kejriwal] cheated us on the basis of promises. But now Delhi has a chance to change it all. Press the lotus button on February 8 to form the Modi-led government,” he said. One may think that this 44-second monologue might be a part of standard political outreach, but there is one thing that’s not standard: These videos were not real.

“Deepfake technology has helped us scale campaign efforts like never before,” Neelkant Bakshi, co-incharge of social media and IT for BJP Delhi, tells VICE. “The Haryanvi videos let us convincingly approach the target audience even if the candidate didn’t speak the language of the voter.” Tiwari’s fabricated video was used widely to dissuade the large Haryanvi-speaking migrant worker population in Delhi from voting for the rival political party. According to Bakshi, these deepfakes were distributed across 5,800 WhatsApp groups in the Delhi and NCR region, reaching approximately 15 million people.

‘These People Are Evil’: Drivers Speak Out Against Uber’s New Coronavirus Sick Leave Fund

Countless Uber drivers are now being pushed to the front lines of the coronavirus pandemic, transporting humans, food, supplies, and maybe soon Covid-19 testing kits as shelter-in-place rules cause demand for delivery services to spike. Yet despite their exposure to infection, gig workers lack paid sick leave, health benefits, or unemployment insurance because of their status as independent contractors.

Earlier this month, Uber, Lyft, and Amazon drivers protested the exclusion of gig workers from Silicon Valley’s monumental heave to protect itself from the coronavirus. As technology employees go remote, contractors are burdened with extra demands and no additional support. Uber, Lyft, and Amazon eventually agreed to compensate gig workers through ad hoc funds, but OneZero spoke to Uber drivers who say this is hardly a safety net. “I think I’m going to fall through the cracks,” said Kimberly James, a 46-year-old driver for Uber Eats in Atlanta, Georgia. After a series of devastating hardships, including losing her house in a fire, James has come to rely on food delivery platforms like Uber Eats and DoorDash to survive.

In 2012, James was diagnosed with an autoimmune disorder, and her weekly income of $400 means she cannot afford to get sick. Health officials have warned that the coronavirus is especially dangerous for immunocompromised people, so today James has no choice but to isolate indoors. One-time payouts are based on a person’s average daily earnings for the past six months. Someone making $28.57 per day is eligible for a payment of $400, the equivalent of 14 days of average pay, while someone earning $121.42 per day can receive $1,700, Uber says on its website. To qualify, drivers must have completed one trip in the 30 days before March 6, 2020, when the global program was first announced.
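The figures Uber cites imply a simple formula: the one-time payout is 14 times a driver’s average daily earnings over the prior six months, with eligibility gated on having completed a trip in the 30 days before March 6, 2020. A minimal sketch of that arithmetic (the function name and eligibility check are our reading of the article, not Uber’s actual code):

```python
from datetime import date

ANNOUNCEMENT = date(2020, 3, 6)  # when the global program was announced

def covid_payout(avg_daily_earnings: float, last_trip: date) -> float:
    """One-time payout as described: 14 days of average daily pay,
    for drivers with at least one trip in the 30 days before the
    program was announced. Returns 0.0 if the driver is ineligible."""
    days_since_trip = (ANNOUNCEMENT - last_trip).days
    if not (0 <= days_since_trip <= 30):
        return 0.0
    return round(14 * avg_daily_earnings, 2)

# The two figures from Uber's website match 14 days of average pay:
print(covid_payout(28.57, date(2020, 3, 1)))   # 399.98 (published as $400)
print(covid_payout(121.42, date(2020, 3, 1)))  # 1699.88 (published as $1,700)
```

Note that 14 × $28.57 ≈ $399.98 and 14 × $121.42 ≈ $1,699.88, so the published $400 and $1,700 figures appear to be these amounts rounded.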

Why Don’t We Just Ban Targeted Advertising?

Google and Facebook, including their subsidiaries like Instagram and YouTube, make about 83 percent and 99 percent of their respective revenue from one thing: selling ads. It’s the same story with Twitter and other free sites and apps. More to the point, these companies are in the business of what’s called behavioral advertising, which allows companies to aim their marketing based on everything from users’ sexual orientations to their moods and menstrual cycles, as revealed by everything they do on their devices and every place they take them. It follows that most of the unsavory things the platforms do—boost inflammatory content, track our whereabouts, enable election manipulation, crush the news industry—stem from the goal of boosting ad revenues. Instead of trying to clean up all these messes one by one, the logic goes, why not just remove the underlying financial incentive? Targeting ads based on individual user data didn’t even really exist until the past decade. (Indeed, Google still makes many billions of dollars from ads tied to search terms, which aren’t user-specific.) What if companies simply weren’t allowed to do it anymore?

Let’s pretend it really happened. Imagine Congress passed a law tomorrow morning that banned companies from doing any ad microtargeting whatsoever. Close your eyes and picture what life would be like if the leading business model of the internet were banished from existence. How would things be different?

Many of the changes would be subtle. You could buy a pair of shoes on Amazon without Reebok ads following you for months. Perhaps you’d see some listings that you didn’t see before, for jobs or real estate. That’s especially likely if you’re African-American, or a woman, or a member of another disadvantaged group. You might come to understand that microtargeting had supercharged advertisers’ ability to discriminate, even when they weren’t trying to.

Gig Workers for Target’s Delivery App Hate Their Algorithmically-Determined Pay

In 2017 Target bought a same-day home-delivery company called Shipt for $550 million. Shipt now services half of Target’s stores, reports Motherboard, and employs more than 100,000 gig workers.

Unfortunately, they’re working for a company that “has a track record of censoring and retaliating against workers for asking basic questions about their working conditions or expressing dissent,” reports Motherboard. For example, an hour after tweeting about how there was now much more competition for assignments, one Seattle gig worker found their account suddenly “deactivated” — the equivalent of being fired — and also received an email saying they were no longer “eligible to reapply”.

“They stamp out resistance by flooding the market with new workers…” complained one Shipt worker, “and they’re actively monitoring all the social media groups.”
On its official national Facebook group, known as the Shipt Shopper Lounge, which has more than 100,000 members, Shipt moderators selected by the company frequently censor and remove posts, turn off comments sections, and ban workers who speak out about their working conditions, according to screenshots, interviews, and other documentation provided to Motherboard. The same is true on local Facebook groups, which Shipt also monitors closely, according to workers. Motherboard spoke to seven current Shipt workers, each of whom described a culture of retaliation, fear, and censorship online…

Because Shipt classifies its workers as contractors, not employees, workers pay for all of their expenses — including gas, wear and tear on their cars, and accidents — out of pocket. They say the tips on large orders from Target, sometimes with hundreds of items, can be meager. Workers say Shipt customers often live in gated and upscale communities and that the app encourages workers to tack on gifts like thank you cards, hot cocoa, flowers, and balloons onto orders (paid for out of their own pocket) and to offer to walk customers’ dogs and take out their trash, as a courtesy. Shipt calls this kind of service “Bringing the Magic,” which can improve workers’ ratings from customers that factor into the algorithm that determines who gets offered the most lucrative orders…

Unfortunately, that new algorithm (which began rolling out last year) is opaque to the workers affected by it — though Gizmodo reported pay appears to be at least 28% lower. And Motherboard heard even higher estimates:
“Our best estimate is that payouts are now 30 percent less, and up to 50 percent on orders,” one Shipt worker in Kalamazoo with two years under her belt, who wished to remain anonymous for fear of retaliation, told Motherboard. “I fluctuate between extreme anger and despair. It’s been three weeks since this has been implemented, and one of my good friends told me that she’s down the equivalent of a car payment.”

Another Shipt worker in Palm Springs, California provided Motherboard with receipts for a 181-item order that included six Snapple cases, five La Croix cases, and 12 packs of soda. They had to wheel three shopping carts out of a Ralph’s grocery store and deliver them — and earned $12.68 for the job. The customer did not tip. (Under the older, more transparent pay model, they would have earned $44.19.) “That’s a real slap in the face,” they told Motherboard.

Installing Air Filters in Classrooms Has Surprisingly Large Educational Benefits

An emergency situation that turned out to be mostly a false alarm led a lot of schools in Los Angeles to install air filters, and something strange happened: Test scores went up. By a lot. And the gains were sustained in the subsequent year rather than fading away.

That’s what NYU’s Michael Gilraine finds in a new working paper titled “Air Filters, Pollution, and Student Achievement” that looks at the surprising consequences of the Aliso Canyon gas leak in 2015. The impact of the air filters is strikingly large given what a simple change we’re talking about. The school district didn’t reengineer the school buildings or make dramatic education reforms; they just installed $700 commercially available filters that you could plug into any room in the country. But it’s consistent with a growing literature on the cognitive impact of air pollution, which finds that everyone from chess players to baseball umpires to workers in a pear-packing factory suffers deteriorations in performance when the air is more polluted.

If Gilraine’s result holds up to further scrutiny, he will have identified what’s probably the single most cost-effective education policy intervention — one that should have particularly large benefits for low-income children. And while it’s too hasty to draw sweeping conclusions on the basis of one study, it would be incredibly cheap to have a few cities experiment with installing air filters in some of their schools to get more data and draw clearer conclusions about exactly how much of a difference this makes.

Welcome To Walmart: The Robot Will Grab Your Groceries

Walmart is testing back-of-store automated systems that can collect 800 products an hour, 10 times as many as a store worker. In the backroom of a Walmart store in Salem, N.H., is a floor-to-ceiling robotic system that the country’s largest retailer hopes will help it sell more groceries online. Workers stand on platforms in front of screens assembling online orders of milk, cereal and toilet paper from the hulking automated system. Wheeled robots carrying small baskets move along metal tracks to collect those items. They are bagged for pickup later by shoppers or delivery to homes. Walmart is one of several grocers including Albertsons and Kroger that are using automation to improve efficiency in a fast-growing but costly business that comes with a range of logistical challenges.

The backroom robots could help Walmart cut labor costs and fill orders faster and more accurately. They could also address another problem: unclogging aisles that these days can get crowded with clerks picking products for online orders. A store worker can collect around 80 products from store shelves an hour, estimated John Lert, founder and chief executive of Alert Innovation, the startup that has worked with Walmart to design the system dubbed Alphabot. It is designed to collect 800 products an hour per workstation, operated by a single individual, Mr. Lert said. Workers stock the 24-foot-high machine each day with the products most often ordered online, including refrigerated and frozen foods. Fresh produce is still picked by hand in store aisles.

‘I Oversaw America’s Nuclear Power Industry. Now I Think It Should Be Banned.’

Friday the Washington Post published an essay by Gregory Jaczko, who served on America’s Nuclear Regulatory Commission from 2005 to 2009 and was its chairman from 2009 to 2012. He says he’d believed the risks of nuclear power were worth the reduction it produced in greenhouse emissions — until Japan’s 2011 nuclear meltdown at the Fukushima power plant.

“Despite working in the industry for more than a decade, I now believe that nuclear power’s benefits are no longer enough to risk the welfare of people living near these plants…”

The current and potential costs — personal and economic — are just too high…. The technology and the safety needs are just too complex and demanding to translate into a facility that is simple to design and build. No matter your views on nuclear power in principle, no one can afford to pay this much for two electricity plants. New nuclear is simply off the table in the United States….

Fewer than 10 of Japan’s 50 reactors have resumed operations, yet the country’s carbon emissions have dropped below their levels before the accident. How? Japan has made significant gains in energy efficiency and solar power…. What about the United States? Nuclear accounts for about 19 percent of U.S. electricity production and most of our carbon-free electricity. Could reactors be phased out here without increasing carbon emissions? If it were completely up to the free market, the answer would be yes, because nuclear is more expensive than almost any other source of electricity today. Renewables such as solar, wind and hydroelectric power generate electricity for less than the nuclear plants under construction in Georgia, and in most places, they produce cheaper electricity than existing nuclear plants that have paid off all their construction costs…

This tech is no longer a viable strategy for dealing with climate change, nor is it a competitive source of power. It is hazardous, expensive and unreliable, and abandoning it wouldn’t bring on climate doom. The real choice now is between saving the planet or saving the dying nuclear industry. I vote for the planet.

Skype Audio Graded by Workers in China With ‘No Security Measures’

A Microsoft program to transcribe and vet audio from Skype and Cortana, its voice assistant, ran for years with “no security measures,” according to a former contractor who says he reviewed thousands of potentially sensitive recordings on his personal laptop from his home in Beijing over the two years he worked for the company.

The recordings, both deliberate and accidentally invoked activations of the voice assistant, as well as some Skype phone calls, were simply accessed by Microsoft workers through a web app running in Google’s Chrome browser, on their personal laptops, over the Chinese internet, according to the contractor. Workers had no cybersecurity help to protect the data from criminal or state interference, and were even instructed to do the work using new Microsoft accounts all with the same password, for ease of management, the former contractor said. Employee vetting was practically nonexistent, he added.

“There were no security measures, I don’t even remember them doing proper KYC [know your customer] on me. I think they just took my Chinese bank account details,” he told the Guardian. While the grader began by working in an office, he said the contractor that employed him “after a while allowed me to do it from home in Beijing. I judged British English (because I’m British), so I listened to people who had their Microsoft device set to British English, and I had access to all of this from my home laptop with a simple username and password login.” Both username and password were emailed to new contractors in plaintext, he said, with the former following a simple schema and the latter being the same for every employee who joined in any given year.

Earth Is Hotter Than at Any Time Since Steam Engine Was Invented

The last five years on Earth have been hotter than at any time since the industrial revolution kicked off almost two centuries ago.

That’s the conclusion of Europe’s Copernicus Climate Change Service, which published data on Wednesday showing that global average temperatures since 2015 were some 1.2 degrees Celsius (2.2 degrees Fahrenheit) higher than when steam engines began powering industry. Last year was the second warmest on record after 2016.

2019 was Europe’s warmest year, marginally higher than temperatures in 2014, 2015 and 2018. Global average temperatures in 2019 were 0.6 degrees Celsius warmer than the 1981 to 2010 average. Atmospheric carbon dioxide concentration increased by about 2.3 parts per million in 2019, to the second-highest level on record.

Ring Fired Employees for Watching Customer Videos

Amazon-owned home security camera company Ring has fired employees for improperly accessing Ring users’ video data, Motherboard reported Wednesday, citing a letter the company wrote to Senators. The news highlights a risk across many different tech companies: employees may abuse access granted as part of their jobs to look at customer data or information. In Ring’s case, though, this data can be particularly sensitive, as customers often put the cameras inside their homes. “We are aware of incidents discussed below where employees violated our policies,” the letter from Ring, dated January 6th, reads. “Over the last four years, Ring has received four complaints or inquiries regarding a team member’s access to Ring video data,” it continues. Ring explains that although each of these people was authorized to view video data, their attempted access went beyond what they needed to access for their job.

Companies Are Using AI-Generated People To Appear More “Diverse”

AI startups are selling images of computer-generated faces that look like the real thing, offering companies a chance to create imaginary models and “increase diversity” in their ads without needing human beings. One firm is offering to sell diverse photos for marketing brochures and has already signed up clients, including a dating app that intends to use the images in a chatbot. Another company says it’s moving past AI-generated headshots and into the generation of full, fake human bodies as early as this month. The AI software used to create such faces is freely available and improving rapidly, allowing small start-ups to easily create fakes that are so convincing they can fool the human eye. The systems train on massive databases of actual faces, then attempt to replicate their features in new designs. But AI experts worry that the fakes will empower a new generation of scammers, bots and spies, who could use the photos to build imaginary online personas, mask bias in hiring and damage efforts to bring diversity to industries. The fact that such software now has a business model could also fuel a greater erosion of trust across an Internet already under assault by disinformation campaigns, “deepfake” videos and other deceptive techniques.

Xiaomi Camera Feed is Showing Random Homes on a Google Nest Hub, Including Still Images of Sleeping People

So-called “smart” security cameras have had some pretty dumb security problems recently, but a recent report regarding a Xiaomi Mijia camera linked to a Google Home is especially disturbing. One Xiaomi Mijia camera owner is getting still images from other random people’s homes when trying to stream content from his camera to a Google Nest Hub. The images include stills of people sleeping (even an infant in a cradle) inside their own homes. This issue was first reported by user /r/Dio-V on Reddit and affects his Xiaomi Mijia 1080p Smart IP Security Camera, which can be linked to a Google account for use with Google/Nest devices through Xiaomi’s Mi Home app/service. It isn’t clear when Dio-V’s feed first began showing these still images from random homes or how long the camera was connected to his account before this started happening. He does state that both the Nest Hub and the camera were purchased new. The camera was noted as running firmware version 3.5.1_00.66.