America’s Funniest Home Surveillance Network Isn’t Funny

Amazon is normalizing neighborhood panopticons by turning its doorbell videos into a TV show. Orwell wouldn’t be laughing.

When smartphones first came on the scene, their built-in cameras were limited to personal use. Then social media sites like Facebook and Instagram created a beast that millions wanted to feed, and photos became a public spectacle. The same phenomenon is happening to doorbell cameras. Initially marketed to make customers feel safer in their homes, their footage is now being uploaded for entertainment. On TikTok, the hashtag Ringdoorbell has more than 2.7 billion views.

Amazon.com Inc., which owns market-dominating Ring, has seen and grabbed a lucrative opportunity, and is contributing to the gradual erosion of our privacy in the process.

On Monday, the company premiered Ring Nation, a television show syndicated across more than 70 American cities. Hosted by the comedian Wanda Sykes and produced by Metro-Goldwyn-Mayer, which Amazon finished buying in March, the 20-minute program features videos captured on smartphones and Amazon’s Ring doorbell cameras, which the company sells for about $105.


Negative-Prompt AI-Generated Images of a Woman Veer Into Gore and Horror

AI image generators like DALL-E and Midjourney have become an especially buzzy topic lately, and it’s easy to see why. Using machine learning models trained on billions of images, the systems tap into the allure of the black box, creating works that feel both alien and strangely familiar. Naturally, this makes fertile ground for all sorts of AI urban legends, since nobody can really explain how the complex neural networks are ultimately deciding on the images they create. The latest example comes from an AI artist named Supercomposite, who posted disturbing and grotesque generated images of a woman who seems to appear in response to certain queries.

The woman, whom the artist calls “Loab,” was first discovered through a technique called “negative prompt weights,” in which a user tries to get the AI system to generate the opposite of whatever they type into the prompt. To put it simply, different terms in a prompt can be “weighted” to determine how strongly they influence the results. By assigning a prompt a negative weight, you essentially tell the AI system, “Generate what you think is the opposite of this prompt.” In this case, using a negative weight on the word “Brando” generated the image of a logo featuring a city skyline and the words “DIGITA PNTICS.” When Supercomposite applied the negative-weights technique to the words in that logo, Loab appeared. “Since Loab was discovered using negative prompt weights, her gestalt is made from a collection of traits that are equally far away from something,” Supercomposite wrote in a thread on Twitter. “But her combined traits are still a cohesive concept for the AI, and almost all descendent images contain a recognizable Loab.”
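The arithmetic behind this is simpler than it sounds. In guidance-based image generators, the model's prompt-conditioned prediction is blended with an unconditional one, and the blend weight can be flipped negative to push generation away from the prompt rather than toward it. The toy sketch below illustrates that sign flip with plain NumPy; the function name and the 1-D "predictions" are hypothetical stand-ins, since real systems apply this to a denoiser's noise predictions at every sampling step, not to a single vector.

```python
import numpy as np

def guided_prediction(uncond_pred, cond_pred, weight):
    """Blend unconditional and prompt-conditioned predictions.

    weight > 0 steers the output toward the prompt;
    weight < 0 (a "negative prompt weight") steers it away,
    toward whatever the model treats as the prompt's opposite.
    """
    return uncond_pred + weight * (cond_pred - uncond_pred)

# Toy 1-D example: the prompt pulls the sample toward +1.0.
uncond = np.array([0.0])  # prediction with no prompt
cond = np.array([1.0])    # prediction conditioned on the prompt

toward = guided_prediction(uncond, cond, weight=1.0)   # follows the prompt
away = guided_prediction(uncond, cond, weight=-1.0)    # mirrors it instead

print(toward[0], away[0])  # 1.0 -1.0
```

With `weight=-1.0` the result lands the same distance from the unconditional baseline but on the opposite side, which is the sense in which Loab's traits are "equally far away from something": the model is pushed into a region it considers maximally unlike the prompt.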

The images quickly went viral on social media, prompting all kinds of speculation about what could be causing the unsettling phenomenon. Most disturbingly, Supercomposite claims that images derived from the original image of Loab almost universally veer into the realm of horror, graphic violence, and gore. No matter how many variations were made, the images all seemed to feature the same terrifying woman. “Through some kind of emergent statistical accident, something about this woman is adjacent to extremely gory and macabre imagery in the distribution of the AI’s world knowledge,” Supercomposite wrote.
