Resources

Negative-prompt AI-Generated Images of Women Generate Gore and Horror

AI image generators like DALL-E and Midjourney have become an especially buzzy topic lately, and it’s easy to see why. Using machine learning models trained on billions of images, the systems tap into the allure of the black box, creating works that feel both alien and strangely familiar. Naturally, this makes for fertile ground for all sorts of AI urban legends, since nobody can really explain how the complex neural networks ultimately decide on the images they create. The latest example comes from an AI artist named Supercomposite, who posted disturbing and grotesque generated images of a woman who seems to appear in response to certain queries.

The woman, whom the artist calls “Loab,” was first discovered as a result of a technique called “negative prompt weights,” in which a user tries to get the AI system to generate the opposite of whatever they type into the prompt. To put it simply, different terms in a prompt can be “weighted” to determine how strongly they influence the generated image. But by assigning the prompt a negative weight, you essentially tell the AI system, “Generate what you think is the opposite of this prompt.” In this case, using a negative-weight prompt on the word “Brando” generated the image of a logo featuring a city skyline and the words “DIGITA PNTICS.” When Supercomposite used the negative weights technique on the words in the logo, Loab appeared. “Since Loab was discovered using negative prompt weights, her gestalt is made from a collection of traits that are equally far away from something,” Supercomposite wrote in a thread on Twitter. “But her combined traits are still a cohesive concept for the AI, and almost all descendent images contain a recognizable Loab.”
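The “negative weight” is easiest to picture at the level of classifier-free guidance, the steering step most diffusion-based image generators use. Supercomposite has not published the exact tool or pipeline involved, so the sketch below is only a generic illustration under that assumption: a single guidance weight mixes the model’s unconditional and prompt-conditioned noise predictions, and flipping its sign pushes the sampler away from the prompt rather than toward it.

    # Illustrative sketch only: the exact pipeline behind "negative prompt
    # weights" is not public, so the function and values here are assumptions.
    import numpy as np

    def guided_noise(eps_uncond, eps_cond, weight):
        # Classifier-free guidance: mix the model's unconditional noise
        # prediction with its prompt-conditioned one. A positive weight
        # steers the sample toward the prompt; a negative weight steers
        # it away, toward whatever the model treats as the "opposite."
        return eps_uncond + weight * (eps_cond - eps_uncond)

    # Toy stand-ins for one denoising step's predictions.
    eps_uncond = np.zeros(4)
    eps_brando = np.ones(4)  # prediction conditioned on the prompt "Brando"

    toward_prompt = guided_noise(eps_uncond, eps_brando, weight=7.5)
    away_from_prompt = guided_noise(eps_uncond, eps_brando, weight=-7.5)
    print(toward_prompt, away_from_prompt)  # the two point in opposite directions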

The images quickly went viral on social media, leading to all kinds of speculation about what could be causing the unsettling phenomenon. Most disturbingly, Supercomposite claims that generated images derived from the original image of Loab almost universally veer into the realm of horror, graphic violence, and gore. But no matter how many variations are made, the images all seem to feature the same terrifying woman. “Through some kind of emergent statistical accident, something about this woman is adjacent to extremely gory and macabre imagery in the distribution of the AI’s world knowledge,” Supercomposite wrote.


Amazon scraps secret AI recruiting tool that showed bias against women

An example of how “learning” machines inseparably take in the culture of their architects, à la Lewis Mumford (a toy sketch of the mechanism follows the excerpt):

“Amazon’s machine-learning specialists uncovered a big problem: their new recruiting engine did not like women. The team had been building computer programs since 2014 to review job applicants’ resumes with the aim of mechanizing the search for top talent, five people familiar with the effort told Reuters.

Automation has been key to Amazon’s e-commerce dominance, be it inside warehouses or driving pricing decisions. The company’s experimental hiring tool used artificial intelligence to give job candidates scores ranging from one to five stars — much like shoppers rate products on Amazon, some of the people said.

“Everyone wanted this holy grail,” one of the people said. “They literally wanted it to be an engine where I’m going to give you 100 resumes, it will spit out the top five, and we’ll hire those.”

But by 2015, the company realized its new system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way. That is because Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.

[…]

Amazon edited the programs to make them neutral to these particular terms. But that was no guarantee that the machines would not devise other ways of sorting candidates that could prove discriminatory, the people said. The Seattle company ultimately disbanded the team by the start of last year because executives lost hope for the project, according to the people, who spoke on condition of anonymity.”
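As a toy illustration of the failure mode the report describes, and not Amazon’s actual system, the sketch below trains a simple text classifier on a small synthetic set of “resumes” with historically skewed labels. The data, tokens, and model choice are assumptions made purely for demonstration: because the word “womens” co-occurs only with rejections in the training set, the model learns a negative weight for it.

    # Minimal sketch of proxy bias (synthetic data, not Amazon's system).
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    # Tiny synthetic "historical hiring" set: resumes mentioning "womens"
    # (e.g. "captain of the womens chess club") are mostly labelled as
    # rejections, mirroring a decade of male-dominated hires.
    resumes = [
        "software engineer python aws",           # hired
        "backend developer java distributed",     # hired
        "software engineer womens chess club",    # rejected
        "data engineer womens coding society",    # rejected
        "machine learning python research",       # hired
        "developer womens hackathon winner",      # rejected
    ]
    hired = [1, 1, 0, 0, 1, 0]

    vec = CountVectorizer()
    X = vec.fit_transform(resumes)
    model = LogisticRegression().fit(X, hired)

    # The learned weight on "womens" is negative: the model has encoded the
    # historical skew as if it were a signal about candidate quality.
    weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
    print("weight for 'womens':", round(weights["womens"], 3))

Stripping out individual tokens removes the visible symptom but not the underlying correlation, which a model can rediscover through other proxy features; that is why, as noted above, neutralizing particular terms was no guarantee against other discriminatory sortings.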
