Instagram’s Recommendation Algorithms Are Promoting Pedophile Networks

Accounts found by the researchers are advertised using blatant and explicit hashtags like #pedowhore, #preteensex, and #pedobait. They offer “menus” of content for users to buy or commission, including videos and imagery of self-harm and bestiality. When researchers set up a test account and viewed content shared by these networks, they were immediately recommended more accounts to follow. As the WSJ reports: “Following just a handful of these recommendations was enough to flood a test account with content that sexualizes children.”

In addition to problems with Instagram’s recommendation algorithms, the investigation also found that the site’s moderation practices frequently ignored or rejected reports of child abuse material. The WSJ recounts incidents where users reported posts and accounts containing suspect content (including one account that advertised underage abuse material with the caption “this teen is ready for you pervs”), only for the content to be cleared by Instagram’s review team or for the reporting user to be told in an automated message […]. The report also looked at other platforms but found them less conducive to the growth of such networks. According to the WSJ, the Stanford investigators found “128 accounts offering to sell child-sex-abuse material on Twitter, less than a third the number they found on Instagram,” despite Twitter having far fewer users, and that such content “does not appear to proliferate” on TikTok. The report noted that Snapchat did not actively promote such networks, as it’s mainly used for direct messaging.

In response to the report, Meta said it was setting up an internal task force to address the issues raised by the investigation. “Child exploitation is a horrific crime,” the company said. “We’re continuously investigating ways to actively defend against this behavior.” Meta noted that in January alone it took down 490,000 accounts that violated its child safety policies, and that over the last two years it has removed 27 pedophile networks. The company, which also owns Facebook and WhatsApp, said it has also blocked thousands of hashtags associated with the sexualization of children and restricted these terms from user searches.