YouTube’s Recommender AI Still a Horror Show, Finds Major Crowdsourced Study

For years YouTube’s video-recommending algorithm has stood accused of fuelling a grab bag of societal ills by feeding users an AI-amplified diet of hate speech, political extremism and/or conspiracy junk/disinformation, all in the service of keeping billions of eyeballs stuck to its ad inventory. And while YouTube’s tech giant parent Google has, sporadically, responded to negative publicity flaring up around the algorithm’s antisocial recommendations, announcing a few policy tweaks here or limiting/purging the odd hateful account there, it’s not clear how far the platform’s penchant for promoting horribly unhealthy clickbait has actually been reined in. The suspicion remains: nowhere near far enough.

New research published today by Mozilla backs that notion up, suggesting YouTube’s AI continues to puff up piles of “bottom-feeding”/low-grade/divisive/disinforming content (stuff that tries to grab eyeballs by triggering people’s sense of outrage, sowing division/polarization or spreading baseless/harmful disinformation), which in turn implies that YouTube’s problem with recommending terrible stuff is indeed systemic: a side effect of the platform’s rapacious appetite to harvest views to serve ads. That YouTube’s AI is still, per Mozilla’s study, behaving so badly also suggests Google has been pretty successful at fuzzing criticism with superficial claims of reform. The mainstay of its deflective success here is likely keeping the recommender engine’s algorithmic workings (and associated data) hidden from public view and external oversight, behind the convenient shield of “commercial secrecy.” But regulation that could help crack open proprietary AI black boxes is now on the cards, at least in Europe.