The largest-ever crowdsourced probe into YouTube’s controversial recommendation algorithm found that the automated software continues to recommend videos viewers considered “disturbing and hateful,” Mozilla said, including ones that violate YouTube’s own content policies.
“Research volunteers encountered a range of regrettable videos, reporting everything from COVID fear-mongering to political misinformation to wildly inappropriate ‘children’s’ cartoons,” the Mozilla Foundation wrote in a post.
The study involved nearly 38,000 YouTube users across 91 countries who volunteered data to Mozilla about the “regrettable experiences” they had on the world’s most popular video platform. Overall, participants flagged 3,362 regrettable videos between July 2020 and May 2021, with the most frequent “regret” categories being misinformation, violent or graphic content, hate speech, and spam/scams.
Mozilla said that almost 200 videos YouTube’s algorithm recommended to volunteers have since been removed from the platform, including several that YouTube determined violated its own policies.