WHEN THE JAN. 6 committee wanted to test how easy it was for TikTok users to wander down a far-right rabbit hole, its staff tried an experiment. They created Alice, a fictional 41-year-old from Acton, Massachusetts, gave her a TikTok account, and tracked what the social media app showed her.
To their surprise, it took only 75 minutes of scrolling — with no interaction or cues about her interests — for the platform to serve Alice videos featuring Nazi content, following a detour through clips on the Amber Heard-Johnny Depp defamation suit, Donald Trump, and other right-wing culture war flashpoints.
Staff described the exercise as “just one of the Committee’s experiments that further evidenced the power of TikTok’s recommendation algorithm in creating rabbit holes toward potentially harmful content.”
The experiment is detailed in a draft summary of investigative findings prepared by the committee’s social media team and obtained by Rolling Stone. The company mostly escaped notice in the public battles over the role of social media and moderation in combating extremism, including the kind that led to the Capitol attack. But the unpublished summary sheds new light on how TikTok has grappled with the challenge of “how to moderate misleading content without attracting accusations of censorship,” in particular when “the mis- and disinformation benefitted the political right,” according to staffers.
TikTok did not respond to a request for comment from Rolling Stone.