TikTok, teens and mental health: quantifying algorithmic exposure to harmful content.

Gannon, John and O'Hanlon, Freya and Conroy, Niall (2025) TikTok, teens and mental health: quantifying algorithmic exposure to harmful content. Irish Journal of Medical Science, Early online, https://doi.org/10.1007/s11845-025-04244-4.

External website: https://link.springer.com/article/10.1007/s11845-0...


Introduction: Social media is now a dominant part of adolescent life. The OECD's 2025 How’s Life for Children in the Digital Age? report found that, among 15-year-olds, 65% of girls and 55% of boys used social media for more than three hours per weekday [1]. While these platforms offer entertainment and peer connection, there is growing concern about children’s exposure to harmful content. Research links problematic social media use to negative mental health outcomes, including anxiety, depression and self-harm [2, 3]. Platforms such as TikTok have introduced safety policies intended to restrict minors’ exposure to inappropriate content [4]. This study assessed whether TikTok’s moderation systems effectively protect underage users from recommended content that is potentially harmful to their mental health.

Methods: Four new dummy TikTok accounts were created in September 2024. Two researchers each operated two accounts, using separate devices and Irish IP addresses. Each account was configured to represent a 13- or 15-year-old user, using corresponding birth dates at account creation, with one male and one female profile in each age group.

To standardise viewing behaviour, each account scrolled the “For You” feed for three hours, pausing only to view content falling within one of four predefined themes: conflict, mental health, drugs and alcohol, and diet and body image. These themes were chosen on the basis of prior literature and recognised risk domains in adolescent mental health, with evidence linking them to anxiety, self-harm, substance use and disordered eating [2, 3]. They also reflect clinical risk frameworks in adolescent psychiatry addressing self-injury and maladaptive coping behaviours [5, 6]. This thematic focus enabled structured sampling of high-risk content surfaced by TikTok’s recommendation algorithm.

Because TikTok’s algorithm adjusts the pace and sequence of recommended videos in response to user engagement, exposure was standardised by using passive scrolling only. The researchers briefly paused to watch relevant clips but did not like, search, follow, or comment. All sessions were screen-recorded and manually reviewed. Each video was assessed against TikTok’s Community Guidelines, focusing on “Youth Safety and Well-being,” “Safety and Civility,” and “Mental and Behavioural Health” [4]. Two investigators independently reviewed the videos from two dummy accounts each, with a 10% subsample cross-checked for consistency. As this was a descriptive, exploratory study, no formal coding framework or blinding was applied; each video was classified with a binary breach/no-breach judgement. No artificial intelligence tools were used in the analysis; all data extraction and coding were completed manually.
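All coding in the study was done by hand. Purely as an illustration of how the binary breach/no-breach tallying and the 10% consistency cross-check could be expressed, a minimal Python sketch follows; the example data, variable names and the use of simple percent agreement are assumptions made for illustration and are not part of the published method.

    # Illustrative only: hand-coded judgements for one account, True = breach of the Community Guidelines
    account_codes = [False, True, False, False, True, False]
    breach_rate = sum(account_codes) / len(account_codes)   # proportion of videos judged as breaches

    # Hypothetical 10% subsample independently re-coded by the second investigator
    subsample_rater_a = [True, False, False]
    subsample_rater_b = [True, False, True]
    agreement = sum(a == b for a, b in zip(subsample_rater_a, subsample_rater_b)) / len(subsample_rater_a)

    print(f"Breach rate: {breach_rate:.0%}; cross-check agreement: {agreement:.0%}")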

Results: A total of twelve hours of screen recordings were reviewed, with each profile exposed to three hours of continuous scrolling to ensure equal viewing duration. Across the four accounts, 3,023 videos appeared in the “For You” feed; this denominator includes all videos surfaced by the algorithm, regardless of whether they were viewed in full. Of these, 128 videos (4.2%) were identified as violating TikTok’s published safety policies.
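As a worked check on the headline figures, the short calculation below reproduces the reported breach proportion from the counts stated above; the videos-per-hour figure additionally assumes that the 3,023 videos were spread roughly evenly across the twelve hours of viewing.

    # Reported totals from the study
    total_videos = 3023       # videos surfaced across the four "For You" feeds
    breaching_videos = 128    # videos judged to violate TikTok's published safety policies
    viewing_hours = 12        # four accounts x three hours each

    print(f"Breach rate: {breaching_videos / total_videos:.1%}")        # ~4.2%
    print(f"Videos per hour: {total_videos / viewing_hours:.0f}")       # ~252, assuming an even spread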

The most frequent types of harmful content included depictions or glamourisation of suicide and self-harm, disordered eating and extreme dieting, promotion of alcohol consumption, weapons and gun violence, and extremist or hateful ideologies. These videos appeared without any search being conducted by the users; the platform’s algorithm promoted potentially harmful content based solely on passive viewing time within themed content areas (Table 1).
