Data center server room powering social media platforms

How Social Media Algorithms Push Harmful Content — And How To Take Back Control

By Daniel Carter
March 2, 2026

Every time you open a social media app, an algorithm is deciding what you see. These recommendation systems are designed to maximize engagement — and research increasingly shows that what keeps people engaged is often what upsets, outrages, or disturbs them most. Understanding how these algorithms work is the first step toward taking back control of your digital life.

How Social Media Algorithms Work

At their core, social media algorithms track every interaction you make — likes, shares, comments, how long you pause on a post, what you search for, and even what you scroll past quickly. This data builds a profile of your interests and emotional triggers, which the algorithm then uses to serve content designed to keep you on the platform as long as possible.
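To make the idea concrete, here is a toy sketch of engagement-based ranking. This is not any platform's actual code; the signal names, weights, and the notion of a per-topic "affinity" multiplier are invented purely for illustration of the general mechanism described above.

```python
# Toy sketch of engagement-based feed ranking (illustrative only).
# Assumption: each interaction signal carries a weight, and posts are
# sorted by raw engagement times the user's learned topic affinity.

from dataclasses import dataclass, field

# Hypothetical weights: stronger reactions count for more.
SIGNAL_WEIGHTS = {"like": 1.0, "comment": 3.0, "share": 5.0, "dwell_seconds": 0.1}

@dataclass
class Post:
    topic: str
    signals: dict = field(default_factory=dict)

    def engagement_score(self) -> float:
        # Sum each recorded interaction, weighted by how "engaging" it is.
        return sum(SIGNAL_WEIGHTS.get(k, 0.0) * v for k, v in self.signals.items())

def rank_feed(posts, user_topic_affinity):
    # Rank by engagement boosted by how strongly this user has
    # reacted to the post's topic before (default multiplier 1.0).
    return sorted(
        posts,
        key=lambda p: p.engagement_score() * user_topic_affinity.get(p.topic, 1.0),
        reverse=True,
    )
```

Note that nothing in this sketch asks whether the content is good for the user, only whether it is likely to be interacted with, which is the core problem the rest of this article addresses.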

The Engagement Trap

Research from multiple universities has demonstrated that content triggering strong emotional reactions — anger, fear, outrage, sadness — generates significantly more engagement than neutral or positive content. As a result, algorithms naturally amplify the most emotionally charged material, including:

  • Graphic violence and disturbing imagery
  • Misinformation that provokes outrage
  • Divisive political content
  • Content that triggers anxiety or despair

The Facebook Live incident involving Ronnie McNutt in 2020 demonstrated how algorithmic amplification can spread graphic content far beyond its original audience, reaching millions of people — including children — before platforms intervened.

Doomscrolling: The Algorithmic Rabbit Hole

Doomscrolling — the compulsive consumption of negative news and content — is partly a human tendency and partly an algorithmic creation. When you engage with negative content, the algorithm interprets this as a preference and serves more of it, creating a feedback loop that can significantly worsen anxiety and depression.
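The feedback loop can be seen in a few lines of simulation. Again, this is a made-up model, not a real recommender: each engagement simply multiplies the topic's affinity, so a handful of taps on negative stories is enough to shift the whole feed.

```python
# Toy feedback-loop simulation (illustrative only): engaging with a topic
# raises its affinity, so the next feed serves more of that topic.

def update_affinity(affinity, engaged_topic, learning_rate=0.5):
    # Return a new affinity table with the engaged topic boosted.
    affinity = dict(affinity)
    affinity[engaged_topic] = affinity.get(engaged_topic, 1.0) * (1 + learning_rate)
    return affinity

def negative_share(affinity):
    # Fraction of the feed the negative topic would claim.
    return affinity.get("negative_news", 0.0) / sum(affinity.values())

affinity = {"negative_news": 1.0, "hobbies": 1.0, "friends": 1.0}
for _ in range(5):  # the user taps a negative story five sessions in a row
    affinity = update_affinity(affinity, "negative_news")
# negative content's share of the feed grows from one third to a large majority
```

The same mechanism runs in reverse, which is why the "retrain your algorithm" advice later in this article works: deliberately engaging with other topics shifts the loop back.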

Facebook social media platform interface
The Facebook platform interface, one of the major social media networks whose algorithms influence content visibility.
Image: Facebook via Wikimedia Commons | Public domain

How To Take Back Control

1. Audit Your Feed

Spend 10 minutes scrolling through each of your social media feeds and honestly assess how the content makes you feel. If more than half of what you see leaves you feeling anxious, angry, or drained, it is time for a reset.

2. Train Your Algorithm

Algorithms learn from your behavior, which means you can retrain them. Actively engage with positive, informative, or uplifting content. Use the “not interested” or “see less” option on content that disturbs you. Unfollow or mute accounts that consistently post harmful content. Follow accounts focused on your hobbies, education, or personal growth.

3. Use Platform Safety Tools

Most major platforms offer content filtering tools that many users never discover. Instagram and TikTok have sensitive content controls. YouTube allows you to remove specific channels from recommendations. Facebook lets you snooze or unfollow without unfriending. Twitter/X has muted words and content preferences. For a deeper guide on these tools, see our comprehensive digital safety guide.

4. Set Time Boundaries

TikTok viral content representing algorithm-driven platforms
Viral TikTok content, illustrating how algorithm-driven platforms can amplify both helpful and harmful material.
Image: Gabbybratcher via Wikimedia Commons | CC BY-SA 4.0

  • Use built-in screen time tools to set daily limits per app
  • Remove social media apps from your home screen
  • Designate phone-free times, especially the first and last hour of your day
  • Turn off non-essential notifications

5. Choose Intentional Consumption

Replace passive scrolling with intentional media consumption. Subscribe to newsletters from trusted sources. Listen to podcasts on topics you care about. Bookmark specific websites rather than relying on algorithm-driven feeds. Set specific times for news consumption rather than constant checking.

Protecting Young People

Children and teenagers are especially vulnerable to algorithmic harm because their brains are still developing emotional regulation skills. Parents can help by taking a few concrete steps:

  • Have open conversations about what children encounter online
  • Use parental controls and content filters
  • Follow and engage with children on the platforms they use
  • Model healthy digital habits
  • Create technology-free family time

The Bigger Picture

While individual action matters, systemic change is also necessary. Advocacy for algorithmic transparency, stronger content moderation, and platform accountability continues to grow worldwide. By understanding how these systems work, we become better equipped to protect ourselves and push for the changes that will protect everyone.

Written by

Daniel Carter

Daniel Carter is a veteran affairs correspondent and mental health advocate based in Memphis, Tennessee. A former Army medic, he now dedicates his work to raising awareness about PTSD, veteran suicide prevention, and the impact of social media on mental health. His reporting has been featured in regional and national publications covering military and veteran issues.
