
The Facebook Live Incident: A Complete Timeline of August 31, 2020

By Daniel Carter
March 2, 2026

On the evening of August 31, 2020, a series of events unfolded that would expose catastrophic failures in social media content moderation, spark a national conversation about platform accountability, and forever change how we think about livestreaming, digital responsibility, and veteran mental health support. This article reconstructs the timeline of that evening based on investigative reporting from the BBC, Rolling Stone, and other credible sources.

Note: This article does not describe graphic content. It focuses on the timeline of events and systemic failures to support suicide prevention advocacy and platform reform.

Background: The Days Before

In the weeks leading up to August 31, 2020, Ronnie McNutt — a 33-year-old U.S. Army Reserve veteran who had served in Iraq — was facing a convergence of life stressors that compounded his existing struggles with PTSD and depression:

  • Relationship breakdown: Ronnie had recently broken up with his girlfriend, removing a key source of emotional support
  • Employment uncertainty: Reports varied on his job status at the Toyota assembly plant in Blue Springs, Mississippi — though Rolling Stone reported he had not actually lost his job, contrary to other media claims
  • COVID-19 isolation: The pandemic had disrupted his church community at Celebration Church Tupelo and reduced social connections
  • PTSD and depression: Ronnie had long struggled with mental health conditions stemming from his service in Iraq in 2007-2008

Ronnie was known for using livestreaming platforms regularly. As his best friend Joshua Steen told Rolling Stone: “He often used a livestreaming platform as his form of therapy. He would get on whatever service it was and just ramble. He liked to talk; he liked to argue with people about theology, geek and pop culture news.”

The Evening: A Detailed Timeline

Approximately 8:00 PM CDT — The Livestream Begins

Ronnie started a Facebook Live broadcast from his apartment in New Albany, Mississippi. Joshua Steen, who had met Ronnie during a community theater production of Footloose and with whom he co-hosted a podcast, noticed the stream.

Within seconds, Steen recognized something was different. According to his account to Rolling Stone, Ronnie “appeared to be heavily inebriated and despondent.” The BBC confirmed he had been drinking that evening.

8:00 PM – 10:00 PM — Growing Concern

During the stream, hundreds of comments poured in from people urging Ronnie to get help. Steen and several of Ronnie’s other friends recognized the escalating danger. At one point, a rifle Ronnie was holding appeared to misfire — a clear violation of Facebook’s community guidelines regarding weapons and self-harm content.

Steen and other friends contacted the police to request a welfare check. Officers eventually arrived outside Ronnie’s apartment.

10:00 PM CDT — The Report to Facebook

At 10:00 PM Mississippi time — approximately two hours into the livestream — Joshua Steen formally reported the broadcast to Facebook for showing someone injuring themselves. According to screengrabs provided to Rolling Stone, this report was timestamped and documented.

A serviceman accesses social media on a smartphone, illustrating the intersection of military life and digital platforms.
Image: Harland Quarrington, UK Ministry of Defence | Licensed under OGL v1.0 via Wikimedia Commons

This report should have triggered immediate review under Facebook’s policies regarding self-harm and suicide content. Facebook had previously intervened during other livestreamed incidents, and had announced machine learning technology that could detect weapons during livestreams following the 2019 Christchurch mosque shooting.

Approximately 10:30 PM CDT — Ronnie’s Death

Approximately 30 minutes after Steen’s report — and with no response from Facebook — Ronnie McNutt died by suicide during the livestream. More than 200 people were watching at the time, including Steen and several close friends.

11:51 PM CDT — Facebook’s Response

Nearly an hour and a half after Ronnie’s death — and nearly two hours after Steen’s report — Facebook responded. Their message stated that the video did not violate the platform’s community guidelines.

This response came despite Facebook’s own community standards explicitly prohibiting content that “encourages suicide or self-injury.”

The Following Day — The Viral Spread Begins

Because Ronnie’s Facebook account and the video were public, someone downloaded a clip and uploaded it elsewhere — attaching a fabricated backstory. As Steen told the BBC: “Whoever took the first clip and uploaded it created a back story about Ronnie. None of it was true. But it helped fuel the fire to help it spread.”

The clip rapidly spread to TikTok, where the platform’s algorithmic recommendation system — which surfaces content based on engagement — pushed it into millions of users’ “For You” feeds. Many users were exposed involuntarily:

  • Some uploads were disguised within innocent-looking videos that suddenly cut to the graphic content
  • Steen’s wife encountered it embedded in a video that opened with puppies
  • Parents worldwide reported their children being exposed — a person in Australia reported their nine-year-old had seen it on TikTok

The Aftermath: Platform Responses

Facebook’s Statement

Facebook eventually issued a statement: “We removed the original video from Facebook last month on the day it was streamed and have used automation technology to remove copies and uploads since that time. We are reviewing how we could have taken down the live-stream faster. Our thoughts remain with Ronnie’s family and friends during this difficult time.”

However, this statement was contradicted by Steen’s documented experience: he continued to find copies of the video and received responses from Facebook stating that it “doesn’t go against one of our Community Standards.”

TikTok’s Response

TikTok stated: “Our systems have been automatically detecting and flagging these clips which violate our policies against content that displays, praises, glorifies, or promotes suicide. We are removing content and banning accounts that repeatedly try to upload clips.”

The Bot Campaign

A community suicide prevention awareness sign, emphasizing the importance of prevention resources in the aftermath of the incident.
Image: Kenneth Allen via Wikimedia Commons | Licensed under CC BY-SA 2.0 via Wikimedia Commons

Steen reported that bots appeared to be systematically spreading the content: “I watched it in real time. We’d report an account and then it created another account. We saw the exact same accounts post the exact same message over and over.”

Disinformation expert Claire Wardle of First Draft News suggested this could serve to destabilize populations through graphic content or to test platform content removal capabilities — the same pattern observed after the Christchurch massacre.

What Should Have Happened

The timeline reveals multiple points where intervention could have changed the outcome:

  • Automated detection: Facebook’s machine learning systems should have flagged the appearance of a weapon during the livestream — technology the company claimed to have after Christchurch
  • Timely report response: Steen’s 10:00 PM report should have been reviewed within minutes, not hours
  • Stream termination: Facebook had the capability to end the livestream remotely — as Steen argued, this might have diverted Ronnie’s attention and changed the outcome
  • Post-mortem removal: The video should have been removed immediately after the incident, preventing downloads and re-shares
  • Cross-platform coordination: Social media companies should have coordinated to prevent the spread across platforms

The Legacy: #ReformForRonnie

In the wake of these failures, Ronnie’s loved ones created the #ReformForRonnie hashtag campaign, demanding accountability from social media platforms. The campaign calls for:

  • Faster response times for reports of self-harm and violence during livestreams
  • Better AI detection of weapons and violence in real-time video
  • Immediate human review of livestream reports involving potential self-harm
  • Cross-platform coordination to prevent viral spread of graphic content
  • Greater transparency about content moderation response times

Lessons for the Future

The events of August 31, 2020, demonstrated that social media platforms’ content moderation capabilities were — and in many ways remain — inadequate for the challenges of real-time livestreaming. The technology to intervene existed; the systems to deploy it quickly enough did not.

Every minute of delay between a report and a response represents a window where harm can occur and content can spread. For Ronnie McNutt, those minutes had irreversible consequences. For the millions who were exposed to the content afterward, the trauma continues.

The question is not whether platforms can do better — it’s whether they will.


This content is for awareness and education. If you or someone you know is in crisis, please call or text 988 for the Suicide & Crisis Lifeline. Veterans can press 1 for specialized support.

Written by

Daniel Carter

Daniel Carter is a veteran affairs correspondent and mental health advocate based in Memphis, Tennessee. A former Army medic, he now dedicates his work to raising awareness about PTSD, veteran suicide prevention, and the impact of social media on mental health. His reporting has been featured in regional and national publications covering military and veteran issues.
