Protecting Us, Harming Them: Content Moderation's Mental Health Crisis
Every time you scroll through social media, browse YouTube, or participate in online communities, you're benefiting from the invisible labor of content moderators. These digital frontline workers shield us from the worst of what humanity puts online - from graphic violence and exploitation to harassment and hate speech.
There are certainly moderation difficulties for smaller communities: Facebook groups, GroupMes, Listservs, Subreddits, and Discord servers all require their own considerations. These spaces often rely on volunteer moderators who manage content according to specific community guidelines and shared values. But the challenges multiply exponentially when we look at platform-wide moderation.
The Scale of Digital Sanitation
Content moderation is often imagined as a purely technological process, with AI and algorithms doing the heavy lifting. The reality is far more human. While AI helps filter and flag obvious violations, human moderators still review millions of posts, images, and videos daily. Meta and TikTok together employ over 80,000 people for content moderation. Yet measured against Facebook's more than 3 billion users alone, each moderator is responsible for the content of more than 75,000 users.
So if the task is too demanding, why not delegate it all to AI?
Contextual Conundrum
One of the most challenging aspects of content moderation is the nuanced role of context. A photo of a wound might be graphic violence requiring removal, or it could be important documentation of police brutality. A racial slur might be hate speech, or it could be someone reclaiming language within their community. Art, satire, news reporting, and cultural expression all blur these lines further.
And because of scale, moderators must make these complex judgment calls in seconds, often across cultural and linguistic barriers they may not fully understand. A moderator from the Philippines might miss crucial context in a meme from the USA; a younger moderator might not recognize historical references that change a post's meaning entirely. These aren't just decisions about removing "bad" content; they shape online discourse, affect marginalized voices, and influence what stories can be told.
Because this kind of nuanced understanding is required, moderators are often assigned a single, bleak topic. They spend their entire day in front of a screen consuming that content so end users don't have to. Child pornography, suicide, executions, terrorism... one topic, non-stop.
Almost all content moderation is outsourced to countries such as the Philippines, where moderators are paid an average of $308 per month for this highly traumatic work.
The Human Cost
The mental health impact on moderators is severe and well-documented:
Post-Traumatic Stress Disorder (PTSD) from repeated exposure to violent and disturbing content
Secondary trauma from witnessing abuse and exploitation
Depression and anxiety from constant exposure to hate speech and harassment
Emotional numbing and desensitization as a coping mechanism
Difficulty maintaining relationships due to the psychological burden of their work
These aren't merely job-related stressors that resolve with a career change or vacation. The trauma embeds itself in moderators' psyches, reshaping how they view the world and interact with technology. Importantly, the trauma likely persists long after someone leaves their position as a moderator.
Regulation Without Resources
Addressing misinformation, protecting children, and identifying AI-generated content are all hot topics for internet regulation, but none will be accomplished without increased investment in human moderation. Current moderation efforts are understaffed and undersupported. Companies trying to protect billions of users with AI and underpaid contractors are setting themselves up for failure - and setting us all up for a more dangerous internet.
The challenge is compounded by unstable regulatory frameworks. Tech policy often shifts dramatically with political changes, creating uncertainty around long-term solutions. For instance, Net Neutrality - which treats internet access as an essential service - has been repeatedly enacted and repealed. It was restored earlier this year after being dismantled in 2017, but its future remains uncertain after the 2024 US election.
Similarly, President Biden's recent executive order on AI regulation seeks to address "threats the technology could pose to civil rights, privacy, and national security, while promoting innovation, competition, and the use of AI for public services." But it's likely to be repealed with the new administration.
Either way, the risk isn't just insufficient regulation - it's regulation that ignores the human element entirely. Policies that mandate content removal without addressing moderator working conditions, or that rely too heavily on algorithmic solutions, may actually make the internet less safe while continuing to traumatize an essential workforce.
What Users Can Do
There's no easy answer to the push and pull between protecting users and protecting moderators. We want a safer internet - one where children are shielded from exploitation, where hate speech and harassment don't flourish, where graphic violence doesn't appear unexpectedly in our feeds. But achieving this safety requires real people to witness and process humanity's darkest moments.
What we can do, as users of these platforms, is acknowledge our active role in this ecosystem:
Report harmful content appropriately - take time to provide context rather than flagging everything we disagree with
Be mindful of what we share - consider whether a graphic video really needs to be reposted, even with good intentions
Support better working conditions for moderators - through advocacy, policy support, and holding platforms accountable
Advocate for resources and services related to trauma treatment - not just for moderators, but for all digital workers
Remember the human - behind content removal are thousands of real people making difficult decisions, often under immense pressure and time constraints
We might not have solutions to the larger systemic issues yet, but we can work to make the internet more humane - both for those who use it and those who keep it safe.
Wow! That was a pretty dark article. Good news: there's a whole world out there full of incredible people. It's been a historical through-line: humans looking out for each other. Hope you find time off the internet to touch grass (I have a grass allergy, but I hear good things).
Best,
Will Ard
LMSW, MBA