The Hidden Human Costs Behind Today’s AI

The most striking insights about artificial intelligence rarely come from glossy tech demos or corporate press releases. They come from the people who quietly sit behind laptops labeling images, checking chatbot answers and deciding whether a sentence counts as hate speech. Many of them have a front-row seat to how large models behave before the public does—and they’re increasingly uneasy about what they see.

The Human Cost

Some AI raters say the job reshaped their relationship with technology entirely. One Kenyan worker employed by Sama to train OpenAI’s ChatGPT described recurring nightmares after reading graphic content: “That was torture. You will read a number of statements like that all through the week. By the time it gets to Friday, you are disturbed from thinking through that picture.” These workers were paid between $1.32 and $2 per hour—while OpenAI agreed to pay Sama $12.50 per hour per worker.

The psychological toll is measurable and severe. In December 2024, more than 140 former Facebook content moderators in Kenya were diagnosed by Dr. Ian Kanyanya, head of mental health services at Kenyatta National Hospital, with PTSD, anxiety, and depression. One medical record describes a moderator experiencing “cold sweats from frequent nightmares” as well as “frequent breakdowns, vivid flashbacks and paranoia.” Martha Dark of Foxglove, which is supporting the legal case, stated: “In Kenya, it traumatized 100% of hundreds of former moderators tested for PTSD.”

A Meta worker in the Philippines employed via Accenture, speaking anonymously, described being traumatized by “a recent uptick in videos of injured and dying children in Gaza, and the grisly aftermath of the Air India crash.” Another content moderator for TikTok said: “Before I would sleep seven hours. Now I sleep four, maximum.” According to a 2025 report, 81% of content moderators believe their employer does not adequately support their mental health.

The Scale of the Problem

The numbers are staggering. An estimated 100,000 people work as commercial content moderators globally. Facebook alone employed 15,000 contract moderators by 2019, while Reddit relies on some 60,000 volunteer moderators. Meta’s content operations in Kenya employed 260 workers through Sama before mass redundancies in 2023.

The work happens at the intersection of extreme pressure and minimal support. One moderator described vague instructions and unrealistic deadlines: workers were expected to review 150-250 passages of explicit material per nine-hour shift, roughly one every two to four minutes, under accuracy requirements that could cost them their contracts. “Every piece of content that gets reported on Facebook needs to be evaluated by a moderator,” journalist Casey Newton reported. “And if a moderator makes the wrong call more than a handful of times during the week, their job could be at risk.”

Measurable Degradation

The systems these workers help build are failing in measurable ways. According to NewsGuard’s August 2025 audit, the ten leading AI chatbots stopped declining to answer: refusal rates fell from 31% in August 2024 to 0% in August 2025, while the rate at which they repeated false information nearly doubled, from 18% to 35%.

The confident tone remains, but the guardrails appear thinner. Instead of citing data cutoffs or declining sensitive prompts, models now “pull from a polluted online information ecosystem—sometimes deliberately seeded by vast networks of malign actors, including Russian disinformation operations.” Six out of ten major chatbots repeated a fabricated Russian propaganda claim about Moldova’s parliament leader as verified fact.

Historical Parallels: This Is Not New

The content moderation crisis didn’t begin with AI training. The groundwork was laid years ago with social media platforms.

2017-2019: The Facebook Reckoning

After high-profile incidents, including a murder video posted to Facebook by a Cleveland gunman and a father in Thailand livestreaming the killing of his daughter, Mark Zuckerberg announced that Facebook would hire thousands of moderators. But the conditions were grim from the start.

Chris Gray, who worked as a Facebook moderator in Dublin starting July 2017, described his first day: “It’s one of these very trendy, California-style open offices. Bright yellow emojis painted on the wall. It’s all very pretty, very airy, seems very cool.” But Sean Burke, his colleague, had a different experience: “On the first day, one of my first tickets was watching someone being beaten to death with a plank of wood with nails on it. Within my second week on the job, it was my first time ever seeing child porn.”

By 2018, moderators were filing lawsuits. In September 2018, Selena Scola sued Facebook, alleging the job gave her PTSD. The lawsuit eventually became a class action representing over 14,000 content moderators. In May 2020, Facebook settled for $52 million, agreeing to provide mental health resources including monthly group therapy and weekly one-on-one coaching.

The YouTube Crisis

YouTube faced similar litigation. In 2020, a content moderator sued using the pseudonym “Jane Doe,” describing how viewing content caused “significant psychological trauma including anxiety, depression and symptoms associated with PTSD.” The lawsuit alleged YouTube prohibited moderators from “discussing their work or seeking outside social support, including therapists, psychiatrists, or psychologists,” thereby “impeding the development of resiliency.”

The Pattern Continues

Microsoft faced PTSD lawsuits in 2017. TikTok was sued in 2021 by a moderator claiming psychological trauma. Each time, companies promised improvements. Each time, the outsourcing model—placing the work at arm’s length through contractors in countries with weaker labor protections—remained intact.

The Outsourcing Strategy

The global geography of content moderation reveals deliberate cost-minimization. Workers in Kenya’s Kibera slum—one of Africa’s largest—earn $1.32-$2 per hour doing work that companies claim is worth $30-45 per hour by American standards. The same pattern appears in the Philippines, India, Colombia, and Malaysia.

When workers in Kenya began organizing and filing lawsuits, Meta simply moved operations. After the Sama facility closed in 2023, Meta shifted to a new, undisclosed location in Ghana operated by Teleperformance, where conditions are reportedly even worse. Workers there describe suicide attempts, substance abuse, surveillance, and threats. Many are East Africans on temporary work permits who have been told never to reveal, even to family, that they moderate for Meta.

When TikTok moderators in Malaysia faced similar conditions—reportedly earning around $10 per day to watch “murder, suicide, pedophilia, pornographic content, accidents, cannibalism”—the company’s response in 2024 was to lay off 500-700 workers and replace them with AI.

Labor Organizing Emerges

Workers are fighting back. In April 2025, content moderators from nine countries launched the Global Trade Union Alliance of Content Moderators in Nairobi—the first-ever global union for this workforce. Their demands include:

  • Living wages reflecting the skilled, hazardous nature of the work

  • Limits on daily exposure to traumatic content

  • Elimination of unrealistic productivity quotas

  • 24/7 mental health support for at least two years after leaving the job

  • Workplace democracy and the right to unionize

  • Mental health training for supervisors

  • Migrant worker protections

Michał Szmagaj, a former Meta moderator in Poland, explained: “The pressure to review thousands of horrific videos each day—beheadings, child abuse, torture—takes a devastating toll on our mental health, but it’s not the only source of strain. Precarious contracts and constant surveillance at work add more stress. I’ve seen coworkers cry before coming to work—not just because of the content they’ll see, but because they feared another quality audit that could cost them their contract.”

In Kenya, a court ruled that Meta—not just contractor Sama—is the primary employer, setting a precedent that could disrupt the entire outsourcing model. As Amnesty International noted, this marks “the first time that Meta Platforms Inc will be significantly subjected to a court of law in the global south.”

The Race to Deploy

Speed appears to be the driving force, and workers absorb the cost. A 2025 Equidem study based on interviews with 113 content moderators and data labelers across Colombia, Kenya, and the Philippines found systematic violations of International Labour Organisation protections on fair wages, unionization rights, and safeguards against forced labor.

The European Union’s Digital Services Act (2024) and AI Act are creating new accountability frameworks, but enforcement remains weak. Companies continue the pattern: outsource to countries with minimal labor protections, move operations when workers organize, then replace humans with AI systems trained on the judgments of traumatized workers making decisions under impossible time pressure.

One rater summed it up with an old programming adage: “garbage in, garbage out.”

What Changed—and What Didn’t

The fundamental problem hasn’t been solved; it’s been automated. AI doesn’t eliminate the need for human judgment—it obscures it. ChatGPT exists because Kenyan workers labeled horrific content. Your social media feed is “clean” because someone in Manila spent their day watching beheadings. The chatbot that confidently answers your medical question was trained using feedback from raters without medical training, working under productivity quotas that rewarded speed over accuracy.

For workers who see error patterns every day, it’s enough to make them warn family members to avoid chatbots, put off upgrading to phones with built-in AI features, or withhold personal data entirely.

Some raters now give presentations in schools, hoping to demystify the technology and highlight its environmental and ethical footprint. Others simply tell friends to ask AI about something they deeply understand, just to witness how confidently wrong it can be.

Toward Accountability

None of this means abandoning AI altogether. It means acknowledging what the marketing glosses over: these systems are fragile, shaped by invisible workers, and deployed into public life faster than society can interrogate them.

Responsible use doesn’t start with trust—it starts with:

Curiosity: Who built this tool? Under what conditions? For whose benefit?

Specificity: Which companies? Name names. OpenAI, Meta, Google, Microsoft, Amazon, and TikTok all rely on this outsourced labor model.

Support for workers: The Global Trade Union Alliance provides a starting point. So do organizations like Foxglove and Equidem, which document these conditions.

Regulatory pressure: The EU has begun. Other jurisdictions must follow. Courts in Kenya are showing what accountability looks like.

Epistemic humility: These models don’t “know” things. They pattern-match based on training data labeled by workers we’ve never met, working under conditions we’ve rarely questioned.

The question isn’t whether AI is useful—it often is. The question is whether we’re willing to see the labor that makes it possible, and whether we’ll demand that labor be treated with dignity.