The Hidden Cost of Meta's Smart Glasses: How Kenyan Workers Bear the Burden of Intimate Content Review
An investigation reveals how Meta outsources the review of sensitive footage from Ray-Ban smart glasses to Kenyan content moderators, exposing workers to psychological trauma and raising questions about corporate accountability in the global content moderation supply chain.

The Dark Side of Wearable Technology
As Meta expands its Ray-Ban smart glasses ecosystem, a critical question lurks beneath the innovation narrative: who pays the psychological price for moderating intimate footage captured by millions of users? A growing body of evidence suggests the answer lies in Kenya, where content moderators are tasked with reviewing sensitive material—including private moments—for a fraction of Western wages.
This practice represents a troubling evolution in Meta's outsourcing model. While the company has faced documented criticism over content moderator working conditions in Kenya, the addition of smart glasses footage introduces a new dimension of harm—workers are now exposed to intimate, often non-consensual recordings of private moments.
The Scale of the Problem
Meta's Ray-Ban smart glasses let users capture video hands-free and largely unnoticed. While the company points to safety features such as a visible recording indicator, the moderation burden still falls on human workers who must review flagged content. According to investigations into Meta's content moderation practices, Kenyan moderators have reported severe psychological trauma, with more than 140 workers diagnosed with PTSD in earlier cases.
The smart glasses footage adds complexity:
- Volume: Frictionless, hands-free capture generates far more content to review
- Intimacy: Footage captures private moments—bedrooms, bathrooms, intimate relationships
- Consent issues: Bystanders may not know they are being recorded, raising privacy and consent concerns
- Psychological toll: Moderators experience repeated exposure to sensitive material without adequate mental health support
Onboarding and Training Gaps
Workers hired to review this content receive minimal preparation for its psychological demands. Training typically focuses on policy compliance rather than trauma-informed practice, and court documents from Kenyan legal challenges to Meta describe onboarding that leaves workers unprepared for graphic or intimate material.
Key gaps include:
- Limited mental health screening before assignment
- Insufficient break protocols during shifts
- Inadequate counseling resources post-exposure
- No specialized training for handling intimate footage
The Pricing of Human Suffering
Meta's cost-cutting model depends on wage arbitrage. Kenyan moderators earn approximately $0.50–$2 per hour, compared with $15–$25 per hour for equivalent work in the United States. This pricing structure creates perverse incentives: the company maximizes profit by minimizing worker protections in low-income countries.
The financial model is unsustainable from a human rights perspective: workers absorb the psychological costs of content moderation while receiving the lowest compensation in Meta's global supply chain.
What Practitioners Need to Know
For organizations implementing content moderation at scale, this case study offers critical lessons:
Benefits of ethical moderation:
- Reduced legal liability and reputational risk
- Higher worker retention and productivity
- Compliance with emerging labor standards
Integration considerations:
- Implement trauma-informed moderation protocols
- Establish mental health support as a core service
- Use AI to pre-filter sensitive content before human review (a minimal routing sketch follows this list)
- Conduct regular audits of working conditions
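To make the pre-filtering point concrete, here is a minimal routing sketch in Python. Everything in it is an illustrative assumption rather than Meta's actual pipeline: the threshold values, the daily exposure cap, and the `sensitivity` score, which is presumed to come from an upstream classifier.

```python
from dataclasses import dataclass

# Illustrative assumptions only; none of these values reflect a real system.
AUTO_ACTION_THRESHOLD = 0.95   # confident enough to act without human review
HUMAN_REVIEW_THRESHOLD = 0.60  # below this, content is auto-cleared
DAILY_EXPOSURE_CAP = 50        # max sensitive items per moderator per day

@dataclass
class FlaggedClip:
    clip_id: str
    sensitivity: float  # 0.0-1.0 score from an upstream ML classifier (assumed)

@dataclass
class Moderator:
    moderator_id: str
    items_seen_today: int = 0

def route_clip(clip: FlaggedClip, pool: list[Moderator]) -> str:
    """Route a flagged clip so humans see as little raw footage as possible."""
    if clip.sensitivity >= AUTO_ACTION_THRESHOLD:
        return "auto_removed"   # machine-actioned; no human exposure
    if clip.sensitivity < HUMAN_REVIEW_THRESHOLD:
        return "auto_cleared"   # low risk; no human exposure
    # Only the ambiguous middle band reaches a person, within an exposure cap.
    for mod in pool:
        if mod.items_seen_today < DAILY_EXPOSURE_CAP:
            mod.items_seen_today += 1
            return f"queued_for:{mod.moderator_id}"
    return "deferred"           # every moderator is at cap; hold the item

if __name__ == "__main__":
    pool = [Moderator("mod-a"), Moderator("mod-b")]
    for clip in (FlaggedClip("c1", 0.97), FlaggedClip("c2", 0.70), FlaggedClip("c3", 0.30)):
        print(clip.clip_id, "->", route_clip(clip, pool))
```

The design choice worth copying is the exposure cap: automation handles the confident extremes, and the volume of ambiguous material any one person sees is bounded by policy rather than by queue depth.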
Pricing transparency:
- Factor worker welfare into cost models (a worked example follows this list)
- Invest in local mental health infrastructure
- Establish living-wage baselines regardless of geography
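As a worked example of welfare-inclusive pricing, the sketch below adds explicit line items for a living-wage baseline, paid recovery time, and clinical support. Every figure is a placeholder assumption chosen for illustration, not reported data.

```python
# Placeholder assumptions (USD); real values would come from local benchmarks.
LIVING_WAGE_PER_HOUR = 6.00     # assumed local living-wage baseline
COUNSELING_PER_HOUR = 1.50      # assumed cost of on-call clinical support
PAID_RECOVERY_FRACTION = 0.25   # assumed paid non-exposure time per exposure hour
ITEMS_PER_HOUR = 40             # assumed review throughput

def welfare_inclusive_cost_per_item() -> float:
    """Fully loaded moderation cost per reviewed item, welfare included."""
    hourly = (
        LIVING_WAGE_PER_HOUR * (1 + PAID_RECOVERY_FRACTION)  # wage plus paid recovery time
        + COUNSELING_PER_HOUR                                # mental health support
    )
    return hourly / ITEMS_PER_HOUR

if __name__ == "__main__":
    wage_only = 1.25 / ITEMS_PER_HOUR  # midpoint of the $0.50-$2/hour model above
    print(f"Wage-only cost per item:         ${wage_only:.3f}")
    print(f"Welfare-inclusive cost per item: ${welfare_inclusive_cost_per_item():.3f}")
```

Even under these rough assumptions, pricing welfare in raises the per-item cost by roughly twenty cents, which is the point: the savings from the current model are small relative to the harm it externalizes.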
The Path Forward
The smart glasses moderation crisis demands systemic change. Meta and similar platforms must acknowledge that outsourcing intimate content review to underpaid workers in developing nations is neither sustainable nor ethical. African content moderators continue to push Big Tech for accountability, and their efforts signal a broader reckoning within the industry.
Organizations building AI-powered moderation systems should learn from these failures: invest in worker dignity, transparent supply chains, and genuine mental health support. The cost of cutting corners on human welfare inevitably surfaces as legal, reputational, and operational risk.


