Thorn Webinar

Human in the Loop: Building Ethical AI for Content Moderation

Wednesday, October 15
2:30 p.m. ET | 11:30 a.m. PT

Join us for an in-depth conversation with trust & safety experts on the realities of human-centered AI content moderation. The discussion builds on a panel from TrustCon 2025, focusing on how to strike a balance between automation and human judgment to create safer, more ethical digital environments.

 

 

Featured Speakers


Dr. Rebecca Portnoff
VP, Data Science & AI
Thorn


Dave Willner
Co-Founder
Zentropi


Mike Pappas
CEO
Modulate


Alice Hunsberger
Head of Trust & Safety
Musubi

This insightful conversation on AI content moderation will explore:

  • Model Fairness: how humans identify bias against certain groups, languages, or cultural contexts that automated metrics might miss.
  • Catching Emerging Harms: how human reviewers spot novel threats before AI systems are trained to recognize them.
  • Preserving Domain Expertise: how human moderators bring contextual understanding, cultural nuance, and subject matter expertise that AI systems often lack.
  • Empowering Human Moderators: how to augment, rather than replace, human judgment.
  • Protecting User Communities: how to combine speed (AI handles volume) with accuracy (humans handle complexity) in content moderation.

 

Take part in this discussion

Join us for this essential conversation about keeping humans in the loop for content moderation.

Register now