Resources for Trust and Safety

Trust & Safety Insights

AI-Generated CSAM (AIG-CSAM): Essential Considerations for Trust & Safety

While the benefits of generative AI are being realized more every day, like any powerful tool, its potential for misuse and harm is equally high and is already being exposed. Learn about the types of challenges related to AIG-CSAM and recommendations for mitigating them.


Case study: GIPHY x Safer

GIPHY uses Safer by Thorn to proactively detect CSAM

Since deploying Safer’s hash matching service in 2021, GIPHY has detected and deleted 400% more CSAM files than in previous years and has received only one confirmed user report of CSAM through its reporting tools.

The importance of child safety red teaming for your AI company  

Understand the potential misuses of gen AI and how to mitigate them

While the benefits of generative AI are being realized more every day, like any powerful tool, its potential for misuse and harm is equally high. With the right approach to AI development, some of these threats can be mitigated and better contained. Red teaming is one key mitigation.

How youth experience community platforms and how bad actors co-opt them

Understand the risks that youth and platforms face, and learn how to protect both

We asked 8,000 youth what they experience online: they describe grooming, sharing nudes, and insufficient safety tools. Bad actors are adept at exploiting online communities to target kids and share child sexual abuse material (CSAM). Learn how to safeguard your users and your platform.

Unmasking the perpetrators online: How bad actors use your platform to target kids

Find out who they are and the tactics they use

Today, bad actors who target children online are driven by a range of motivations, but they all exploit platforms and features in similar ways. Learn about some of the most common types of child sexual abuse perpetrators who may be lurking on your platform.

Case study: VSCO x Safer

VSCO Uses Safer to Protect Its Platform and Community of Creators

Since its founding, VSCO has believed in proactively protecting the wellbeing of its global community of 200 million users (or creators), who upload photos and videos to its platform every day. VSCO wanted to enhance its tooling to prevent the distribution of CSAM. For this, it needed an industry-leading detection tool.

Child safety policy improvement cycle

See the virtuous cycle that maintains a strong child safety policy

Child safety policies should be regularly updated to keep pace with emerging risks. Internal assessments are a key way to do just that, creating important feedback loops that support continuous improvement of your policy and ensuring it stays relevant over time.

2023 Safer impact report

In 2023, with the help of our customers, we made progress toward our goal of eliminating child sexual abuse material (CSAM) from the internet with Safer, our comprehensive CSAM detection solution.

10 considerations for creating effective safety and reporting tools for youth

Ensure your entire user experience supports youth as they navigate stressful situations

Our research shows that youth are far more likely to use platform tools to report online sexual abuse than they are to seek support from a trusted adult offline. 

In fact, reporting may be the only time a child signals that something bad is happening to them online, underscoring the critical need for simple and easy-to-use safety tools.