Resources for Trust and Safety
Trust & Safety Insights
AI-Generated CSAM (AIG-CSAM): Essential Considerations for Trust & Safety
While the benefits of generative AI are being realized more every day, like any powerful tool, it carries an equally high potential for misuse and harm, and that threat is already emerging. Learn about the challenges related to AIG-CSAM and recommendations for mitigating them.
Case study: GIPHY x Safer
Since deploying Safer’s hash matching service in 2021, GIPHY has detected and deleted 400% more CSAM files than in previous years and has received only a single confirmed user report of CSAM through its reporting tools.
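The details of Safer’s hash matching service are not shown here; as a rough illustration of the general technique, the sketch below checks an uploaded file’s cryptographic digest against a set of hashes of known abuse material supplied by a trusted source. Production systems like Safer typically also use perceptual hashing to catch near-duplicate images, which this example does not attempt; the `KNOWN_HASHES` set and function names are hypothetical.

```python
import hashlib

# Hypothetical stand-in for a list of hashes of verified CSAM
# supplied by a trusted hash-sharing source (illustrative only).
KNOWN_HASHES: set[str] = set()


def file_digest(path: str) -> str:
    """Compute the SHA-256 digest of a file, streamed to limit memory use."""
    sha256 = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            sha256.update(chunk)
    return sha256.hexdigest()


def is_known_match(path: str) -> bool:
    """Return True if the file's digest appears in the known-hash set."""
    return file_digest(path) in KNOWN_HASHES
```

In practice, a match like this would route the file to a moderation queue and reporting workflow rather than trigger an automatic action on its own.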
The importance of child safety red teaming for your AI company
While the benefits of generative AI are being realized more every day, like any powerful tool, it carries an equally high potential for misuse and harm. With the right approach to AI development, some of these threats can be mitigated and better contained. Red teaming is one key mitigation.
How youth experience community platforms and bad actors co-opt them
We asked 8,000 youth what they experience online: they describe grooming, sharing nudes, and insufficient safety tools. Bad actors are adept at exploiting online communities to target kids and share child sexual abuse material (CSAM). Learn how to safeguard your users and your platform.
Unmasking the perpetrators online: How bad actors use your platform to target kids
Today, bad actors who target children online are driven by a range of motivations, but they all exploit platforms and features in similar ways. Learn about some of the most common types of child sexual perpetrators who may be lurking on your platform.
Case study: VSCO x Safer
Since its founding, VSCO has believed in proactively protecting the wellbeing of its global community of 200 million users, known as creators, who upload photos and videos to its platform every day. VSCO wanted to enhance its tooling to prevent the distribution of CSAM. To do that, it needed an industry-leading detection tool.
Child safety policy improvement cycle
Child safety policies should be regularly updated to keep pace with emerging risks. Internal assessments are a key way to do just that, creating important feedback loops that support continuous improvement of your policy and ensuring it stays relevant over time.
2023 Safer impact report
In 2023, with the help of our customers, we made progress toward our goal of eliminating child sexual abuse material (CSAM) from the internet with Safer, our comprehensive CSAM detection solution.
10 considerations for creating effective safety and reporting tools for youth
Our research shows that youth are far more likely to use platform tools to report online sexual abuse than they are to seek support from a trusted adult offline. In fact, reporting may be the only time a child signals that something bad is happening to them online, underscoring the critical need for simple and easy-to-use safety tools.