AI Solutions for Online Safety

HateCheck.ai was created by a team of researchers at Rewire, a tech startup building socially responsible AI for online safety. The project was supported by Google’s Jigsaw team.

About The Creators Of HateCheck

HateCheck has been developed by AI researchers and online safety experts working on toxic language detection. It has been published in three peer-reviewed research papers at top NLP conferences.

‘HateCheck: Functional Tests for Hate Speech Detection Models’ was published at ACL 2021. It was co-authored by Paul Röttger, Bertie Vidgen, Dong Nguyen, Zeerak Waseem, Helen Margetts, and Janet Pierrehumbert.

‘Hatemoji: A Test Suite and Adversarially-Generated Dataset for Benchmarking and Detecting Emoji-based Hate’ is forthcoming at NAACL 2022. It was co-authored by Hannah Rose Kirk, Bertie Vidgen, Paul Röttger, Tristan Thrush, and Scott A. Hale.

‘Multilingual HateCheck’ is forthcoming at the Workshop on Online Abuse and Harms at NAACL 2022. It was co-authored by Paul Röttger, Haitham Seelawi, Debora Nozza, Zeerak Talat, and Bertie Vidgen.

Get In Touch

We’d love to hear how HateCheck is working for you.
Send us your feedback and questions!