Closing Language Gaps in Hate Speech Detection
Global resources for a global problem
Hate speech hurts in any language.
Hate speech is a global phenomenon. But most hate speech research focuses on English-language content, which makes it difficult to build effective detection models for other languages. Even the social media giants have clear language gaps in their content moderation systems. The result? Billions of non-English speakers around the world are less protected against online hate, and more at risk of serious harm.
With the new functional tests in Multilingual HateCheck (MHC), we took a small but important step towards closing some of these language gaps.
Functional testing is a powerful method for finding granular weaknesses in hate speech detection models. The original HateCheck paper introduced functional tests for English hate speech detection models. MHC extends these tests to ten more languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish. That is more languages than any other hate speech dataset to date!
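To make that concrete, here is a rough sketch of the structure of such functional tests: each test case is a short, hand-crafted example with a gold label, grouped under a functionality that probes one specific model capability. The functionality names and test cases below are illustrative placeholders, not actual MHC entries.

```python
# Illustrative functional test cases (not actual MHC entries): each
# functionality targets one narrow capability with a known gold label.
functional_tests = [
    {"functionality": "direct_derogation",     # explicit hate against a protected group
     "test_case": "I hate [GROUP].",
     "label_gold": "hateful"},
    {"functionality": "negated_positive",      # hate expressed through negation
     "test_case": "[GROUP] are not wonderful people at all.",
     "label_gold": "hateful"},
    {"functionality": "counter_speech_quote",  # quoting hate in order to reject it
     "test_case": "Saying 'I hate [GROUP]' is never acceptable.",
     "label_gold": "non-hateful"},
]
```

A model that scores well on aggregate benchmarks can still fail an entire functionality, for example by flagging all counter-speech as hateful, and that is exactly the kind of granular weakness these tests are designed to surface.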
Hate speech detection for hundreds more languages.
To keep people safe from hate and abuse across the world, we need high-quality open-source datasets in many more languages, especially those that have traditionally been under-resourced. We also need more effective approaches to multilingual and cross-lingual language modelling that work well even in low-resource settings. If you are working on either of these, start using MHC today to evaluate the quality of your models.
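As a starting point, here is a minimal sketch of what an MHC-style evaluation loop might look like. It assumes you have one language's test cases as a CSV with `functionality`, `test_case` and `label_gold` columns, and a Hugging Face text-classification model that predicts "hateful" versus "non-hateful"; the file path, column names and model name are placeholders to adapt to your own setup.

```python
from collections import defaultdict

import pandas as pd
from transformers import pipeline

# Hypothetical path to one language's MHC test suite; columns assumed to be
# "functionality", "test_case" and "label_gold" ("hateful" / "non-hateful").
mhc = pd.read_csv("mhc_german.csv")

# Any Hugging Face text-classification model can be plugged in here;
# the model name below is only a placeholder.
classifier = pipeline("text-classification", model="your-org/your-hate-speech-model")

# Run the model on every test case and track accuracy per functionality,
# which is where functional testing exposes granular weaknesses.
per_functionality = defaultdict(lambda: {"correct": 0, "total": 0})
for row in mhc.itertuples():
    pred = classifier(row.test_case)[0]["label"]
    # Map your model's label scheme onto MHC's gold labels as needed.
    is_correct = pred.lower() == row.label_gold.lower()
    per_functionality[row.functionality]["correct"] += is_correct
    per_functionality[row.functionality]["total"] += 1

for func, counts in sorted(per_functionality.items()):
    accuracy = counts["correct"] / counts["total"]
    print(f"{func}: {accuracy:.2%} ({counts['total']} cases)")
```

Reporting accuracy per functionality rather than a single aggregate score is the point of the exercise: it shows exactly where a model breaks down, not just how often.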
Get In Touch!
If you want to get involved with expanding HateCheck even further, please get in touch; we'd love to hear from you.