Dataset asset · Open Source Community · Language Detection · Toxicity Identification
SEACrowd/toxicity_200
Toxicity-200 is a vocabulary list for detecting toxic content in 200 languages. It includes common profanity, insulting terms, hate speech, pornographic terms, and body‑part terms related to sexual activity. Supported languages include ind, ace, bjn, bug, jav.
Source
Hugging Face
Created
Nov 28, 2025
Updated
Jun 24, 2024
Signals
176 views
Availability
Linked source ready
Overview
Dataset description and usage context
Toxicity‑200 Dataset Overview
Dataset Summary
Toxicity‑200 is a vocabulary list for detecting toxic words in 200 languages. The listed words and phrases are generally considered toxic because they represent:
- Frequently used profanity;
- Frequently used insults and hateful language, including language used to bully, demean, or disparage others;
- Pornographic terms;
- Body‑part terms related to sexual activity.
Supported Languages
- ind
- ace
- bjn
- bug
- jav
Dataset Usage
Using the datasets library
from datasets import load_dataset
dset = load_dataset("SEACrowd/toxicity_200", trust_remote_code=True)
Using the seacrowd library
import seacrowd as sc
# Load with default configuration
dset = sc.load_dataset("toxicity_200", schema="seacrowd")
# List all available subsets (configuration names)
print(sc.available_config_names("toxicity_200"))
# Load a specific configuration
dset = sc.load_dataset_by_config_name(config_name="<config_name>")
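Because Toxicity-200 is a plain vocabulary list, a common downstream use is simple lexicon matching. The sketch below flags toxic tokens in a text using a toy word list; the `toxic_words` entries are illustrative placeholders, not real data from the dataset, which in practice would come from a language subset loaded as shown above:

```python
import re

# Placeholder lexicon; in real use, populate this from a
# Toxicity-200 language subset loaded via datasets or seacrowd.
toxic_words = {"badword1", "badword2"}

def find_toxic_tokens(text: str, lexicon: set[str]) -> list[str]:
    """Return the lowercased word tokens of `text` that appear in `lexicon`."""
    tokens = re.findall(r"\w+", text.lower())
    return [tok for tok in tokens if tok in lexicon]

hits = find_toxic_tokens("This contains badword1 twice: BADWORD1!", toxic_words)
# hits now lists every lexicon match, preserving order and duplicates
```

Exact token matching keeps false positives low, but it misses obfuscated spellings; fuzzier matching would be a separate design choice.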
Dataset Homepage
https://github.com/facebookresearch/flores/blob/main/toxicity
Dataset Versions
- Source version: 1.0.0
- SEACrowd version: 2024.06.20
License
CC‑BY‑SA 4.0
Citation
If you use the Toxicity‑200 data loader in your work, please cite:
@article{nllb2022,
author = {NLLB Team, Marta R. Costa‑jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, Jeff Wang},
title = {No Language Left Behind: Scaling Human‑Centered Machine Translation},
year = {2022}
}
@article{lovenia2024seacrowd,
title={SEACrowd: A Multilingual Multimodal Data Hub and Benchmark Suite for Southeast Asian Languages},
author={Holy Lovenia and Rahmad Mahendra and Salsabil Maulana Akbar and Lester James V. Miranda and Jennifer Santoso and Elyanah Aco and Akhdan Fadhilah and Jonibek Mansurov and Joseph Marvin Imperial and Onno P. Kampman and Joel Ruben Antony Moniz and Muhammad Ravi Shulthan Habibi and Frederikus Hudi and Railey Montalan and Ryan Ignatius and Joanito Agili Lopo and William Nixon and Börje F. Karlsson and James Jaya and Ryandito Diandaru and Yuze Gao and Patrick Amadeus and Bin Wang and Jan Christian Blaise Cruz and Chenxi Whitehouse and Ivan Halim Parmonangan and Maria Khelli and Wenyu Zhang and Lucky Susanto and Reynard Adha Ryanda and Sonny Lazuardi Hermawan and Dan John Velasco and Muhammad Dehan Al Kautsar and Willy Fitra Hendria and Yasmin Moslem and Noah Flynn and Muhammad Farid Adilazuarda and Haochen Li and Johanes Lee and R. Damanhuri and Shuo Sun and Muhammad Reza Qorib and Amirbek Djanibekov and Wei Qi Leong and Quyet V. Do and Niklas Muennighoff and Tanrada Pansuwan and Ilham Firdausi Putra and Yan Xu and Ngee Chia Tai and Ayu Purwarianti and Sebastian Ruder and William Tjhi and Peerat Limkonchotiwat and Alham Fikri Aji and Sedrick Keh and Genta Indra Winata and Ruochen Zhang and Fajri Koto and Zheng‑Xin Yong and Samuel Cahyawijaya},
year={2024},
eprint={2406.10118},
journal={arXiv preprint arXiv:2406.10118}
}