---
license: mit
task_categories:
- text-classification
- token-classification
language:
- zh
- ja
- pt
- fr
- de
- ru
tags:
- toxicity
- hatespeech
pretty_name: Multi-Lingual Social Network Toxicity
size_categories:
- 100K<n<1M
---
# MLSNT: Multi-Lingual Social Network Toxicity Dataset
MLSNT is a multi-lingual dataset for toxicity detection created through a large language model-assisted label transfer pipeline. It enables efficient and scalable moderation across languages and platforms, and is built to support span-level and category-specific classification for toxic content.
This dataset is introduced in the following paper:

**Unified Game Moderation: Soft-Prompting and LLM-Assisted Label Transfer for Resource-Efficient Toxicity Detection**
Accepted at KDD 2025, Applied Data Science Track
## Overview
MLSNT harmonizes 15 publicly available toxicity datasets across 7 languages using GPT-4o-mini to create consistent binary and fine-grained labels. It is suitable for both training and evaluating toxicity classifiers in multi-lingual, real-world moderation systems.
## Supported Languages

- French (`fr`)
- German (`de`)
- Portuguese (`pt`)
- Russian (`ru`)
- Simplified Chinese (`zh-cn`)
- Traditional Chinese (`zh-tw`)
- Japanese (`ja`)
## Construction Method

1. **Source Datasets**: 15 human-annotated datasets were gathered from [hatespeechdata.com](https://hatespeechdata.com) and peer-reviewed publications.
2. **LLM-Assisted Label Transfer**: GPT-4o-mini was prompted to re-annotate each instance into a unified label schema. Only examples where the human and LLM annotations agreed were retained.
3. **Toxicity Categories**: Labels are fine-grained categories (e.g., `threat`, `hate`, `harassment`).
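The agreement-filtering step above can be sketched as follows. This is a minimal illustration only: the actual GPT-4o-mini prompt, the unified label schema, and the per-dataset label mappings are not reproduced here, and the field names `human_label` and `llm_label` are hypothetical.

```python
def agreement_filter(examples):
    """Keep only examples where the original human annotation and the
    LLM-transferred annotation agree; disagreements are discarded."""
    return [ex for ex in examples if ex["human_label"] == ex["llm_label"]]

# Hypothetical mini-batch: the agreeing example is kept, the disagreeing one dropped.
batch = [
    {"text": "have a nice day", "human_label": "non-toxic", "llm_label": "non-toxic"},
    {"text": "ambiguous remark", "human_label": "toxic", "llm_label": "non-toxic"},
]
kept = agreement_filter(batch)
```

The "% Discarded" column in the statistics below reflects exactly this filter: the fraction of each source dataset where the two annotation sources disagreed.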
## Dataset Statistics

| Language | Total Samples | % Discarded | Toxic % (Processed) |
|---|---|---|---|
| German (HASOC, etc.) | ~13,800 | 28–69% | 32–56% |
| French (MLMA) | ~3,200 | 20% | 94% |
| Russian | ~14,300 | ~40% | 33–54% |
| Portuguese | ~21,000 | 20–44% | 26–50% |
| Japanese | ~2,000 | 10–25% | 17–45% |
| Chinese (Simplified) | ~34,000 | 29–46% | 48–61% |
| Chinese (Traditional) | ~65,000 | 37% | ~9% |
## Format

Each row in the dataset includes:

- `full_text`: The original utterance or message.
- `start_string_index`: A list of start string indices (start positions of toxic spans).
- `end_string_index`: A list of end string indices (end positions of toxic spans).
- `category_id`: A list of toxic category IDs (integer values).
- `final_label`: A list of toxic category names (string values).
- `min_category_id`: The minimum toxic category ID in the row (used as the primary label).
- `match_id`: A unique identifier composed of the original dataset name and a row-level ID.
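The parallel index lists can be used to recover the annotated toxic spans directly from `full_text`. The row below is a fabricated example for illustration, not a real dataset sample:

```python
# Hypothetical row following the schema above (not an actual MLSNT sample).
row = {
    "full_text": "You are an idiot and I hate you",
    "start_string_index": [11, 23],
    "end_string_index": [16, 27],
    "category_id": [7, 4],
    "final_label": ["Insults", "Hate"],
    "min_category_id": 4,
    "match_id": "example_dataset_00042",
}

def extract_spans(row):
    """Slice each toxic span out of full_text using the parallel
    start/end index lists."""
    return [
        row["full_text"][start:end]
        for start, end in zip(row["start_string_index"], row["end_string_index"])
    ]

spans = extract_spans(row)  # ["idiot", "hate"]
```

Note that `min_category_id` is simply the minimum of `category_id`, so the most severe category (lowest ID) serves as the row's primary label.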
## Category ID Mapping

| ID | Friendly Name |
|---|---|
| 0 | Non Toxic |
| 1 | Threats (Life Threatening) |
| 2 | Minor Endangerment |
| 3 | Threats (Non-Life Threatening) |
| 4 | Hate |
| 5 | Sexual Content / Harassment |
| 6 | Extremism |
| 7 | Insults |
| 8 | Controversial / Potentially Toxic Topic |
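For binary toxicity classification, the mapping above can be collapsed so that ID 0 is non-toxic and every other ID is toxic. A minimal sketch (the dictionary restates the table; `to_binary_label` is a hypothetical helper, not part of the dataset):

```python
# Category IDs as defined in the mapping table above.
CATEGORY_NAMES = {
    0: "Non Toxic",
    1: "Threats (Life Threatening)",
    2: "Minor Endangerment",
    3: "Threats (Non-Life Threatening)",
    4: "Hate",
    5: "Sexual Content / Harassment",
    6: "Extremism",
    7: "Insults",
    8: "Controversial / Potentially Toxic Topic",
}

def to_binary_label(min_category_id):
    """Collapse a fine-grained category ID to a binary label:
    0 = non-toxic, 1 = toxic (any category other than 0)."""
    return 0 if min_category_id == 0 else 1
```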
## Applications
- Fine-tuning multi-lingual moderation systems
- Cross-lingual toxicity benchmarking
- Training span-level and category-specific toxicity detectors
- Studying LLM label transfer reliability and agreement filtering
## Acknowledgments
We thank Ubisoft La Forge, Ubisoft Montreal, and the Ubisoft Data Office for their technical support and valuable feedback throughout this project.
This work was supported by Ubisoft, the CIFAR AI Chair Program, and the Natural Sciences and Engineering Research Council of Canada (NSERC).
## Citation
If you use MLSNT in academic work, please cite:
```bibtex
@inproceedings{yang2025mlsnt,
  title={Unified Game Moderation: Soft-Prompting and LLM-Assisted Label Transfer for Resource-Efficient Toxicity Detection},
  author={Zachary Yang and Domenico Tullo and Reihaneh Rabbany},
  booktitle={Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD)},
  year={2025}
}
```