---
license: mit
task_categories:
- text-classification
- token-classification
language:
- zh
- ja
- pt
- fr
- de
- ru
tags:
- toxicity
- hatespeech
pretty_name: Multi-Lingual Social Network Toxicity
size_categories:
- 100K<n<1M
---


# MLSNT: Multi-Lingual Social Network Toxicity Dataset

**MLSNT** is a multi-lingual dataset for toxicity detection created through an LLM-assisted label transfer pipeline. It enables efficient, scalable moderation across languages and platforms, and supports span-level and category-specific classification of toxic content.

This dataset is introduced in the following paper:

> **Unified Game Moderation: Soft-Prompting and LLM-Assisted Label Transfer for Resource-Efficient Toxicity Detection**  
> πŸ† Accepted at **KDD 2025**, Applied Data Science Track

---

## 🧩 Overview

MLSNT harmonizes 15 publicly available toxicity datasets across **7 languages** using GPT-4o-mini to create consistent binary and fine-grained labels. It is suitable for both training and evaluating toxicity classifiers in multi-lingual, real-world moderation systems.

---

## 🌍 Supported Languages

- πŸ‡«πŸ‡· French (`fr`)
- πŸ‡©πŸ‡ͺ German (`de`)
- πŸ‡΅πŸ‡Ή Portuguese (`pt`)
- πŸ‡·πŸ‡Ί Russian (`ru`)
- πŸ‡¨πŸ‡³ Simplified Chinese (`zh-cn`)
- πŸ‡ΉπŸ‡Ό Traditional Chinese (`zh-tw`)
- πŸ‡―πŸ‡΅ Japanese (`ja`)

---

## πŸ—οΈ Construction Method

1. **Source Datasets**  
   15 human-annotated datasets were gathered from `hatespeechdata.com` and peer-reviewed publications.

2. **LLM-Assisted Label Transfer**  
   GPT-4o-mini was prompted to re-annotate each instance into a unified label schema. Only examples where human and LLM annotations agreed were retained.

3. **Toxicity Categories**  
   Labels are fine-grained categories (e.g., `threat`, `hate`, `harassment`).
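The agreement-filtering step above can be sketched as follows. This is an illustrative sketch only; the field names (`human_label`, `llm_label`) are hypothetical and not part of the released schema.

```python
# Keep only instances where the original human annotation and the
# GPT-4o-mini re-annotation agree under the unified label schema.
def filter_by_agreement(rows):
    """Retain rows whose human and LLM labels match; discard the rest."""
    return [r for r in rows if r["human_label"] == r["llm_label"]]

rows = [
    {"text": "example A", "human_label": "hate", "llm_label": "hate"},
    {"text": "example B", "human_label": "toxic", "llm_label": "non_toxic"},
]
kept = filter_by_agreement(rows)
# Only "example A" survives the filter.
```

The discard rates in the statistics table below reflect how many instances fail this agreement check per source dataset.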

---

## πŸ“Š Dataset Statistics

| Language             | Total Samples | % Discarded | Toxic % (Processed) |
|----------------------|----------------|-------------|----------------------|
| German (HASOC, etc.) | ~13,800        | 28–69%      | 32–56%               |
| French (MLMA)        | ~3,200         | 20%         | 94%                  |
| Russian              | ~14,300        | ~40%        | 33–54%               |
| Portuguese           | ~21,000        | 20–44%      | 26–50%               |
| Japanese             | ~2,000         | 10–25%      | 17–45%               |
| Chinese (Simplified) | ~34,000        | 29–46%      | 48–61%               |
| Chinese (Traditional)| ~65,000        | 37%         | ~9%                  |

---

## πŸ’Ύ Format

Each row in the dataset includes:

- `full_text`: The original utterance or message.
- `start_string_index`: A list of start string indices (start positions of toxic spans).
- `end_string_index`: A list of end string indices (end positions of toxic spans).
- `category_id`: A list of toxic category IDs (integer values).
- `final_label`: A list of toxic category names (string values).
- `min_category_id`: The minimum toxic category ID in the row (used as the primary label).
- `match_id`: A unique identifier composed of the original dataset name and a row-level ID.
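A minimal sketch of reconstructing toxic spans from the parallel index lists above. The sample row is fabricated for illustration, and end indices are assumed to be exclusive (Python slice convention); verify this against the actual data.

```python
# Fabricated example row following the field layout described above.
row = {
    "full_text": "you are an idiot and I will find you",
    "start_string_index": [11, 21],   # start positions of toxic spans
    "end_string_index": [16, 36],     # end positions (assumed exclusive)
    "category_id": [7, 1],            # Insults, Threats (Life Threatening)
    "final_label": ["Insults", "Threats (Life Threatening)"],
    "min_category_id": 1,             # primary label for the row
    "match_id": "source_dataset:42",
}

# Zip the parallel lists to recover (span text, category) pairs.
spans = [
    (row["full_text"][s:e], cat)
    for s, e, cat in zip(
        row["start_string_index"],
        row["end_string_index"],
        row["category_id"],
    )
]
# spans == [("idiot", 7), ("I will find you", 1)]
```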

---

## πŸ—‚οΈ Category ID Mapping

| ID  | Friendly Name                               |
|-----|---------------------------------------------|
| 0   | Non Toxic                                   |
| 1   | Threats (Life Threatening)                  |
| 2   | Minor Endangerment                          |
| 3   | Threats (Non-Life Threatening)              |
| 4   | Hate                                        |
| 5   | Sexual Content / Harassment                |
| 6   | Extremism                                   |
| 7   | Insults                                     |
| 8   | Controversial / Potentially Toxic Topic     |
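The mapping above can be used as a lookup table when deriving the row-level primary label. Note that treating the lower ID as the more severe category is an assumption consistent with the table's ordering; the card only states that `min_category_id` is used as the primary label.

```python
# Category ID -> friendly name, per the mapping table above.
CATEGORY_NAMES = {
    0: "Non Toxic",
    1: "Threats (Life Threatening)",
    2: "Minor Endangerment",
    3: "Threats (Non-Life Threatening)",
    4: "Hate",
    5: "Sexual Content / Harassment",
    6: "Extremism",
    7: "Insults",
    8: "Controversial / Potentially Toxic Topic",
}

def primary_label(category_ids):
    """Resolve a row's primary label from its list of category IDs,
    mirroring the min_category_id field."""
    return CATEGORY_NAMES[min(category_ids)]

label = primary_label([7, 4, 1])
# label == "Threats (Life Threatening)"
```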

---

## πŸ”¬ Applications

- Fine-tuning multi-lingual moderation systems
- Cross-lingual toxicity benchmarking
- Training span-level and category-specific toxicity detectors
- Studying LLM label transfer reliability and agreement filtering

---

## πŸ™ Acknowledgments

We thank **Ubisoft La Forge**, **Ubisoft Montreal**, and the **Ubisoft Data Office** for their technical support and valuable feedback throughout this project.

This work was supported by **Ubisoft**, the **CIFAR AI Chair Program**, and the **Natural Sciences and Engineering Research Council of Canada (NSERC)**.

## πŸ“œ Citation

If you use MLSNT in academic work, please cite:

```bibtex
@inproceedings{yang2025mlsnt,
  title={Unified Game Moderation: Soft-Prompting and LLM-Assisted Label Transfer for Resource-Efficient Toxicity Detection},
  author={Zachary Yang and Domenico Tullo and Reihaneh Rabbany},
  booktitle={Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD)},
  year={2025}
}
```