# NonverbalTTS Dataset 🎵🗣️
NonverbalTTS is a 17-hour open-access English speech corpus with aligned text annotations for nonverbal vocalizations (NVs) and emotional categories, designed to advance expressive text-to-speech (TTS) research.
## Key Features ✨
- 17 hours of high-quality speech data
- 10 NV types: Breathing, laughter, sighing, sneezing, coughing, throat clearing, groaning, grunting, snoring, sniffing
- 8 emotion categories: Angry, disgusted, fearful, happy, neutral, sad, surprised, other
- Diverse speakers: 2296 speakers (60% male, 40% female)
- Multi-source: Derived from VoxCeleb and Expresso corpora
- Rich metadata: Emotion labels, NV annotations, speaker IDs, audio quality metrics
## Metadata Schema (`metadata.csv`)
| Column | Description | Example |
|---|---|---|
| `index` | Unique sample ID | `ex01_sad_00265` |
| `file_name` | Audio file path | `wavs/ex01_sad_00265.wav` |
| `Emotion` | Emotion label | `sad` |
| `Initial text` | Raw transcription | "So, Mom, 🌬️ how've you been?" |
| `Annotator response {1,2,3}` | Refined transcriptions | "So, Mom, how've you been?" |
| `Result` | Final fused transcription | "So, Mom, 🌬️ how've you been?" |
| `dnsmos` | Audio quality score (1–5) | `3.936982` |
| `duration` | Audio length (seconds) | `3.6338125` |
| `speaker_id` | Speaker identifier | `ex01` |
| `data_name` | Source corpus | `Expresso` |
| `gender` | Speaker gender | `m` |
NV symbols: 🌬️ = breathing, 😂 = laughter, etc. (see the Annotation Guidelines for the full set)
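Because the NV symbols are embedded inline in the transcriptions, downstream code usually needs to separate them from the spoken words. A minimal sketch (the `NV_SYMBOLS` tuple here is illustrative and incomplete; the dataset's Annotation Guidelines define the full inventory):

```python
# Split a fused transcription into spoken text and inline NV events.
# NV_SYMBOLS is an illustrative subset, not the dataset's full inventory.
NV_SYMBOLS = ("🌬️",)  # e.g. breathing

def split_nv(text: str, symbols=NV_SYMBOLS):
    """Return (clean_text, nv_events) for a transcription with inline NV symbols."""
    events = []
    for sym in symbols:
        events.extend([sym] * text.count(sym))
        text = text.replace(sym, " ")
    clean = " ".join(text.split())  # collapse whitespace left behind
    return clean, events

clean, events = split_nv("So, Mom, 🌬️ how've you been?")
```

The same pass can be inverted at synthesis time: a TTS front end keeps the symbols as extra tokens rather than stripping them.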
## Loading the Dataset 💻

```python
from datasets import load_dataset

dataset = load_dataset("deepvk/NonverbalTTS")
```
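Since the metadata columns (`dnsmos`, `duration`, ...) are ordinary features, samples can be screened with a plain predicate passed to the `datasets` library's `filter`. A sketch of a quality filter; the thresholds are arbitrary examples, not values recommended by the dataset:

```python
# Keep only clean, reasonably short clips. The thresholds below are
# illustrative; tune them for your task.
def is_usable(example, min_dnsmos=3.5, max_duration=15.0):
    return example["dnsmos"] >= min_dnsmos and example["duration"] <= max_duration

# e.g. filtered = dataset.filter(is_usable)  # split name depends on the release
row = {"dnsmos": 3.936982, "duration": 3.6338125}  # values from the schema example
usable = is_usable(row)
```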
## Annotation Pipeline 🔧
### Automatic Detection
- NV detection using BEATs
- Emotion classification with emotion2vec+
- ASR transcription via Canary model
### Human Validation
- 3 annotators per sample
- Filtered non-English/multi-speaker clips
- NV/emotion validation and refinement
### Fusion Algorithm
- Majority voting for final transcriptions
- Pyalign-based sequence alignment
- Multi-annotator hypothesis merging
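Under the simplifying assumption that the annotator responses tokenize to the same length, majority voting reduces to a per-position `Counter`; the actual pipeline first aligns differing hypotheses with pyalign, which this sketch omits:

```python
from collections import Counter

def fuse_transcriptions(responses):
    """Per-token majority vote over equal-length annotator responses.

    Simplified: assumes the responses already tokenize to the same length.
    The real pipeline aligns unequal hypotheses (pyalign) before voting.
    """
    token_rows = [r.split() for r in responses]
    assert len({len(row) for row in token_rows}) == 1, "responses must align"
    fused = [Counter(col).most_common(1)[0][0] for col in zip(*token_rows)]
    return " ".join(fused)

result = fuse_transcriptions([
    "so mom how have you been",
    "so mom how how you been",
    "so mum how have you been",
])
```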
## Benchmark Results
Fine-tuning CosyVoice-300M on NonverbalTTS achieves parity with state-of-the-art proprietary systems:
| Metric | NVTTS | CosyVoice2 |
|---|---|---|
| Speaker Similarity | 0.89 | 0.85 |
| NV Jaccard | 0.80 | 0.78 |
| Human Preference | 33.4% | 35.4% |
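NV Jaccard scores how well the synthesized NV events overlap with the reference ones. The paper's exact formulation (e.g. per-type vs. per-occurrence counting) may differ; this is the textbook set version:

```python
def nv_jaccard(reference, hypothesis):
    """Jaccard index between two sets of NV event types (1.0 = identical)."""
    ref, hyp = set(reference), set(hypothesis)
    if not ref and not hyp:
        return 1.0  # neither side contains NVs: treat as a perfect match
    return len(ref & hyp) / len(ref | hyp)

score = nv_jaccard({"breathing", "laughter"}, {"breathing"})
```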
## Use Cases 💡
- Training expressive TTS models
- Zero-shot NV synthesis
- Emotion-aware speech generation
- Prosody modeling research
## License
- Annotations: CC BY-NC-SA 4.0
- Audio: Adheres to original source licenses (VoxCeleb, Expresso)
## Citation

```bibtex
@dataset{nonverbaltts2024,
  author    = {Borisov, Maksim and Spirin, Egor and Dyatlova, Darya},
  title     = {NonverbalTTS: A Public English Corpus of Text-Aligned Nonverbal Vocalizations with Emotion Annotations for Text-to-Speech},
  month     = apr,
  year      = 2025,
  publisher = {Zenodo},
  version   = {1.0},
  doi       = {10.5281/zenodo.15274617},
  url       = {https://zenodo.org/records/15274617}
}
```