
NonverbalTTS Dataset πŸŽ΅πŸ—£οΈ


NonverbalTTS is a 17-hour open-access English speech corpus with aligned text annotations for nonverbal vocalizations (NVs) and emotional categories, designed to advance expressive text-to-speech (TTS) research.

Key Features ✨

  • 17 hours of high-quality speech data
  • 10 NV types: Breathing, laughter, sighing, sneezing, coughing, throat clearing, groaning, grunting, snoring, sniffing
  • 8 emotion categories: Angry, disgusted, fearful, happy, neutral, sad, surprised, other
  • Diverse speakers: 2296 speakers (60% male, 40% female)
  • Multi-source: Derived from the VoxCeleb and Expresso corpora
  • Rich metadata: Emotion labels, NV annotations, speaker IDs, audio quality metrics

Metadata Schema (metadata.csv) πŸ“‹

| Column | Description | Example |
|--------|-------------|---------|
| index | Unique sample ID | ex01_sad_00265 |
| file_name | Audio file path | wavs/ex01_sad_00265.wav |
| Emotion | Emotion label | sad |
| Initial text | Raw transcription | "So, Mom, 🌬️ how've you been?" |
| Annotator response {1,2,3} | Refined transcriptions | "So, Mom, how've you been?" |
| Result | Final fused transcription | "So, Mom, 🌬️ how've you been?" |
| dnsmos | Audio quality score (1-5) | 3.936982 |
| duration | Audio length (seconds) | 3.6338125 |
| speaker_id | Speaker identifier | ex01 |
| data_name | Source corpus | Expresso |
| gender | Speaker gender | m |

NV Symbols: 🌬️=Breath, πŸ˜‚=Laughter, etc. (See Annotation Guidelines)
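
The metadata can also be explored directly with pandas. A minimal sketch, assuming metadata.csv sits at the repository root with the columns above; the dnsmos and duration thresholds are illustrative, not from the dataset:

import pandas as pd

# Load the metadata table described above
meta = pd.read_csv("metadata.csv")

# Keep clean, reasonably long clips (thresholds are illustrative only)
clean = meta[(meta["dnsmos"] >= 3.5) & (meta["duration"] >= 1.0)]
print(clean[["file_name", "Emotion", "Result"]].head())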

Loading the Dataset πŸ’»

from datasets import load_dataset

# Downloads and caches the corpus from the Hugging Face Hub
dataset = load_dataset("deepvk/NonverbalTTS")
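
Once loaded, individual rows can be inspected. A small sketch; the "train" split and the "audio" field name are assumptions, so check the returned object for the actual splits and columns:

# Split name "train" is an assumption; print(dataset) shows the real splits
sample = dataset["train"][0]
print(sample.keys())     # lists the actual column names
print(sample["audio"])   # "audio" column name is an assumption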

Annotation Pipeline πŸ”§

  1. Automatic Detection

    • NV detection using BEATs
    • Emotion classification with emotion2vec+
    • ASR transcription via Canary model
  2. Human Validation

    • 3 annotators per sample
    • Filtered non-English/multi-speaker clips
    • NV/emotion validation and refinement
  3. Fusion Algorithm

    • Majority voting for final transcriptions
    • Pyalign-based sequence alignment
    • Multi-annotator hypothesis merging (a minimal voting sketch follows this list)
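
To make the voting step concrete, here is a minimal sketch of majority-vote fusion. It assumes the three annotator hypotheses are already token-aligned; the actual pipeline produces that alignment with pyalign before voting:

from collections import Counter

def fuse_transcripts(hypotheses):
    """Majority-vote fusion over token positions.

    Assumes the hypotheses are aligned one-to-one per token;
    the real pipeline builds this alignment with pyalign first.
    """
    fused = []
    for tokens in zip(*(h.split() for h in hypotheses)):
        token, _ = Counter(tokens).most_common(1)[0]
        fused.append(token)
    return " ".join(fused)

# Two of three annotators kept the breath symbol, so the vote keeps it
print(fuse_transcripts([
    "So, Mom, 🌬️ how've you been?",
    "So, Mom, 🌬️ how've you been?",
    "So, Mom, - how've you been?",
]))
# -> "So, Mom, 🌬️ how've you been?"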

Benchmark Results πŸ“Š

Fine-tuning CosyVoice-300M on NonverbalTTS achieves parity with state-of-the-art proprietary systems:

| Metric | NVTTS | CosyVoice2 |
|--------|-------|------------|
| Speaker Similarity | 0.89 | 0.85 |
| NV Jaccard | 0.80 | 0.78 |
| Human Preference | 33.4% | 35.4% |
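
NV Jaccard measures the overlap between the nonverbal vocalizations in the reference and those realized in the synthesized speech. A minimal sketch using the standard set-based Jaccard index; the paper's exact formulation (e.g. how NV tags are extracted from transcriptions) may differ:

def nv_jaccard(ref_nvs, hyp_nvs):
    """Jaccard similarity between two collections of NV tags.

    Standard set Jaccard; the paper's exact metric may differ.
    """
    ref, hyp = set(ref_nvs), set(hyp_nvs)
    if not ref and not hyp:
        return 1.0  # neither side has NVs: treat as a perfect match
    return len(ref & hyp) / len(ref | hyp)

print(nv_jaccard({"laughter", "breath"}, {"breath"}))  # 0.5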

Use Cases πŸ’‘

  • Training expressive TTS models
  • Zero-shot NV synthesis
  • Emotion-aware speech generation
  • Prosody modeling research

License πŸ“œ

  • Annotations: CC BY-NC-SA 4.0
  • Audio: Adheres to original source licenses (VoxCeleb, Expresso)

Citation πŸ“

@dataset{nonverbaltts2025,
  author    = {Borisov, Maksim and Spirin, Egor and Dyatlova, Darya},
  title     = {NonverbalTTS: A Public English Corpus of Text-Aligned Nonverbal Vocalizations with Emotion Annotations for Text-to-Speech},
  month     = apr,
  year      = {2025},
  publisher = {Zenodo},
  version   = {1.0},
  doi       = {10.5281/zenodo.15274617},
  url       = {https://zenodo.org/records/15274617}
}