
This dataset comprises 166,000 hours of multilingual speech spanning 75 languages, segmented into 30-second long-form audio clips. The data originates from the YODAS2 dataset, which is built on large-scale web-crawled content.

Due to the nature of web-sourced data, the original YODAS2 dataset may include inaccurate language labels and misaligned audio-text pairs, and our preliminary experiments indicate that such noise can degrade downstream ASR performance. To address this, we developed a scalable data-cleaning pipeline using publicly available toolkits, resulting in a curated subset of the original dataset. This cleaned dataset forms a core part of the training data for our OWSM v4 models, which, when combined with existing OWSM data, significantly outperform previous versions on multilingual ASR benchmarks.
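As a quick orientation, the snippet below shows one way to stream a few examples with the Hugging Face `datasets` library. The repo id is a placeholder (substitute this dataset's actual path), and the printed field names depend on the actual schema; this is a minimal sketch, not an official loading recipe.

```python
# Minimal sketch: stream a few examples without downloading the full corpus.
from datasets import load_dataset

# Placeholder repo id: substitute the actual path of this dataset.
ds = load_dataset("espnet/yodas-owsmv4-cleaned", split="train", streaming=True)

# Inspect a handful of examples; field names depend on the actual schema.
for example in ds.take(3):
    print(example.keys())
```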
## YODAS Data Cleaning
Please refer to our paper for more details: https://arxiv.org/abs/2506.00338
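Purely as an illustration of the two kinds of noise called out above (inaccurate language labels and misaligned audio-text pairs), here is a hedged sketch of segment-level filtering. The model interfaces, scores, and thresholds are assumptions chosen for demonstration; the actual pipeline is the one described in the paper.

```python
# Illustrative sketch only: the real pipeline is described in the paper
# (https://arxiv.org/abs/2506.00338). The models, scores, and thresholds
# below are assumptions for demonstration, not the published recipe.

def keep_segment(audio, text, claimed_lang, lid_model, align_scorer,
                 lid_threshold=0.8, align_threshold=-5.0):
    """Return True if an audio-text segment passes both cleaning checks."""
    # Check 1: the language predicted from the audio must match the
    # claimed label with sufficient confidence.
    pred_lang, confidence = lid_model(audio)
    if pred_lang != claimed_lang or confidence < lid_threshold:
        return False

    # Check 2: the transcript must align well with the audio, e.g. an
    # average per-token CTC alignment log-probability above a threshold.
    return align_scorer(audio, text) >= align_threshold
```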
## OWSM v4 series
### Encoder-decoder OWSM
| Name | Size | Hugging Face Repo |
|---|---|---|
| OWSM v4 base | 102M | https://huggingface.co/espnet/owsm_v4_base_102M |
| OWSM v4 small | 370M | https://huggingface.co/espnet/owsm_v4_small_370M |
| OWSM v4 medium | 1.02B | https://huggingface.co/espnet/owsm_v4_medium_1B |
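The encoder-decoder models can be run through ESPnet's `Speech2Text` interface. The sketch below follows the usage pattern published for earlier OWSM releases; the decoding arguments (e.g., `lang_sym`, `task_sym`) are assumed to carry over to v4, so check each model card for the exact options.

```python
# Sketch based on the usage pattern of earlier OWSM releases; see the
# model cards for the exact decoding options supported by v4.
import soundfile as sf
from espnet2.bin.s2t_inference import Speech2Text

s2t = Speech2Text.from_pretrained(
    "espnet/owsm_v4_base_102M",
    device="cuda",
    beam_size=5,
    lang_sym="<eng>",  # source language token
    task_sym="<asr>",  # task token: ASR (vs. speech translation)
)

speech, rate = sf.read("example.wav")  # 16 kHz mono audio expected
text, *_ = s2t(speech)[0]
print(text)
```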
### CTC-based OWSM
| Name | Size | Hugging Face Repo |
|---|---|---|
| OWSM-CTC v4 medium | 1.01B | https://huggingface.co/espnet/owsm_ctc_v4_1B |
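OWSM-CTC uses a different inference entry point. The sketch below mirrors the greedy-search usage shown on earlier OWSM-CTC model cards, including the long-form `batch_decode` helper; the argument values here are illustrative defaults, not tuned recommendations.

```python
# Sketch mirroring earlier OWSM-CTC model cards; argument values are
# illustrative defaults, not tuned recommendations.
from espnet2.bin.s2t_ctc_inference import Speech2TextGreedySearch

s2t = Speech2TextGreedySearch.from_pretrained(
    "espnet/owsm_ctc_v4_1B",
    device="cuda",
    lang_sym="<eng>",
    task_sym="<asr>",
)

# batch_decode handles long-form audio by decoding fixed-size windows
# with overlapping context.
text = s2t.batch_decode("example.wav", batch_size=16, context_len_in_secs=4)
print(text)
```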
## Citation
```bibtex
@inproceedings{owsm-v4,
  title={{OWSM} v4: Improving Open Whisper-Style Speech Models via Data Scaling and Cleaning},
  author={Yifan Peng and Shakeel Muhammad and Yui Sudo and William Chen and Jinchuan Tian and Chyi-Jiunn Lin and Shinji Watanabe},
  booktitle={Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH) (accepted)},
  year={2025},
}
```