On the Effects of Heterogeneous Data Sources on Speech-to-Text Foundation Models
Abstract
The Open Whisper-style Speech Model (OWSM) series was introduced to achieve full transparency in building advanced speech-to-text (S2T) foundation models. To this end, OWSM models are trained on 25 public speech datasets, which are heterogeneous in multiple ways. In this study, we advance the OWSM series by introducing OWSM v3.2, which improves on prior models by investigating and addressing the impacts of this data heterogeneity. Our study begins with a detailed analysis of each dataset, from which we derive two key strategies: data filtering with a proxy task to enhance data quality, and the incorporation of punctuation and true-casing using an open large language model (LLM). With all other configurations kept the same, OWSM v3.2 improves performance over the OWSM v3.1 baseline while using 15% less training data.
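To make the first strategy concrete, below is a minimal sketch of proxy-task data filtering: a small "proxy" ASR model transcribes the training corpus, and utterances whose hypothesis diverges too far from the reference transcript are dropped as likely mislabeled or low-quality. This is an illustration under assumptions, not the paper's exact recipe; the per-utterance WER criterion, the 0.5 threshold, and the use of the `jiwer` library are all choices made for this example.

```python
# Hedged sketch of proxy-task data filtering (illustrative, not the paper's
# exact method): keep only utterances whose proxy-model hypothesis stays
# close to the reference transcript.
import jiwer  # common Python library for word error rate (WER)


def filter_by_proxy_wer(utterances, max_wer=0.5):
    """Return the IDs of utterances that pass the proxy-task filter.

    `utterances` is an iterable of (utt_id, reference_text, proxy_hypothesis)
    triples; the hypotheses would come from a small proxy ASR model trained
    on the same corpus. `max_wer` is an assumed threshold for this sketch.
    """
    kept = []
    for utt_id, reference, hypothesis in utterances:
        # A high WER suggests the reference transcript is mismatched or noisy.
        if jiwer.wer(reference, hypothesis) <= max_wer:
            kept.append(utt_id)
    return kept


# Example: the second transcript disagrees badly with the proxy model's
# output, so it is filtered out.
data = [
    ("utt1", "hello world", "hello world"),
    ("utt2", "completely different text", "nothing alike here at all"),
]
print(filter_by_proxy_wer(data, max_wer=0.5))  # -> ['utt1']
```

In practice such a filter can also be applied at the dataset level (aggregating scores per corpus rather than per utterance); the paper's actual filtering criteria are detailed in the full text.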