fairseq S^2: A Scalable and Integrable Speech Synthesis Toolkit
Abstract
This paper presents fairseq S^2, a fairseq extension for speech synthesis. We implement a number of autoregressive (AR) and non-AR text-to-speech models, along with their multi-speaker variants. To enable training speech synthesis models on less curated data, we build a set of preprocessing tools and demonstrate their importance empirically. To facilitate faster iteration in development and analysis, we include a suite of automatic metrics. Beyond the features added specifically for this extension, fairseq S^2 also benefits from the scalability offered by fairseq and can be easily integrated with other state-of-the-art systems provided in this framework. The code, documentation, and pre-trained models are available at https://github.com/pytorch/fairseq/tree/master/examples/speech_synthesis.