diff --git "a/MdFRT4oBgHgl3EQf2zi9/content/tmp_files/load_file.txt" "b/MdFRT4oBgHgl3EQf2zi9/content/tmp_files/load_file.txt" new file mode 100644--- /dev/null +++ "b/MdFRT4oBgHgl3EQf2zi9/content/tmp_files/load_file.txt" @@ -0,0 +1,1319 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf,len=1318 +page_content='JOURNAL OF LATEX CLASS FILES, VOL.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' 14, NO.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' 8, AUGUST 2021 1 InstructTTS: Modelling Expressive TTS in Discrete Latent Space with Natural Language Style Prompt Dongchao Yang*, Songxiang Liu*, Rongjie Huang, Guangzhi Lei, Chao Weng, Helen Meng, Fellow, IEEE and Dong Yu, Fellow, IEEE Abstract—Expressive text-to-speech (TTS) aims to synthesize different speaking style speech according to human’s demands.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Nowadays, there are two common ways to control speaking styles: (1) Pre-defining a group of speaking style and using categorical index to denote different speaking style.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' However, there are limitations in the diversity of expressiveness, as these models can only generate the pre-defined styles.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' (2) Using reference speech as style input, which results in a problem that the extracted style information is not intuitive or interpretable.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' In this study, we attempt to use natural language as style prompt to control the styles in the synthetic speech, e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=', “Sigh tone in full of sad mood with some helpless feeling”.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Considering that there is no existing TTS corpus which is proper to benchmark this novel task, we first construct a speech corpus, whose speech samples are annotated with not only content transcriptions but also style descriptions in natural language.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Then we propose an expressive TTS model, named as InstructTTS, which is novel in the sense of following aspects: (1) We fully take the advantage of self-supervised learning and cross-modal metric learning, and propose a novel three-stage training procedure to obtain a robust sentence embedding model, which can effectively capture semantic information from the style prompts and control the speaking style in the generated speech.' 
(2) We propose to model acoustic features in discrete latent space and train a novel discrete diffusion probabilistic model to generate vector-quantized (VQ) acoustic tokens rather than the commonly used mel-spectrogram. (3) We jointly apply mutual information (MI) estimation and minimization during acoustic model training to minimize the style-speaker and style-content MI, avoiding possible content and speaker information leakage from the style prompt. Extensive objective and subjective evaluations have been conducted to verify the effectiveness and expressiveness of InstructTTS. Experimental results show that InstructTTS can synthesize high-fidelity and natural speech with the style prompt controlling the speaking style. Synthesized samples are available at http://dongchaoyang.top/InstructTTS/.

Index Terms—Text to speech, prompt-based learning, diffusion model, metric learning

Dongchao Yang and Helen Meng are with the Chinese University of Hong Kong. This work was done when Dongchao Yang was an intern at Tencent AI Lab. * denotes equal contribution, with order determined alphabetically. Songxiang Liu, Guangzhi Lei, Chao Weng and Dong Yu are with Tencent AI Lab. Rongjie Huang is with Zhejiang University, China. Songxiang Liu is the corresponding author.

I. INTRODUCTION

Text-to-speech (TTS) aims to generate human-like speech from input text and attracts broad interest in the audio and speech processing community. Nowadays, state-of-the-art TTS systems [1]–[3] are able to produce natural and high-quality speech.
However, there still exists a large gap between TTS-synthesized speech and human speech in terms of expressiveness, which limits the broad application of current speech synthesis systems. Many researchers now focus on a more challenging task, i.e., expressive TTS, which aims to model and control the speaking style (e.g., emotion, speaking rate and so on) of the generated speech according to human demands. There are generally two types of methods in the literature to learn speaking-style information: one uses auxiliary categorical style labels as a condition of the framework [4], [5], while the other imitates the speaking style of a reference speech sample [6]–[9]. However, categorical style labels limit the diversity of expressiveness, as such models can only generate the few pre-defined styles seen in the training set. Although TTS models that use a reference utterance for style modelling can be trained in an unsupervised manner and generalize to out-of-domain speaking styles, the style information in the reference speech is neither intuitive nor interpretable. Moreover, it is hard to choose a reference speech sample that precisely matches a user's demand.

For the first time, we study the modelling of expressive TTS with a style prompt in natural language, where we face the following research problems: (1) how to train a language model that can capture semantic information from the natural language prompt and control the speaking style in the generated speech; (2) how to design an acoustic model that can effectively handle the challenging one-to-many learning problem of expressive TTS. In this paper, we address these two challenges.
The main contributions of this study are summarized as follows: (1) For the first time, we study the modelling of expressive TTS with natural language prompts, which brings us a step closer to user-controllable expressive TTS. (2) We introduce a novel three-stage training strategy to obtain a robust sentence embedding model, which can effectively capture semantic information from the style prompts. (3) Inspired by the success of large-scale language models, e.g., GPT-3 and ChatGPT [10], we propose to model acoustic features in discrete latent space and cast speech synthesis as a language modelling task. Specifically, we train a novel discrete diffusion model to generate vector-quantized (VQ) acoustic features rather than to predict the commonly used mel-spectrogram. (4) We explore modelling two types of VQ acoustic features: mel-spectrogram-based VQ features and waveform-based VQ features. We show that both types of VQ features can be effectively modelled by our proposed discrete diffusion model.
We note that our waveform-based modelling method needs only one-stage training and is non-autoregressive, which distinguishes it from the concurrent works AudioLM [11], VALL-E [12] and MusicLM [13]. (5) We jointly apply mutual information (MI) estimation and minimization during acoustic model training to minimize the style-speaker and style-content MI, which avoids possible content and speaker information leakage from the style prompt.

Fig. 1. (a) The model architecture of our proposed InstructTTS, where SALN denotes the style-adaptive layer normalization adaptor [14]. (b) The details of our proposed style encoder, which extracts style features from the ground-truth (GT) mel-spectrogram (training stage) or from the style prompt (inference stage). (c) An example of the discrete diffusion decoder that generates VQ mel-spectrogram acoustic features (we name it Mel-VQ-Diffusion).

The rest of this paper is organized as follows. In Section II, we motivate our study by introducing the background and related work. In Section III, we present the details of the dataset. In Section IV, we introduce the details of our proposed methods. The experimental settings, evaluation metrics and results are presented in Sections V to VII. The study is concluded in Section VIII.
II. RELATED WORK AND BACKGROUND

This study builds on several previous works on cross-modal representation learning, vector quantization, diffusion probabilistic models and expressive TTS. We briefly introduce the related studies to set the stage for our research and to position the novelty of our contributions.

A. Cross-modal Representation Learning

Cross-modal representation learning aims to learn a common latent space for data of different modalities (e.g., text and image, text and speech). In general, two modality-specific encoders are used to extract deep feature representations, and then a variety of supervised or unsupervised strategies are devised to align the two representation spaces [15]–[17]. In our study, we expect to control the acoustic features (such as pitch, emotion and speed) with a natural language sentence. To realize this target, we turn to cross-modal representation learning. The details will be discussed later.

B. Vector Quantization

Vector quantization has been used in various fields, such as image processing [18]–[20] and speech processing [21]–[24]. VQ-VAE [18] trains an encoder to compress an image into a low-dimensional discrete latent space, and a decoder then recovers the image from a group of discrete tokens. Inspired by VQ-VAE, a series of works adopt this idea to reconstruct mel-spectrograms or linear spectrograms [25], [26]. Recently, many works have focused on reconstructing waveforms with VQ-VAE.
To compensate for the information loss during the VQ process, the residual-VQ (R-VQ) technique [24] was proposed, which uses multiple codebooks to encode the audio information: each codebook quantizes the residual left by the previous one.
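As a concrete illustration of this residual quantization idea, the following NumPy toy example is our own sketch (with arbitrary codebook sizes and dimensions), not the implementation used in [24]:

```python
import numpy as np

def nearest_code(x, codebook):
    # Index of the codebook vector closest to x (squared L2 distance).
    d = ((codebook - x) ** 2).sum(axis=1)
    return int(d.argmin())

def residual_vq(x, codebooks):
    """Quantize x with a cascade of codebooks; each stage encodes the previous residual."""
    indices, residual = [], x.copy()
    for cb in codebooks:
        idx = nearest_code(residual, cb)
        indices.append(idx)
        residual = residual - cb[idx]          # pass the remaining error to the next stage
    quantized = x - residual                   # sum of the selected code vectors
    return indices, quantized

rng = np.random.default_rng(0)
codebooks = [rng.standard_normal((1024, 128)) for _ in range(8)]   # 8 codebooks of size 1024
frame = rng.standard_normal(128)                                   # one latent frame
codes, q = residual_vq(frame, codebooks)
```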
Nowadays, the majority of TTS systems use an acoustic model (AM) to directly predict the mel-spectrogram and then use a pre-trained vocoder to recover the waveform from the predicted mel-spectrogram [2], [27], [28]. However, the mel-spectrogram is highly correlated along both the time and frequency axes in a complicated way, which makes it difficult for the AM to predict. Furthermore, the gap between the ground-truth (GT) mel-spectrogram and the one predicted by the AM degrades performance, because the vocoder is trained on GT mel-spectrograms. In this study, instead of using an AM to predict the mel-spectrogram, we predict a learnable, vector-quantized acoustic representation in a discrete latent space.

C. Expressive Text-to-speech

Expressive TTS models have been studied for decades in the TTS community. Wang et al. [6] propose to use global style tokens to control and transfer global style. Li et al. [29] adopt a multi-scale style encoder to assist expressive speech synthesis. Min et al. [14] propose Meta-StyleSpeech, which uses a meta-learning training strategy for multi-speaker TTS synthesis. SC-GlowTTS [30] proposes a speaker-conditional architecture that explores a flow-based decoder in a zero-shot scenario. Zhou et al. [31] propose a mixed-emotion speech synthesis model, which can control multiple different emotions in one synthetic speech sample. Huang et al. [32] propose a multi-level style adaptor to transfer speaking style. Yang et al. [9] propose NoreSpeech, which robustly transfers style information from noisy reference speech. Liu et al. [33] use robust style descriptors to transfer style learned from low-quality but expressive speech data to a target voice. The works most related to ours are Style-Tagging-TTS (ST-TTS) [34] and PromptTTS [35]. ST-TTS uses a style tag to guide the speaking style of the synthesized speech, where a style tag is a short phrase or word representing the style of an utterance, such as emotion, intention, or tone of voice. In this study, we instead use longer natural language sentences as style descriptions to control the styles of the synthetic speech, which is more complicated because longer natural language prompts carry richer semantic information and correspond to more complex acoustic characteristics. Our concurrent work PromptTTS [35] proposes a similar idea, using a sentence as a style prompt to control the style information in TTS systems. They define five style factors (gender, pitch, speaking speed, volume, and emotion) and assume that the prompts contain obvious style-factor words, such as "low-pitch" or "high speaking speed", which means the model can obtain style information from local-level descriptions.
Different from PromptTTS, our study places no constraint on the form of the style prompts and allows the user to describe a speaking style in any free-form natural language, resulting in a much more challenging machine learning problem. Furthermore, we focus on Mandarin Chinese TTS and construct the first Mandarin Chinese speech corpus applicable to style-prompt-controllable expressive TTS.

D. Diffusion Probabilistic Models

Diffusion generative models were first proposed in [36] and achieve strong results in image generation [37]–[40] and speech synthesis [41]–[44]. Diffusion models with discrete state spaces were first introduced by Sohl-Dickstein et al. [36], who considered a diffusion process over binary random variables. Hoogeboom et al. [45] extend the model to categorical random variables with transition matrices characterized by uniform transition probabilities. Austin et al. [46] further improve and extend discrete diffusion models by using a more structured categorical corruption process in the forward process. Many works have successfully applied discrete diffusion models to image or sound generation, e.g., D3PMs [46], VQ-Diffusion [38] and DiffSound [26]. However, none has attempted to apply the discrete diffusion model to speech synthesis. In the following, we briefly review the background knowledge of diffusion models.
1) Vanilla Diffusion Model: A diffusion model consists of a forward process and a reverse process. The forward process corrupts the original data x_0 into a noisy latent variable x_T that follows a simple stationary distribution (e.g., a Gaussian distribution), and the reverse process learns to recover the original data x_0 from x_T.

Forward process. Given the audio data x_0, the forward process corrupts the data x_0 \sim q(x_0) into a sequence of increasingly noisy latent variables x_{1:T} = x_1, x_2, \dots, x_T. Each noisy latent variable x_t has the same dimensionality as x_0. The forward process from the data x_0 to the variable x_T can be formulated as a fixed Markov chain

q(x_{1:T} | x_0) = \prod_{t=1}^{T} q(x_t | x_{t-1}).   (1)

Following [36], Gaussian noise is injected at each step, so the conditional probability distribution is modeled as q(x_t | x_{t-1}) = \mathcal{N}(x_t; \sqrt{1-\beta_t}\, x_{t-1}, \beta_t I), where \beta_t is a small positive constant. With the pre-defined noise schedule \beta_1, \beta_2, \dots, \beta_T, the overall process gradually converts the clean x_0 into a latent variable with an isotropic Gaussian distribution p(x_T) = \mathcal{N}(0, I). Due to the Markov property, the distribution q(x_t | x_0) can be conveniently derived as

q(x_t | x_0) = \mathcal{N}(x_t; \sqrt{\bar{\alpha}_t}\, x_0, (1 - \bar{\alpha}_t) I),   (2)

where \alpha_t = 1 - \beta_t and \bar{\alpha}_t = \prod_{s=1}^{t} \alpha_s.
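To make Eq. (2) concrete, the following NumPy sketch (our own illustration; the linear noise-schedule values are assumptions, not taken from the paper) draws x_t directly from x_0:

```python
import numpy as np

def linear_beta_schedule(T=1000, beta_1=1e-4, beta_T=0.02):
    # Pre-defined noise schedule beta_1, ..., beta_T (values are illustrative).
    return np.linspace(beta_1, beta_T, T)

def q_sample(x0, t, betas, rng=np.random.default_rng()):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(alpha_bar_t) x_0, (1 - alpha_bar_t) I)."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)           # alpha_bar_t = prod_{s<=t} alpha_s
    a = alpha_bar[t]                         # t is a 0-based step index here
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(a) * x0 + np.sqrt(1.0 - a) * noise

# Usage: corrupt a dummy "mel-spectrogram" at an intermediate step.
betas = linear_beta_schedule()
x0 = np.random.randn(80, 200)                # (mel bins, frames), dummy data
x_t = q_sample(x0, t=500, betas=betas)
```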
Reverse process. The reverse process converts the latent variable x_T \sim \mathcal{N}(0, I) back into x_0; the joint probability is

p_\theta(x_{0:T}) = p(x_T) \prod_{t=1}^{T} p_\theta(x_{t-1} | x_t),   (3)

where p_\theta(\cdot) is the distribution of the reverse process with learnable parameters \theta. The posterior q(x_{t-1} | x_t, x_0) can be derived with Bayes' rule. To optimize the generative model p_\theta(x_0) to fit the data distribution q(x_0), one typically optimizes a variational upper bound on the negative log-likelihood [47].

2) Discrete Diffusion Model: In a discrete diffusion model, a transition probability matrix defines how x_0 transits to x_t at each step of the forward process. Assuming x_0 \in \mathbb{Z}^N with x_0^k \in \{1, 2, \dots, P\}, we omit the superscript k in the following presentation when no confusion arises. The matrices [Q_t]_{ij} = q(x_t = i | x_{t-1} = j) \in \mathbb{R}^{P \times P} define the probabilities that x_{t-1} transits to x_t. The forward process for the whole token sequence can then be written as

q(x_t | x_{t-1}) = c^\top(x_t) Q_t c(x_{t-1}),   (4)

where c(x) denotes the one-hot column vector of x. The categorical distribution over x_t is given by the vector Q_t c(x_{t-1}). Due to the Markov property, one can marginalize out the intermediate steps and derive the probability of x_t at an arbitrary timestep directly from x_0 as

q(x_t | x_0) = c^\top(x_t) \bar{Q}_t c(x_0),  with  \bar{Q}_t = Q_t \cdots Q_1.   (5)

Besides, q(x_{t-1} | x_t, x_0) can be derived with Bayes' rule:

q(x_{t-1} | x_t, x_0) = \frac{q(x_t | x_{t-1}, x_0)\, q(x_{t-1} | x_0)}{q(x_t | x_0)} = \frac{\big(c^\top(x_t) Q_t c(x_{t-1})\big)\big(c^\top(x_{t-1}) \bar{Q}_{t-1} c(x_0)\big)}{c^\top(x_t) \bar{Q}_t c(x_0)}.   (6)
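As an illustration of the matrix algebra in Eqs. (4)–(6), the NumPy sketch below uses the uniform transition matrices of Hoogeboom et al. [45]; the specific corruption matrices used in InstructTTS may differ, so treat this only as a toy example of the formulas:

```python
import numpy as np

P = 8            # number of discrete token classes (illustrative)
T = 100          # diffusion steps

def uniform_Q(beta):
    # Q_t = (1 - beta) I + (beta / P) * ones: keep the token w.p. (1 - beta), else resample uniformly.
    return (1.0 - beta) * np.eye(P) + beta / P * np.ones((P, P))

betas = np.linspace(0.01, 0.5, T)
Q = [uniform_Q(b) for b in betas]

# Cumulative products Q_bar_t = Q_t ... Q_1 give q(x_t | x_0) in closed form (Eq. 5).
Q_bar = [Q[0]]
for t in range(1, T):
    Q_bar.append(Q[t] @ Q_bar[-1])

def one_hot(x):
    c = np.zeros(P); c[x] = 1.0
    return c

def q_xt_given_x0(x0, t):
    # Categorical distribution over x_t: the column of Q_bar_t selected by x_0 (Eq. 5).
    return Q_bar[t] @ one_hot(x0)

def q_posterior(xt, x0, t):
    # q(x_{t-1} | x_t, x_0) from Eq. (6), computed entrywise over all values of x_{t-1}.
    num = Q[t][xt, :] * (Q_bar[t - 1] @ one_hot(x0))
    return num / (one_hot(xt) @ Q_bar[t] @ one_hot(x0))

probs = q_posterior(xt=3, x0=5, t=10)
assert np.isclose(probs.sum(), 1.0)
```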
TABLE I
EXAMPLE STYLE PROMPTS FROM DIFFERENT CORPORA. SINCE THE NLSPEECH CORPUS IS IN MANDARIN CHINESE, WE ADDITIONALLY PROVIDE THE TRANSLATED VERSION. FSNR0 IS IN KOREAN; WE PROVIDE THE TRANSLATED PROMPTS IN THE TABLE.

FSNR0 [34] | PromptSpeech [35] | NLSpeech (translated)
Seem sad | A distressful male sound appeared in low volume | The tone of the shocked question revealed the sad feelings
Bitter | He sadly turns down his volume, pitch and speed | It was a fiery expression of disapproval and condemnation, with a palpable sense of irony, a tinge of disgust and disdain
Pleased | The ladylike person made an increment of the volume and pitch | There was a sense of joy in the words, an expression of joy in the heart, mixed with pride
In a hurry | Men, low tone, said loudly and quickly | His voice grew more agitated, and his tone revealed an urge and urgency

III. DATASET

We use an internally collected Mandarin Chinese speech corpus named NLSpeech for experimental evaluation, since there is no openly available Mandarin Chinese speech corpus with rich style prompts. The corpus contains 44 hours of speech data (32k utterances in total) from 7 speakers (5 female and 2 male). The audio waveforms have a sampling rate of 24 kHz.
We randomly reserve 0.1 hours of data as the validation set and another 0.1 hours as the test set, and use the remaining data as the training set. Each utterance has 5 style prompts labeled by different annotators. To obtain high-quality annotations, we ask the annotators to follow a three-step annotation strategy:
Step 1: The annotators first use one word to describe the overall perceived emotion of an utterance;
Step 2: The annotators then listen to the utterance carefully and describe the emotion level of the utterance with one word;
Step 3: The annotators write a complete sentence in natural language to describe the style of the utterance.
Note that we ask the annotators to disregard the speech content, which may otherwise influence the perception of emotion and style.

Table I shows example style prompts in our dataset, and we also compare NLSpeech with other existing related corpora, including the FSNR0 corpus [34] and the PromptSpeech corpus [35]. We note that the style prompts in NLSpeech are free-form natural language sentences that are closer to those used in daily life, while those in the FSNR0 and PromptSpeech corpora are constrained to some degree. Meanwhile, this also poses a challenging TTS problem, since natural language sentences can express virtually any concept. A compact and informative representation of the style prompt is therefore paramount to achieving effective style control during speech synthesis.
IV. PROPOSED METHOD

The overall architecture of the proposed InstructTTS framework is shown in Figure 1. It consists of five parts: a content encoder, a style encoder, a speaker encoder, a style-adaptive layer normalization (SALN) adaptor and a discrete diffusion decoder. The detailed design of each part is introduced in this section.

Fig. 2. The model architecture of cross-modal representation learning.

A. Content Encoder

The content encoder extracts the content representation from the content prompts. We follow the architecture of FastSpeech 2, which consists of 4 feed-forward Transformer (FFT) blocks. The hidden size, number of attention heads, kernel size and filter size of the one-dimensional convolution in the FFT block are set to 256, 2, 9 and 1024, respectively. After that, a variance adaptor is used to predict information such as duration and pitch that is closely related to the style of the synthetic speech.
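For reference, a compact PyTorch sketch of one FFT block with the hyper-parameters quoted above (hidden size 256, 2 attention heads, convolution kernel size 9, filter size 1024) is given below; details such as the second convolution's kernel size, dropout and normalization placement are our own assumptions rather than the paper's exact configuration:

```python
import torch
import torch.nn as nn

class FFTBlock(nn.Module):
    """Feed-forward Transformer block: self-attention + 1-D convolutional feed-forward."""
    def __init__(self, hidden=256, heads=2, kernel=9, filter_size=1024, dropout=0.1):
        super().__init__()
        self.attn = nn.MultiheadAttention(hidden, heads, dropout=dropout, batch_first=True)
        self.norm1 = nn.LayerNorm(hidden)
        self.conv = nn.Sequential(
            nn.Conv1d(hidden, filter_size, kernel_size=kernel, padding=kernel // 2),
            nn.ReLU(),
            nn.Conv1d(filter_size, hidden, kernel_size=1),  # second kernel size is an assumption
        )
        self.norm2 = nn.LayerNorm(hidden)
        self.drop = nn.Dropout(dropout)

    def forward(self, x):                       # x: (batch, time, hidden)
        a, _ = self.attn(x, x, x)
        x = self.norm1(x + self.drop(a))
        c = self.conv(x.transpose(1, 2)).transpose(1, 2)
        return self.norm2(x + self.drop(c))

# The content encoder stacks 4 such blocks.
encoder = nn.Sequential(*[FFTBlock() for _ in range(4)])
phoneme_hidden = torch.randn(2, 120, 256)       # (batch, phoneme length, hidden)
out = encoder(phoneme_hidden)
```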
B. Style Encoder

The style encoder module, as shown in Fig. 1 (b), includes three parts: a pre-trained robust style prompt embedding model, an adaptor layer that maps the style embedding into a new latent space, and an audio encoder that encodes style information from the target mel-spectrogram. Note that the pre-trained prompt embedding model is kept fixed when we train our TTS system. In the training stage, one of the training targets is to minimize the distance between the style prompt embedding and the audio embedding. We note that the audio encoder may also encode speaker and content information. To make sure the audio encoder only encodes style-related information, we jointly minimize the style-speaker mutual information (i.e., I(z_e; z_{sid})) and the style-content mutual information (i.e., I(z_e; c)) during training. Mutual information (MI) is a key measure of the correlation between random variables. However, the MI of high-dimensional random variables with unknown distributions is intractable to compute. Previous works estimate either an MI lower bound or an MI upper bound: MINE [48] and InfoNCE [18] compute lower bounds, while CLUB [49] computes an upper bound. In this work, we use the CLUB method to minimize the style-speaker and style-content MI to avoid content and speaker information leakage from the mel-spectrogram during training.
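A minimal PyTorch sketch of a CLUB-style MI upper-bound estimator is shown below; it is our own simplification of [49], with the variational network q(y|x) parameterized as a diagonal Gaussian and all dimensions chosen for illustration:

```python
import torch
import torch.nn as nn

class CLUBEstimator(nn.Module):
    """Variational upper bound on I(x; y): E_p(x,y)[log q(y|x)] - E_p(x)p(y)[log q(y|x)]."""
    def __init__(self, x_dim=256, y_dim=256, hidden=512):
        super().__init__()
        self.mu = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(), nn.Linear(hidden, y_dim))
        self.logvar = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(), nn.Linear(hidden, y_dim))

    def log_likelihood(self, x, y):
        # Used to fit q(y|x) before the bound is evaluated (up to additive constants).
        mu, logvar = self.mu(x), self.logvar(x)
        return (-((y - mu) ** 2) / logvar.exp() - logvar).sum(dim=1).mean()

    def mi_upper_bound(self, x, y):
        mu, logvar = self.mu(x), self.logvar(x)
        positive = -((y - mu) ** 2) / logvar.exp()                 # log q(y_i | x_i), up to constants
        negative = -((y.unsqueeze(0) - mu.unsqueeze(1)) ** 2) / logvar.exp().unsqueeze(1)
        return (positive.sum(dim=1) - negative.sum(dim=2).mean(dim=1)).mean()

# Outline: maximize log_likelihood w.r.t. the estimator to fit q(y|x),
# then add mi_upper_bound(style, speaker) to the TTS loss to discourage leakage.
style_emb, speaker_emb = torch.randn(16, 256), torch.randn(16, 256)
club = CLUBEstimator()
mi_loss = club.mi_upper_bound(style_emb, speaker_emb)
```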
C. Style Prompt Embedding Model

To extract the style representation from the style prompts, we adopt a RoBERTa model [50] as the prompt embedding model. Assume we have a style prompt sequence S = [S_1, S_2, \dots, S_M], where M denotes the sequence length. We add a [CLS] token to the start of the prompt sequence and then feed the sequence into the prompt embedding model. After that, we take the representation of the [CLS] token as the style representation of the sentence.
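A minimal sketch of this [CLS]-based prompt embedding extraction with the HuggingFace transformers API is given below; the checkpoint name is only a placeholder for a Chinese RoBERTa-style encoder, not the model actually trained by the authors:

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Placeholder checkpoint: any Chinese RoBERTa-style encoder would play this role.
name = "hfl/chinese-roberta-wwm-ext"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name).eval()

def style_prompt_embedding(prompt: str) -> torch.Tensor:
    # The tokenizer prepends [CLS]; we take its final hidden state as the sentence embedding.
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state      # (1, seq_len, hidden)
    return hidden[:, 0, :]                              # representation of the [CLS] token

emb = style_prompt_embedding("The tone of the shocked question revealed sad feelings.")
```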
In order to stably control the style of the TTS output through natural language descriptions, the quality of the prompt embedding is of great importance and should satisfy two conditions: (1) the learned style prompt space must capture the important semantic information; (2) the distribution of the prompt embedding space should be relatively uniform and smooth, so that the model can generalize to style descriptions not seen during training. To realize this target, we propose a novel three-stage training and fine-tuning strategy, detailed as follows.

1) Training a base language model for Chinese: Given that most open-source pre-trained language models are trained on English data, we first train a RoBERTa model on Chinese data.

2) Fine-tuning the pre-trained language model on labeled data: We use a small amount of Chinese natural language inference (NLI) data to fine-tune the model parameters in a supervised way to achieve a better semantic representation. Specifically, we follow the training strategy proposed in SimCSE [51], which uses an InfoNCE loss [52] objective to fine-tune our pre-trained RoBERTa model.

3) Cross-modal representation learning between style prompts and speech: We expect the prompt embedding vector from the style prompt sentence and the style representation vector from the speech to be mapped into a shared semantic space, so that we can control the style of the TTS output through the style description at inference time. Thus, we propose a cross-modal representation learning process based on metric learning, as Fig. 2 shows. Specifically, we build an audio-text retrieval task based on the style-prompt and audio pairs in our NLSpeech dataset. For any style prompt, we randomly choose N − 1 negative audio samples and combine them with one positive audio sample to build a training batch. Similarly, for one audio sample, we can also build a training batch that includes one positive style prompt and N − 1 negative style prompts. Inspired by previous audio-text retrieval works [16], [53], we adopt the contrastive ranking loss [54] and the InfoNCE loss [52] as the training objective, respectively. Experimental results show that the InfoNCE loss brings better retrieval performance; the details are presented in the experiments section.
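The following PyTorch sketch illustrates a symmetric InfoNCE objective for the prompt-audio retrieval task, where the other N − 1 items in a batch of N matched pairs act as negatives, matching the batch construction described above; the temperature value and embedding sizes are assumptions for illustration:

```python
import torch
import torch.nn.functional as F

def info_nce(prompt_emb, audio_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of N matched (style prompt, audio) pairs."""
    p = F.normalize(prompt_emb, dim=-1)
    a = F.normalize(audio_emb, dim=-1)
    logits = p @ a.t() / temperature                   # (N, N) cosine similarities
    targets = torch.arange(p.size(0), device=p.device)
    loss_p2a = F.cross_entropy(logits, targets)        # prompt retrieves its audio
    loss_a2p = F.cross_entropy(logits.t(), targets)    # audio retrieves its prompt
    return 0.5 * (loss_p2a + loss_a2p)

prompt_emb = torch.randn(32, 256)   # from the prompt encoder
audio_emb = torch.randn(32, 256)    # from the audio encoder
loss = info_nce(prompt_emb, audio_emb)
```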
D. Modelling Mel-spectrograms in Discrete Latent Space

In this part, we first introduce our hypothesis that modelling mel-spectrograms in a discrete latent space is a suitable way to realize expressive TTS. Then we introduce how to utilize a VQ-VAE to obtain intermediate representations that help model the mel-spectrogram. Lastly, we introduce our proposed non-autoregressive mel-spectrogram token generation model, which is based on discrete diffusion models.

Fig. 3. The overall architecture of the VQ-VAE and neural audio codec models.

Most text-to-speech (TTS) methods [2], [42], [55] directly learn the mapping from text to mel-spectrogram in continuous space. They then use a pre-trained vocoder to decode the predicted mel-spectrogram into a waveform. However, frequency bins in a mel-spectrogram are highly correlated along both the time and frequency axes in a complicated way, especially when the speech sample conveys highly expressive emotions and speaking styles, leading to a challenging modeling problem. Furthermore, the gap between the ground-truth mel-spectrogram and the predicted one also influences the synthesis performance [56]. In this study, we propose to model the mel-spectrogram in a discrete latent space, while still using a HiFi-GAN vocoder [56] to recover the waveform from the mel-spectrogram. Specifically, we first pre-train a VQ-VAE on a large-scale speech dataset, so that the pre-trained Mel-VQ-VAE encodes the linguistic, pitch, energy and emotion information into the latent codes. Then we regard the vector-quantized latent codes as the prediction targets and hence model the mel-spectrogram in the discrete latent space. A similar idea of modeling speech in a discrete latent space is applied in VQ-TTS [57], which utilizes self-supervised VQ acoustic features (vq-wav2vec [21]) rather than the traditional mel-spectrogram as the intermediate prediction target.
VQ-TTS builds an autoregressive classification model over prosody labels and VQ acoustic features. Different from VQ-TTS, we still use the mel-spectrogram as the intermediate acoustic feature and use a VQ-VAE model to transform the mel-spectrogram into a discrete latent space, reducing the time-frequency correlations. As Figure 3 shows, a mel-spectrogram can be represented by a group of mel-spectrogram tokens. Thus, the mel-spectrogram synthesis problem turns into predicting a group of discrete tokens, which can be seen as a language modeling problem. In the following, we first introduce the details of the VQ-VAE and then our proposed Mel-VQ-Diffusion decoder.

1) VQ-VAE: A VQ-VAE is trained to approximate its input using a compressed intermediate representation retrieved from a discrete codebook. It consists of three parts: an encoder Evq, a decoder G and a codebook Z = {z_k}_{k=1}^{K} ∈ R^{K×n_z} containing a finite number of embedding vectors, where K denotes the size of the codebook and n_z is the dimension of the codes. Given a mel-spectrogram s ∈ R^{F×L}, the input s is first encoded into a lower-dimensional representation ẑ = Evq(s) ∈ R^{F′×L′×n_z}, where F′ × L′ represents the reduced frequency and time dimensions. Then a spatial-wise quantizer Q(·) is used to map each spatial feature ẑ_{ij} to its closest codebook entry z_k, obtaining a spatial collection of spectrogram tokens z_q:

\[ z_q = Q(\hat{z}) := \left[ \arg\min_{z_k \in \mathcal{Z}} \lVert \hat{z}_{ij} - z_k \rVert_2^2 \;\; \text{for all } (i,j) \text{ in } (F', L') \right] \tag{7} \]

Lastly, the mel-spectrogram can be faithfully reconstructed via the decoder, i.e., ŝ = G(z_q).
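The spatial-wise quantization of Eq. (7) amounts to a nearest-neighbour search against the codebook; a small PyTorch sketch (with the paper's K = 512 and n_z = 256, the other shapes being toy values) is:

```python
import torch

def vector_quantize(z_hat, codebook):
    """z_hat: (F', L', n_z) encoder output; codebook: (K, n_z) embedding vectors.
    Returns the quantized vectors z_q and the integer token indices."""
    flat = z_hat.reshape(-1, z_hat.size(-1))                  # (F'*L', n_z)
    # squared Euclidean distance to every codebook entry
    dist = (flat.pow(2).sum(1, keepdim=True)
            - 2 * flat @ codebook.t()
            + codebook.pow(2).sum(1))                          # (F'*L', K)
    tokens = dist.argmin(dim=1)                                # nearest entry per position
    z_q = codebook[tokens].reshape(z_hat.shape)                # look up and reshape back
    return z_q, tokens.reshape(z_hat.shape[:-1])

codebook = torch.randn(512, 256)          # K = 512, n_z = 256 (the paper's settings)
z_hat = torch.randn(4, 100, 256)          # toy (F', L', n_z)
z_q, tokens = vector_quantize(z_hat, codebook)
print(z_q.shape, tokens.shape)            # (4, 100, 256) (4, 100)
```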
In this study, we follow VQGAN [20], which adds an adversarial loss [58] to improve the reconstruction performance.

2) Mel-VQ-Diffusion decoder: With the help of the pre-trained Mel-VQ-VAE, we transfer the problem of mel-spectrogram prediction into that of predicting a group of quantized tokens. To generate high-quality mel-spectrogram tokens while maintaining fast inference speed, we propose a Mel-VQ-Diffusion decoder. In the following, we first introduce the basic idea of Mel-VQ-Diffusion, then we summarize the training target. Lastly, we introduce classifier-free guidance to enhance the connection between the conditional information and the training target.

Consider paired training data (x0, y), where y denotes the combination of phone features, style features and speaker features, and x0 denotes the ground-truth mel-spectrogram tokens. We first build a diffusion process that corrupts the distribution p(x0) into a controllable stationary distribution p(xT). Then we build a Transformer-based neural network [59] to learn to recover p(x0) conditioned on y. Inspired by previous works [26], [60], we utilize a mask-and-uniform transition matrix to guide the diffusion process. The transition matrix Qt ∈ R^{(K+1)×(K+1)} is defined as
\[ Q_t = \begin{bmatrix} \alpha_t + \beta_t & \beta_t & \beta_t & \cdots & 0 \\ \beta_t & \alpha_t + \beta_t & \beta_t & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \gamma_t & \gamma_t & \gamma_t & \cdots & 1 \end{bmatrix}. \tag{8} \]

This transition matrix means that each token has a probability of γt of transitioning to the [MASK] token, a probability of Kβt of being resampled uniformly over all K categories, and a probability of αt = 1 − Kβt − γt of staying the same. Based on the transition matrix, we can derive the stationary distribution p(xT) as

\[ p(x_T) = [\bar{\beta}_T, \bar{\beta}_T, \cdots, \bar{\beta}_T, \bar{\gamma}_T], \tag{9} \]

where \(\bar{\alpha}_T = \prod_{t=1}^{T} \alpha_t\), \(\bar{\gamma}_T = 1 - \prod_{t=1}^{T} (1 - \gamma_t)\) and \(\bar{\beta}_T = (1 - \bar{\alpha}_T - \bar{\gamma}_T)/K\). We can compute q(xt|x0) in closed form according to the following formula:

\[ \overline{Q}_t\, c(x_0) = \bar{\alpha}_t\, c(x_0) + (\bar{\gamma}_t - \bar{\beta}_t)\, c(K+1) + \bar{\beta}_t. \tag{10} \]

Decoder training target: We train a network pθ(xt−1|xt, y) to estimate the posterior transition distribution q(xt−1|xt, x0). The network is trained to minimize the variational lower bound (VLB):

\[ \mathcal{L}_{\mathrm{diff}} = \sum_{t=1}^{T-1} \Big[ D_{\mathrm{KL}}\big(q(x_{t-1}\mid x_t, x_0)\,\big\|\, p_\theta(x_{t-1}\mid x_t, y)\big) \Big] + D_{\mathrm{KL}}\big(q(x_T\mid x_0)\,\big\|\, p(x_T)\big). \tag{11} \]
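For intuition, sampling x_t from q(x_t|x_0) under the mask-and-uniform corruption of Eqs. (8)-(10) can be sketched as below; the cumulative schedule values passed in are illustrative, not the schedule used in the paper.

```python
import torch

def q_sample(x0, alpha_bar_t, gamma_bar_t, K):
    """x0: LongTensor of token indices in [0, K). Each token is kept with probability
    alpha_bar_t, replaced by the [MASK] token (index K) with probability gamma_bar_t,
    and otherwise resampled uniformly over the K codes."""
    u = torch.rand_like(x0, dtype=torch.float)
    keep = u < alpha_bar_t
    mask = (u >= alpha_bar_t) & (u < alpha_bar_t + gamma_bar_t)
    uniform = torch.randint_like(x0, K)
    x_t = torch.where(keep, x0, uniform)                   # resample uniformly by default
    x_t = torch.where(mask, torch.full_like(x0, K), x_t)   # K is the [MASK] index
    return x_t

x0 = torch.randint(0, 512, (1, 80))                        # toy mel-spectrogram tokens, K = 512
x_t = q_sample(x0, alpha_bar_t=0.6, gamma_bar_t=0.3, K=512)
```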
Enhancing the connection between x0 and y: Based on the previous discussion, the conditional information y is injected into the network to help optimize pθ(xt−1|xt, y). However, in the last few steps, when xt already includes enough information, the network may ignore the conditional information y during training. To solve this problem, we introduce classifier-free guidance [61], [62] to enhance the connection between x0 and y. Specifically, instead of only optimizing p(x|y), we expect to optimize the following target function:

\[ \log p(x\mid y) + \lambda \log p(y\mid x), \tag{12} \]

where λ is a hyper-parameter that controls the degree of the posterior constraint. Using Bayes' theorem, Formula (12) can be rewritten as:

\[ \arg\max_{x} \big[ \log p(x\mid y) + \lambda \log p(y\mid x) \big] = \arg\max_{x} \big[ (\lambda + 1) \log p(x\mid y) - \lambda \log p(x) \big] = \arg\max_{x} \big[ \log p(x) + (\lambda + 1)\big(\log p(x\mid y) - \log p(x)\big) \big]. \tag{13} \]

To predict the unconditional mel-spectrogram tokens, we follow [62] and use a learnable null vector n to represent the unconditional information y. In the training stage, we use the null vector n with a probability of 10%. In the inference stage, we first generate the conditional mel-spectrogram token logits pθ(xt−1|xt, y), then predict the unconditional mel-spectrogram token logits pθ(xt−1|xt, n). Based on Formula (13), the next-step sampling probability pθ(xt−1|xt, y) can be rewritten as

\[ p_\theta(x_{t-1}\mid x_t, n) + (\lambda + 1)\big( p_\theta(x_{t-1}\mid x_t, y) - p_\theta(x_{t-1}\mid x_t, n) \big). \tag{14} \]
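A sketch of how the guided prediction in Eq. (14) can be applied at one reverse step is shown below. Here `model` is a stand-in for the trained denoising Transformer, and mixing the two predictions in log space followed by renormalisation is an assumption of this sketch rather than a detail stated above.

```python
import torch
import torch.nn.functional as F

def guided_log_probs(model, x_t, y, null_cond, lam=1.0):
    log_cond = model(x_t, y)             # log p_theta(x_{t-1} | x_t, y), shape (..., K+1)
    log_uncond = model(x_t, null_cond)   # log p_theta(x_{t-1} | x_t, n)
    mixed = log_uncond + (lam + 1.0) * (log_cond - log_uncond)   # Eq. (14), applied in log space
    return F.log_softmax(mixed, dim=-1)  # renormalise before categorical sampling

def reverse_step(model, x_t, y, null_cond, lam=1.0):
    probs = guided_log_probs(model, x_t, y, null_cond, lam).exp()
    return torch.distributions.Categorical(probs=probs).sample()
```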
E. Modelling Waveform in Discrete Latent Space via Residual Vector Quantizer

Inspired by the success of neural audio codec models such as SoundStream [24] and Encodec [23], in this study we additionally investigate directly predicting the waveform in a discrete latent space with the help of large-scale pre-trained neural audio codec models. Recently, many methods have been proposed to generate speech using neural codec models. For example, AudioLM [11] trains speech-to-speech language models on both k-means tokens from a self-supervised model and acoustic tokens from a neural codec model, leading to high-quality speech-to-speech generation. The concurrent work VALL-E [12] is the most related to ours: VALL-E proposes to train a two-stage model to synthesize speech based on text input and reference audio. However, VALL-E needs a two-stage training strategy, and its first stage is an autoregressive language model, which significantly slows down synthesis. In this study, we propose a non-autoregressive model based on discrete diffusion, which significantly improves the synthesis speed while maintaining high-quality synthesis performance.

Fig. 4. The framework of our proposed U-Transformer-based discrete diffusion decoder. L denotes the number of tokens generated by one codebook, and Nq denotes the number of codebooks.

As Figure 3 shows, compared to the Mel-VQ-VAE, the neural audio codec model includes more codebooks. Although using more codebooks brings better reconstruction performance, it also raises a new research problem: how to model such a long sequence with a Transformer? As is well known, the computational complexity of a Transformer grows with the sequence length. For a 10 s speech clip at a 24 kHz sampling rate, if we use 8 codebooks and 240-times downsampling in the encoder, we obtain 8000 tokens. Using a Transformer-based model to handle such a long sequence is challenging due to GPU memory limitations, so it is necessary to seek a new strategy for long-sequence modelling. In this study, we propose a U-Transformer architecture to simultaneously model multiple codebooks. As Figure 4 shows, we first use several convolution layers to downsample the input codebook matrix along the codebook-number dimension; after the convolution layers, we use a Transformer to model the relationships of the tokens in the latent space.
After that, we use several convolution layers and upsampling layers to recover the codebook-number dimension. Lastly, we use separate output layers to produce the prediction results for each codebook simultaneously.

1) Wave-VQ-Diffusion: There are three differences between Wave-VQ-Diffusion and Mel-VQ-Diffusion: (1) We adopt a U-Transformer architecture to model multiple codebooks simultaneously; note that we use the same Transformer architecture as in Mel-VQ-Diffusion. (2) We use a different embedding table for each codebook, due to the fact that tokens from different codebooks follow different data distributions. (3) We design an improved mask-and-uniform strategy for the diffusion process, based on the principle that the amount of information contained in a codebook gradually decreases from codebook 1 to codebook Nq. The first codebook includes most of the text, style and speaker identity information, while the following codebooks mainly include fine-grained acoustic details, which are crucial for speech quality. We conjecture that the first codebook's tokens are easy to recover conditioned on y, whereas the later codebooks' tokens are hard to recover because they have no obvious connection with y. Following the easy-first-generation principle, we should mask the last codebook (e.g., codebook Nq) at the start of the forward process and mask the foremost codebook (e.g., codebook 1) at the end of the forward process, so that the learnable reverse process follows an easy-first generative behavior.
However, the commonly-used mask-and-uniform strategy assumes that all tokens in the sequence are equally important, which violates the easy-first-generation principle. To solve this problem, we propose an improved mask-and-uniform strategy, whose details are presented in the following.

Improved mask-and-uniform strategy: We dynamically allocate different weights to different codebooks when we pre-define the transition matrix. Considering the aforementioned properties, we construct α_t^i, γ_t^i and β_t^i as follows:

\[ \alpha_t^i = 1 - \frac{t}{T} - \frac{\exp\!\big(\tfrac{i \,\%\, N_q}{2 N_q}\big)}{2T}, \qquad \gamma_t^i = \frac{t}{T} + \frac{\exp\!\big(\tfrac{i \,\%\, N_q}{2 N_q}\big)}{2T}, \qquad \beta_t^i = \frac{1 - \alpha_t^i - \gamma_t^i}{K}, \tag{15} \]

where Nq denotes the number of codebooks in the neural audio codec model and i denotes the token position in the sequence. In our study, we concatenate all of the tokens from codebook 1 to codebook Nq.
F. The Training and Inference Details

In this section, we summarize the overall training objective and the inference process.

1) Training objective: Our proposed InstructTTS can be trained in an end-to-end manner. The overall training objective is as follows:

\[ \mathcal{L} = \mathcal{L}_{\mathrm{diff}} + \mathcal{L}_{\mathrm{var}} + \lambda_1 I(z_e; c) + \lambda_2 I(z_e; z_{sid}) + \lambda_3 D_{\mathrm{Euc}}(z_c, z_e) - \beta_1 F_1(\theta_1) - \beta_2 F_2(\theta_2), \tag{16} \]

where Ldiff denotes the diffusion loss and Lvar denotes the duration, pitch and energy reconstruction losses; I(·;·) denotes mutual information and DEuc denotes the L2 loss; F1(θ1) and F2(θ2) denote the likelihood approximation models of qθ1(zsid|ze) and qθ2(ze|c), respectively. Details about the MI estimation and minimization can be found in [49]. The whole training process is summarized in Algorithm 1. Note that Algorithm 1 assumes a Mel-VQ-Diffusion decoder; when a Wave-VQ-Diffusion decoder is used, a similar process applies.

Algorithm 1 Training of InstructTTS
Require: pre-trained prompt encoder, a transition matrix Qt, timestep T, network parameters θ, training epochs N, NLSpeech dataset D, the encoder of the VQ-VAE Evq.
1:  for i = 1 to N do
2:    for (content prompt, style prompt, audio) in D do
3:      mel = get_mel_spectrogram(audio)
4:      x0 = Evq(mel)
5:      c = ContentEncoder(content prompt)
6:      ze = AudioEncoder(mel)
7:      zp = PromptEmb(style prompt)
8:      zs = SpeakerEmb(speaker id)
9:      y = c + ze + zs
10:     sample t from Uniform(1, 2, 3, ..., T)
11:     sample xt from q(xt|x0) based on Formula (10)
12:     estimate pθ(xt−1|xt, y)
13:     calculate the loss according to Formula (16)
14:     update network θ
15:   end for
16: end for
17: return network θ
2) Inference: In the inference process, we directly use the features extracted by the style prompt embedding model as the style features. In our experiments, we set T = 100 and ∆t = 1. The whole inference process is summarized in Algorithm 2.

Algorithm 2 Inference of InstructTTS
Require: time stride ∆t, timestep T, content prompt, style prompt, the decoder of the VQ-VAE G, network θ, stationary distribution p(xT).
1:  t = T, c = ContentEncoder(content prompt)
2:  zs = SpeakerEmb(speaker id)
3:  zp = PromptEmb(style prompt)
4:  y = c + zp + zs
5:  sample xt from p(xT)
6:  while t > 0 do
7:    sample xt based on Formula (14)
8:    t ← t − ∆t
9:  end while
10: return G(xt)
V. EXPERIMENTAL SETUP

A. Dataset and Data Pre-processing

1) Dataset for vector quantization pre-training: To obtain a robust and acoustically informative vector quantization model, we combine one internal dataset with three commonly-used publicly available TTS datasets: (1) our internal dataset, a Mandarin Chinese speech corpus containing 300 hours of speech data; (2) the VCTK dataset (https://datashare.ed.ac.uk/handle/10283/2651); (3) the AISHELL3 dataset [?]; and (4) the LibriTTS clean dataset [63]. In total, the training set contains 669 hours of speech data.

2) Dataset for InstructTTS: We use our internal dataset NLSpeech as the training and testing dataset; the details are given in Section III.

3) Data pre-processing: All audio clips have a sampling rate of 24 kHz. For Mel-VQ-VAE pre-training, log mel-spectrograms are extracted using a 1024-point Hanning window with a 240-point hop size and 80 mel bins. The PyWorld toolkit (https://github.com/JeremyCCHsu/Python-Wrapper-for-World-Vocoder) is used to compute F0 values from the speech signals. Energy features are computed by taking the l2-norm of the frequency bins in the STFT magnitudes.
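A sketch of the feature extraction settings above (24 kHz audio, 1024-point Hanning window, 240-sample hop, 80 mel bins, PyWorld F0 and l2-norm energy) might look as follows; the choice of librosa and the log floor are assumptions of this sketch.

```python
import numpy as np
import librosa
import pyworld

def extract_features(wav_path):
    y, sr = librosa.load(wav_path, sr=24000)
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=1024, hop_length=240, win_length=1024,
        window="hann", n_mels=80)
    log_mel = np.log(np.maximum(mel, 1e-5))                      # log mel-spectrogram
    f0, t = pyworld.dio(y.astype(np.float64), sr,
                        frame_period=240 / sr * 1000)            # raw F0 contour (Hz), 10 ms frames
    f0 = pyworld.stonemask(y.astype(np.float64), f0, t, sr)      # refine the F0 estimate
    stft = np.abs(librosa.stft(y, n_fft=1024, hop_length=240, win_length=1024))
    energy = np.linalg.norm(stft, axis=0)                        # per-frame L2 energy
    return log_mel, f0, energy
```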
B. Implementation Details

We first pre-train the Mel-VQ-VAE and the neural audio codec model. Then we fix the parameters of the pre-trained models and train the InstructTTS model in an end-to-end manner. In the following, we introduce the details of the network structure and the training strategy.

1) VQ-VAE: In this study, we follow VQ-GAN [20], [25], adopting a similar network architecture for the VQ-VAE encoder Evq, decoder G and discriminator D. To preserve more time-dimension information, we set a downsampling factor of 2 along the time axis and a downsampling factor of 20 along the frequency axis. For the codebook Z, the dimension of each codeword vector nz is set to 256 and the codebook dictionary size K is set to 512. The learning rate is fixed and determined as the product of a base learning rate, the number of GPUs used and the batch size; in our experiments, the base learning rate is set to 1 × 10^{-6}. The Adam optimizer [64] (with betas 0.5 and 0.9) is adopted to optimize the weights. We train the VQ-VAE with batches of 24 mel-spectrograms on 8 Nvidia V100 GPUs. The training takes about 3 days on our dataset.

2) Neural Audio Codec Model: Inspired by the success of Encodec [23] and SoundStream [24], we adopt a network architecture similar to the Encodec model. Specifically, the encoder model E consists of a 1D convolution layer with 32 hidden channels and a kernel size of 7, followed by 4 convolution blocks. Each convolution block is composed of a single residual unit followed by a down-sampling layer consisting of a strided convolution layer whose kernel size is twice the stride. The residual unit contains two convolution layers and a skip connection. The number of channels is doubled whenever down-sampling occurs.
The convolution blocks are followed by a two-layer LSTM for sequence modeling and a final 1D convolution layer with a kernel size of 7 and 32 output channels. In our study, we set the strides to S = [6, 5, 4, 2]. We set the maximum number of codebooks to 12 during training. Similar to SoundStream [24], quantizer dropout is used. To maintain high-quality audio reconstruction, a multi-scale STFT-based (MS-STFT) discriminator is used. We train the neural audio codec model with batches of 32 audio segments (we randomly crop a 1-second segment from each audio sample) on 32 Nvidia V100 GPUs. The training takes about 3 days for 300 epochs on our dataset.
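A rough PyTorch sketch of the codec encoder described above is given below: a stem convolution, four strided down-sampling blocks (strides 6, 5, 4 and 2) each preceded by a residual unit, a two-layer LSTM and a final convolution with 32 output channels. Activation functions, kernel sizes other than those stated and padding choices are assumptions of this sketch, not the exact configuration used in the paper.

```python
import torch
import torch.nn as nn

class ResidualUnit(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.block = nn.Sequential(
            nn.ELU(), nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.ELU(), nn.Conv1d(channels, channels, kernel_size=1),
        )

    def forward(self, x):
        return x + self.block(x)  # skip connection

class CodecEncoder(nn.Module):
    def __init__(self, hidden=32, strides=(6, 5, 4, 2)):
        super().__init__()
        layers = [nn.Conv1d(1, hidden, kernel_size=7, padding=3)]
        ch = hidden
        for s in strides:
            layers += [ResidualUnit(ch),
                       nn.ELU(),
                       nn.Conv1d(ch, ch * 2, kernel_size=2 * s, stride=s, padding=s // 2)]
            ch *= 2  # channels double at every down-sampling step
        self.conv = nn.Sequential(*layers)
        self.lstm = nn.LSTM(ch, ch, num_layers=2, batch_first=True)
        self.out = nn.Conv1d(ch, 32, kernel_size=7, padding=3)

    def forward(self, wav):                      # wav: (B, 1, T)
        h = self.conv(wav)                       # roughly (B, C, T / 240)
        h, _ = self.lstm(h.transpose(1, 2))      # LSTM over the time axis
        return self.out(h.transpose(1, 2))       # (B, 32, ~T / 240)

enc = CodecEncoder()
latent = enc(torch.randn(1, 1, 24000))           # one second of audio at 24 kHz
print(latent.shape)                              # roughly (1, 32, 100)
```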
3) InstructTTS: Our proposed InstructTTS consists of three main parts: a style encoder, a content encoder and a discrete diffusion decoder. For the content encoder, we follow FastSpeech2 [2], using the same architecture for the phoneme encoder and the variance adaptor. The style encoder includes a pre-trained prompt encoder model (see Section IV-C for details) and an audio encoder. The audio encoder consists of two convolution layers and one multi-head attention module. For the discrete diffusion model, we follow an architecture similar to [26]: we build a 12-layer, 8-head Transformer with a dimension of 256 for the decoder. Each Transformer block contains a full-context attention layer, a linear fusion layer to combine the conditional features, and a feed-forward network block. For the default setting, we set the number of timesteps to T = 100. We adopt a linear schedule strategy, which linearly increases γt from 0 to 0.9 and βt from 0 to 0.1, and decreases αt from 1 to 0. We optimize our network using the AdamW optimizer [65] with β1 = 0.9 and β2 = 0.94. The base learning rate is 3 × 10^{-6}, and the batch size is 16 per GPU.
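One decoder block of the kind described above could be sketched as follows: full-context self-attention, a linear fusion layer that mixes in the conditioning features, and a feed-forward block. How the conditioning is fused (here, concatenation followed by a linear projection) and the placement of the layer norms are assumptions of this sketch; the text above only states that a linear fusion layer is used.

```python
import torch
import torch.nn as nn

class DiffusionDecoderBlock(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.fuse = nn.Linear(2 * dim, dim)          # combine token states with the condition y
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.norm3 = nn.LayerNorm(dim)

    def forward(self, x, y):                         # x: (B, L, dim) token states, y: (B, L, dim) condition
        a, _ = self.attn(x, x, x, need_weights=False)
        x = self.norm1(x + a)                        # full-context self-attention
        x = self.norm2(x + self.fuse(torch.cat([x, y], dim=-1)))
        return self.norm3(x + self.ffn(x))

block = DiffusionDecoderBlock()
out = block(torch.randn(2, 120, 256), torch.randn(2, 120, 256))
```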
C. Baseline Approach

In the literature, there is no existing expressive TTS model that uses a natural language style prompt to control stylistic generation. Following the traditional neural TTS paradigm, which predicts intermediate acoustic features such as mel-spectrograms from text input, we adapt the StyleSpeech model proposed in [14] as the baseline approach. We replace the Mel-Style-Encoder in the StyleSpeech model with the same style encoder module used in InstructTTS, making the comparison as fair as possible. The baseline model uses the same HiFi-GAN vocoder as the proposed model to generate waveforms.

VI. EVALUATION METRIC

A. Objective Evaluation

We evaluate the synthesized speech from two aspects: speech quality and prosody similarity. For speech quality, we adopt Mel-cepstral distortion (MCD) [66], the structural similarity index measure (SSIM) [67] and Short-Time Objective Intelligibility (STOI) [68]. For prosody similarity, we use three pitch-related metrics: Gross Pitch Error (GPE), Voicing Decision Error (VDE) [69] and F0 Frame Error (FFE) [70]. GPE, VDE and FFE are widely applied to evaluate the performance of expressive TTS. The details of these metrics are introduced as follows.

1) Mel-cepstral distortion: Spectral features based on the short-term power spectrum of sound, such as Mel-cepstral coefficients (MCEPs), contain rich information about expressivity and emotion [71]. Mel-cepstral distortion (MCD) [66] is a widely adopted metric to measure spectrum similarity, computed as

\[ \mathrm{MCD} = \frac{1}{T} \sum_{t=0}^{T-1} \sqrt{ \sum_{m=1}^{M} (c_{m,t} - \hat{c}_{m,t})^2 }, \tag{17} \]

where c_{m,t} and ĉ_{m,t} denote the m-th mel-frequency cepstral coefficient (MFCC) of the t-th frame from the reference and synthesized speech, respectively. We sum the squared differences over the first M MFCCs, and in this study we set M = 24.
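Eq. (17) can be transcribed directly in numpy, assuming the reference and synthesized MFCC matrices are already time-aligned and truncated to the same number of frames:

```python
import numpy as np

def mcd(c_ref, c_syn, M=24):
    """c_ref, c_syn: arrays of shape (T, >=M) holding per-frame MFCCs."""
    diff = c_ref[:, :M] - c_syn[:, :M]
    return np.mean(np.sqrt(np.sum(diff ** 2, axis=1)))   # average frame-wise distance, Eq. (17)
```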
2) SSIM and STOI: The structural similarity index measure (SSIM) [67] and Short-Time Objective Intelligibility (STOI) [68] are effective metrics for evaluating speech clarity and intelligibility. Following previous work [72], we also adopt them as metrics for speech quality.

3) Prosody-related metrics: Given that pitch is considered a major prosodic factor contributing to speech emotion and is closely correlated with the activity level [73], [74], we adopt three common pitch similarity metrics to evaluate the synthesis results: Gross Pitch Error (GPE), Voicing Decision Error (VDE) and F0 Frame Error (FFE) [70], with detailed descriptions as follows:
1) Gross Pitch Error (GPE): measures the pitch similarity between a pair of compared utterances.
2) Voicing Decision Error (VDE): measures the difference in voiced/unvoiced decisions between a pair of compared utterances.
3) F0 Frame Error (FFE): reflects both the pitch similarity and the voiced/unvoiced decision differences between a pair of compared utterances.
GPE, VDE and FFE have been used as common objective evaluation metrics for expressive TTS [7].
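The three pitch metrics can be sketched as below, given frame-aligned F0 tracks in which unvoiced frames are marked with 0. The 20% relative-deviation threshold inside GPE is the conventional choice and an assumption of this sketch rather than a value stated above.

```python
import numpy as np

def pitch_metrics(f0_ref, f0_syn, tol=0.2):
    v_ref, v_syn = f0_ref > 0, f0_syn > 0
    vde = np.mean(v_ref != v_syn)                               # voicing decision errors
    both = v_ref & v_syn
    gross = both & (np.abs(f0_syn - f0_ref) > tol * f0_ref)     # large pitch deviations
    gpe = gross.sum() / max(both.sum(), 1)                      # GPE over co-voiced frames
    ffe = (gross.sum() + (v_ref != v_syn).sum()) / len(f0_ref)  # FFE combines both error types
    return gpe, vde, ffe
```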
B. Subjective Evaluation

To further validate the effectiveness of our proposed method, we conduct subjective evaluation from two aspects: speech quality and style relevance.

1) Speech quality: We first conduct a Mean Opinion Score (MOS) test to evaluate speech quality, which assesses the naturalness, fidelity and intelligibility of the speech. All participants are asked to listen to the reference speech ("Ground truth") and the synthesized speech, and to score the quality of each speech sample on a 5-point scale ('5' for excellent, '4' for good, '3' for fair, '2' for poor, and '1' for bad). Each audio sample is rated by at least 20 testers.

2) Style relevance between the synthesized speech and the natural language prompt: We conduct a relevance mean opinion score (RMOS) test for speaking style relevance on the testing set, to evaluate the relevance between the synthesized speech and the prompt. All participants are asked to read the natural language prompt and then listen to the synthesized speech. After that, the participants are asked to score the relevance of each speech sample on a 5-point scale ('5' for excellent, '4' for good, '3' for fair, '2' for poor, and '1' for bad). Each audio sample is rated by at least 20 testers.

3) AXY test: We use an AXY test [7] to assess the style relevance between the generated speech and its corresponding natural language style prompt. An AXY test assesses style transfer performance: raters are asked to give a 7-point score (from -3 to 3) and to choose the speech sample that sounds closer to the target style in terms of style expression. For each reference (A), the listeners are asked to choose a preferred sample among those synthesized by the baseline model (X) and the proposed method (Y), from which AXY preference rates are calculated. The 7-point scale ranges from "X is much closer" to "Both are about the same distance" to "Y is much closer", and can
TABLE II: Objective and subjective evaluation as well as model size results. MCD, SSIM, STOI, GPE, VDE and FFE are adopted as objective metrics. GT denotes the ground-truth speech; GT (voc) denotes speech recovered from the mel-spectrogram with a pre-trained vocoder (HiFi-GAN). MOS and RMOS, the subjective metrics, are presented with 95% confidence intervals.

Model        Decoder        MCD(↓)  SSIM(↑)  STOI(↑)  GPE(↓)  VDE(↓)  FFE(↓)  MOS(↑)        RMOS(↑)
GT           -              -       -        -        -       -       -       4.62 ± 0.05   4.65 ± 0.05
GT (voc)     -              5.02    0.695    0.893    0.006   0.076   0.08    4.41 ± 0.07   4.61 ± 0.07
Baseline     Mel-decoder    5.75    0.385    0.613    0.476   0.347   0.42    4.04 ± 0.08   3.85 ± 0.1
InstructTTS  Mel-VQ-Diff    5.69    0.387    0.607    0.479   0.343   0.40    4.35 ± 0.07   4.22 ± 0.09
InstructTTS  Wave-VQ-Diff   5.77    0.365    0.587    0.433   0.343   0.39    3.59 ± 0.08   4.27 ± 0.07
TABLE III: The AXY preference test results for speaking-style relevance.

X         Y                    7-point score
Baseline  InstructTTS (Mel)    0.72
Baseline  InstructTTS (Wave)   0.84
TABLE IV: The emotion classification probability (%) comparison between our proposed methods and the baseline. For each type of emotion, we choose 15 samples. The table reports the probability values averaged over the 15 utterances.

Model               Sad     Happy   Angry   Overall
GT                  100     88.80   94.70   95.20
Baseline            64.28   66.60   68.15   66.70
InstructTTS (Mel)   71.42   66.60   68.40   69.10
InstructTTS (Wave)  71.42   55.50   84.21   71.42
C. Emotion Perception Test
Given that speaking style is closely related to emotion, we choose three types of test samples (happy, sad and angry) from our test set based on the natural language prompts, and we expect the proposed methods to generate speech with the corresponding emotion under the guidance of the prompt. We propose to use the emotion classification probability to evaluate emotion perception performance. Intuitively, the classification probabilities summarize the emotion information passed from the previous layers to the final output layer, so we regard them as an effective proxy for judging the emotional expressiveness of the synthesized speech. To this end, we first pre-train an emotion classification model on our internal emotion classification dataset: we adopt a pre-trained wav2vec2 [75] model as the feature extractor, followed by two linear layers and one softmax layer.
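A minimal sketch of such a classifier head is given below. The backbone checkpoint name, hidden size, nonlinearity and number of emotion classes are illustrative assumptions; the paper only states that a pre-trained wav2vec2 model is followed by two linear layers and a softmax layer.

```python
import torch
import torch.nn as nn
from transformers import Wav2Vec2Model

class EmotionClassifier(nn.Module):
    """wav2vec2 feature extractor + two linear layers + softmax, as described above."""

    def __init__(self, num_emotions=3, hidden_dim=256,
                 backbone="facebook/wav2vec2-base"):  # checkpoint name is an assumption
        super().__init__()
        self.backbone = Wav2Vec2Model.from_pretrained(backbone)
        feat_dim = self.backbone.config.hidden_size
        self.head = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.ReLU(),  # nonlinearity between the two linear layers; not specified in the paper
            nn.Linear(hidden_dim, num_emotions),
            nn.Softmax(dim=-1),
        )

    def forward(self, waveform):  # waveform: (batch, samples) at 16 kHz
        hidden = self.backbone(waveform).last_hidden_state  # (batch, frames, feat_dim)
        pooled = hidden.mean(dim=1)  # mean-pool over time (a common choice)
        return self.head(pooled)     # per-class emotion probabilities

# Example: probs = EmotionClassifier()(torch.randn(1, 16000))  # shape (1, num_emotions)
```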
VII. RESULTS AND ANALYSIS
In this section, we conduct experiments to verify the effectiveness of the proposed InstructTTS. We first compare the performance of InstructTTS against the baseline, and then conduct ablation studies to validate the effectiveness of each component of the proposed method.

TABLE V: The ablation study for cross-modal representation learning, evaluated on the test set of the Chinese STS-B corpus. SCC denotes the Spearman correlation coefficient.

Model                      SCC (%)
w/o cross-modal learning   80.4
w/  cross-modal learning   80.94

A. Comparison between the proposed InstructTTS and the baseline
1) Analysis of objective metrics: Table II reports the objective metrics (MCD, SSIM, STOI, GPE, VDE, FFE) for the proposed InstructTTS and the baseline system. We have the following observations: (1) InstructTTS achieves better performance than the baseline system in terms of both speech quality and prosody. (2) Using Mel-VQ-Diffusion as the decoder yields better speech quality than Wave-VQ-Diffusion, but Wave-VQ-Diffusion is superior at preserving prosodic details. One reason is that the pre-trained Mel-VQ-VAE downsamples the frequency dimension by a factor of 20, which may harm pitch information. In contrast, Wave-VQ-Diffusion directly models all of the information in the time domain, so prosody-related information is well preserved, although some acoustic details may be lost.
2) Subjective evaluation: We conduct crowd-sourced mean opinion score (MOS) tests to perceptually evaluate the quality of the synthesized speech. We also conduct crowd-sourced relevance mean opinion score (RMOS) tests to evaluate the relevance between the synthesized speech and the prompt. The results are shown in Table II. InstructTTS (Mel) obtains the best MOS and InstructTTS (Wave) obtains the best RMOS, and both InstructTTS variants obtain better RMOS than the baseline. The subjective evaluation results are consistent with the objective evaluation results.
We also observe that the speech quality of InstructTTS (Wave) still has room for improvement, which we will study further in future work.
We additionally conduct an AXY preference test to compare InstructTTS and the baseline in terms of the naturalness of prosody in the generated speech. From Table III, we can see that the raters show a much higher preference for the proposed InstructTTS (Mel) and InstructTTS (Wave) than for the baseline model.

TABLE VI: The text-to-audio retrieval performance on the test set. We use recall at rank K (R@K) as the metric.

Loss Type         R@1    R@5    R@10
Contrastive Loss  11.62  42.97  61.72
InfoNCE           15.62  42.97  63.67

TABLE VII: The ablation study for the mutual information minimization (MIM) training strategy.

Model               MIM   MCD(↓)  SSIM(↑)  FFE(↓)
InstructTTS (Mel)         5.94    0.368    0.44
InstructTTS (Mel)   ✓     5.69    0.387    0.40
InstructTTS (Wave)        5.91    0.355    0.43
InstructTTS (Wave)  ✓     5.77    0.365    0.39

3) Emotion perception evaluation: To further evaluate the expressiveness of InstructTTS in modelling speaking emotion and style, we conduct a perception evaluation with a speech emotion classification model.
The details are introduced in Section VI-C and the results are reported in Table IV. Our pre-trained speech emotion classification (SEC) model obtains good classification performance on the ground-truth set, which shows that the SEC model is effective. Furthermore, both proposed InstructTTS variants obtain better classification performance than the baseline, with the InstructTTS (Wave) model performing best. We note that these evaluation results are consistent with the FFE results.

B. Ablation studies for InstructTTS
1) The impact of cross-modal representation learning for robust style embedding: In this section, we explore the effectiveness of the cross-modal representation learning proposed in Section IV-C. Table V presents the results. After fine-tuning with the proposed cross-modal representation learning, the RoBERTa model achieves even better performance on the STS task than fine-tuning with SimCSE alone.
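For reference, the sketch below shows a generic symmetric InfoNCE objective over paired prompt/audio embeddings, the kind of cross-modal objective involved here; the embedding dimension, temperature and batch construction are illustrative assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn.functional as F

def info_nce(text_emb, audio_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired (style prompt, audio) embeddings.

    text_emb, audio_emb: (batch, dim) tensors; row i of each tensor forms a positive
    pair, and all other rows in the batch serve as negatives.
    """
    text_emb = F.normalize(text_emb, dim=-1)
    audio_emb = F.normalize(audio_emb, dim=-1)
    logits = text_emb @ audio_emb.t() / temperature       # (batch, batch) similarities
    targets = torch.arange(text_emb.size(0), device=text_emb.device)
    loss_t2a = F.cross_entropy(logits, targets)           # text-to-audio direction
    loss_a2t = F.cross_entropy(logits.t(), targets)       # audio-to-text direction
    return 0.5 * (loss_t2a + loss_a2t)

loss = info_nce(torch.randn(8, 256), torch.randn(8, 256))  # toy batch of 8 pairs
```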
Furthermore, we evaluate the text-to-audio retrieval performance on the test set. As Table VI shows, using the InfoNCE loss as the training objective yields better retrieval performance than the contrastive ranking loss.
2) The impact of mutual information minimization (MIM) training: In this study, we use a mutual information minimization strategy to constrain the information encoded by the audio encoder, so that it encodes only style-related information. Here, we conduct ablation studies to investigate whether the proposed MIM strategy brings better performance. The experimental results are reported in Table VII. Using the MIM training strategy yields significant improvements in both speech quality and pitch similarity, which demonstrates the effectiveness of the feature disentanglement strategy.
3) The effectiveness of classifier-free guidance: In this study, we use a classifier-free guidance (CFG) strategy to strengthen the connection between the conditional information and the predicted results. To validate its effectiveness, we conduct an ablation study, with the results shown in Table VIII. Using the CFG strategy brings better performance because it strengthens the connection between the conditional information and the predicted results, forcing the model to make better use of the conditional information.
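As an illustration of how classifier-free guidance is typically applied at sampling time (a generic sketch, not the authors' exact formulation), the conditional and unconditional predictions of the denoising network can be combined with a guidance scale before sampling the next set of tokens; `model`, `null_cond` and the guidance scale below are assumed placeholders.

```python
import torch

def cfg_logits(model, x_t, t, cond, null_cond, guidance_scale=2.0):
    """Classifier-free guidance for one discrete (VQ-token) diffusion step.

    model(x_t, t, cond) is assumed to return per-token logits over the codebook;
    null_cond is the learned "unconditional" input used when the condition is
    dropped during training. The guidance scale is an illustrative value.
    """
    logits_cond = model(x_t, t, cond)         # conditioned on the style prompt embedding
    logits_uncond = model(x_t, t, null_cond)  # condition dropped
    # Push the prediction away from the unconditional one, towards the conditional one.
    return logits_uncond + guidance_scale * (logits_cond - logits_uncond)

# At each reverse-diffusion step, the guided logits replace the conditional logits:
# probs = torch.softmax(cfg_logits(model, x_t, t, cond, null_cond), dim=-1)
# x_prev = torch.distributions.Categorical(probs=probs).sample()
```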
TABLE VIII: The ablation study for the effectiveness of classifier-free guidance (CFG).

Model               CFG   MCD(↓)  SSIM(↑)  FFE(↓)
InstructTTS (Mel)         5.75    0.36     0.413
InstructTTS (Mel)   ✓     5.69    0.387    0.40
InstructTTS (Wave)        5.75    0.353    0.41
InstructTTS (Wave)  ✓     5.77    0.365    0.39

[Fig. 5: Pitch tracks. F0 contours (Hz, per frame) of 10 different runs with the same text input, speaker id and style prompt conditioning.]

4) The effectiveness of the improved diffusion strategy for the Wave-VQ-Diffusion decoder: Table IX shows the experimental results obtained with different diffusion strategies. Our proposed improved mask-and-uniform strategy brings better performance, and the experimental results validate the proposed easy-first-generation principle.
5) How many codebooks (Nq) should be used when training InstructTTS (Wave): As discussed in Section V-B, we train a neural audio codec model with 12 codebooks in total. In practice, we do not need to use all 12 codebooks: although using more codebooks improves speech quality, it also increases the burden on the network. To choose a suitable Nq, we follow two principles: (1) using Nq codebooks should give satisfactory reconstruction performance;
(2) Nq should be as small as possible. We use an out-of-domain test set (1024 high-quality 24 kHz audio samples) to evaluate the reconstruction performance of our neural audio codec model and of the pre-trained Encodec model (https://github.com/facebookresearch/encodec). Table X shows the experimental results. When using 8 codebooks, we obtain reconstruction performance comparable to Encodec; thus, we set Nq = 8 when training InstructTTS (Wave) in this study. We also explored Nq = 12 but observed no obvious performance boost. We conjecture that the amount of data in the NLSpeech dataset is still insufficient and that a larger-scale dataset could bring further improvement.

TABLE IX: The ablation study for different diffusion strategies. MAR represents the mask-and-replace strategy; I-MAR denotes our improved mask-and-replace strategy. Note that the experiments in this part use the Wave-VQ-Diffusion-based InstructTTS.

Model                MCD   SSIM   STOI   FFE
InstructTTS (MAR)    5.85  0.354  0.565  0.42
InstructTTS (I-MAR)  5.77  0.365  0.587  0.39

TABLE X: The neural audio codec's reconstruction performance comparison. Nq denotes that Nq codebooks are used to reconstruct the audio.

Model                            Nq   PESQ   STOI
Neural Audio Codec Model (ours)  1    1.974  0.802
Neural Audio Codec Model (ours)  2    2.632  0.869
Neural Audio Codec Model (ours)  4    3.240  0.910
Neural Audio Codec Model (ours)  8    3.54   0.932
Neural Audio Codec Model (ours)  12   3.62   0.937
Encodec                          24   3.206  0.965
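The reconstruction comparison above can be reproduced with standard reference-based metrics. A minimal sketch using the widely available pesq and pystoi packages is shown below; the file names and the 16 kHz resampling step (wideband PESQ expects 16 kHz input) are illustrative assumptions.

```python
import librosa
from pesq import pesq      # pip install pesq
from pystoi import stoi    # pip install pystoi

def reconstruction_scores(ref_path, rec_path):
    """PESQ (wideband) and STOI between a reference waveform and its codec reconstruction."""
    # Wideband PESQ is defined for 16 kHz input, so both signals are resampled on load.
    ref, sr = librosa.load(ref_path, sr=16000)
    rec, _ = librosa.load(rec_path, sr=16000)
    n = min(len(ref), len(rec))  # guard against small length mismatches
    return {
        "pesq": pesq(sr, ref[:n], rec[:n], "wb"),
        "stoi": stoi(ref[:n], rec[:n], sr, extended=False),
    }

# Example (hypothetical file names):
# print(reconstruction_scores("ref_0001.wav", "codec_nq8_0001.wav"))
```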
C. Synthesis Variation
Unlike the baseline system, whose output is uniquely determined by the input text and the other conditional information (such as speaker identity and the natural language prompt) at inference time, InstructTTS performs sampling at each denoising step and can therefore inject variation into the generated speech. To demonstrate this, we run an InstructTTS (Mel) model 10 times for a particular input text, speaker and natural language prompt, and then compute the F0 contours of the generated speech samples. As visualized in Figure 5, InstructTTS can synthesize speech with diverse pitch.
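As a rough illustration of this check, the sketch below extracts an F0 contour from each synthesized waveform so the contours can be overlaid as in Figure 5; the `synthesize` call and the 24 kHz sampling rate are placeholders for whatever inference interface is actually used.

```python
import numpy as np
import librosa

def f0_contour(wav, sr):
    """Frame-level F0 track (Hz) of a waveform; unvoiced frames are returned as NaN."""
    f0, _, _ = librosa.pyin(wav,
                            fmin=librosa.note_to_hz("C2"),
                            fmax=librosa.note_to_hz("C7"),
                            sr=sr)
    return f0

# Hypothetical usage, where `synthesize` stands in for the TTS inference call:
# contours = [f0_contour(synthesize(text, speaker, prompt), sr=24000) for _ in range(10)]
# Overlaying `contours` reproduces the kind of plot shown in Figure 5.

# Smoke test on noise, just to show the expected per-frame output shape:
print(f0_contour(np.random.randn(24000).astype(np.float32), sr=24000).shape)
```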
VIII. CONCLUSION
In this work, we present InstructTTS, which synthesizes expressive speech guided by a natural language style prompt. To the best of our knowledge, this is the first work to use long and complex natural language prompts to control speaking style. For the acoustic model, we propose a new perspective on expressive TTS: we model expressive TTS in a discrete latent space and cast speech synthesis as a language modelling task. We explore two modelling approaches: (1) modelling mel-spectrograms with the help of a pre-trained Mel-VQ-VAE model, and (2) modelling waveforms with the help of a pre-trained neural audio codec model. In terms of model structure, we propose a novel U-Transformer that can effectively model long sequences. Our experiments demonstrate the advantages of the proposed method.
This work still has some limitations to be addressed in future work: (1) the inference speed is limited because the number of diffusion steps is large (we use 100 diffusion steps); (2) we will build a large-scale dataset to train InstructTTS models, similar to VALL-E and AudioLM. We believe that InstructTTS will become more robust as the amount of training data increases.
ACKNOWLEDGMENTS
We thank our colleagues Mingjie Jin and Dan Su for their help with this paper; they helped us build the NLSpeech dataset.

REFERENCES
[1] Y. Wang, R. Skerry-Ryan, D. Stanton, Y. Wu, R. J. Weiss, N. Jaitly, Z. Yang, Y. Xiao, Z. Chen, S. Bengio, et al., "Tacotron: Towards end-to-end speech synthesis," arXiv preprint arXiv:1703.10135, 2017.
[2] Y. Ren, C. Hu, X. Tan, T. Qin, S. Zhao, et al., "FastSpeech 2: Fast and high-quality end-to-end text to speech," arXiv preprint arXiv:2006.04558, 2020.
[3] J. Kim, J. Kong, and J. Son, "Conditional variational autoencoder with adversarial learning for end-to-end text-to-speech," in International Conference on Machine Learning, pp. 5530-5540, PMLR, 2021.
[4] N. Tits, F. Wang, K. E. Haddad, V. Pagel, and T. Dutoit, "Visualization and interpretation of latent spaces for controlling expressive speech synthesis through audio analysis," arXiv preprint arXiv:1903.11570, 2019.
[5] N. Tits, K. El Haddad, and T. Dutoit, "Exploring transfer learning for low resource emotional TTS," in Proceedings of SAI Intelligent Systems Conference, pp. 52-60, Springer, 2019.
[6] Y. Wang, D. Stanton, Y. Zhang, R. Skerry-Ryan, E. Battenberg, J. Shor, Y. Xiao, Y. Jia, F. Ren, and R. A. Saurous, "Style tokens: Unsupervised style modeling, control and transfer in end-to-end speech synthesis," in International Conference on Machine Learning, pp. 5180-5189, PMLR, 2018.
[7] R. Skerry-Ryan, E. Battenberg, Y. Xiao, Y. Wang, D. Stanton, J. Shor, R. Weiss, R. Clark, and R. A. Saurous, "Towards end-to-end prosody transfer for expressive speech synthesis with Tacotron," in International Conference on Machine Learning, pp. 4693-4702, PMLR, 2018.
[8] Y. Jia, Y. Zhang, R. Weiss, Q. Wang, J. Shen, F. Ren, P. Nguyen, R. Pang, I. Lopez Moreno, Y. Wu, et al., "Transfer learning from speaker verification to multispeaker text-to-speech synthesis," Advances in Neural Information Processing Systems, vol. 31, 2018.
[9] D. Yang, S. Liu, J. Yu, H. Wang, C. Weng, and Y. Zou, "NoreSpeech: Knowledge distillation based conditional diffusion model for noise-robust expressive TTS," arXiv preprint arXiv:2211.02448, 2022.
[10] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al., "Language models are few-shot learners," Advances in Neural Information Processing Systems, vol. 33, pp. 1877-1901, 2020.
[11] Z. Borsos, R. Marinier, D. Vincent, E. Kharitonov, O. Pietquin, M. Sharifi, O. Teboul, D. Grangier, M. Tagliasacchi, and N. Zeghidour, "AudioLM: A language modeling approach to audio generation," arXiv preprint arXiv:2209.03143, 2022.
[12] C. Wang, S. Chen, Y. Wu, Z. Zhang, L. Zhou, S. Liu, Z. Chen, Y. Liu, H. Wang, J. Li, et al., "Neural codec language models are zero-shot text to speech synthesizers," arXiv preprint arXiv:2301.02111, 2023.
[13] A. Agostinelli et al., "MusicLM: Generating music from text," arXiv preprint arXiv:2301.11325, 2023.
[14] D. Min, D. Lee, E. Yang, and S. Hwang, "Meta-StyleSpeech: Multi-speaker adaptive text-to-speech generation," in International Conference on Machine Learning, pp. 7748-7759, PMLR, 2021.
[15] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, et al., "Learning transferable visual models from natural language supervision," in International Conference on Machine Learning, pp. 8748-8763, PMLR, 2021.
[16] A. S.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Koepke, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content='-M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Oncescu, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Henriques, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Akata, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Albanie, “Audio retrieval with natural language queries: A benchmark study,” IEEE Transactions on Multimedia, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' [17] W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Zhou, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Li, and Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Tian, “Recent advance in content-based image retrieval: A literature survey,” arXiv preprint arXiv:1706.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content='06064, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' [18] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Van Den Oord, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Vinyals, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=', “Neural discrete representation learning,” Advances in Neural Information Processing Systems, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' 30, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' [19] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Razavi, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Van den Oord, and O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Vinyals, “Generating diverse high-fidelity images with vq-vae-2,” Advances in Neural Information Processing Systems, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' 32, 2019.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' [20] P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Esser, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Rombach, and B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Ommer, “Taming transformers for high- resolution image synthesis,” in Proceedings of the IEEE/CVF Confer- ence on Computer Vision and Pattern Recognition, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' 12873–12883, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' [21] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Baevski, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Schneider, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Auli, “vq-wav2vec: Self- supervised learning of discrete speech representations,” arXiv preprint arXiv:1910.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content='05453, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' [22] W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content='-N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Hsu, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Bolte, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content='-H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Tsai, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Lakhotia, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Salakhutdinov, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Mohamed, “Hubert: Self-supervised speech representation learning by masked prediction of hidden units,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' 29, pp.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' 3451–3460, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' [23] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' D´efossez, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Copet, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Synnaeve, and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Adi, “High fidelity neural audio compression,” arXiv preprint arXiv:2210.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content='13438, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' [24] N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Zeghidour, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Luebs, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Omran, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Skoglund, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Tagliasacchi, “Soundstream: An end-to-end neural audio codec,” IEEE/ACM Transac- tions on Audio, Speech, and Language Processing, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' 30, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' 495–507, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' [25] V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Iashin and E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Rahtu, “Taming visually guided sound generation,” in British Machine Vision Conference (BMVC), 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' JOURNAL OF LATEX CLASS FILES, VOL.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' 14, NO.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' 8, AUGUST 2021 13 [26] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Yang, J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Yu, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Wang, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Wang, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Weng, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Zou, and D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Yu, “Diffsound: Discrete diffusion model for text-to-sound generation,” arXiv preprint arXiv:2207.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content='09983, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' [27] I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Elias, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Zen, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Shen, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Zhang, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Jia, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Skerry-Ryan, and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Wu, “Parallel tacotron 2: A non-autoregressive neural tts model with differ- entiable duration modeling,” arXiv preprint arXiv:2103.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content='14574, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' [28] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Kim, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Kim, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Kong, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Yoon, “Glow-tts: A generative flow for text-to-speech via monotonic alignment search,” Advances in Neural Information Processing Systems, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' 33, pp.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' 8067–8077, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' [29] X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Li, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Song, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Li, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Wu, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Jia, and H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Meng, “Towards multi- scale style control for expressive speech synthesis,” arXiv preprint arXiv:2104.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content='03521, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' [30] E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Casanova, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Shulby, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' G¨olge, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' M¨uller, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' de Oliveira, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Junior, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' d.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Soares, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Aluisio, and M.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Ponti, “Sc-glowtts: an efficient zero-shot multi-speaker text-to-speech model,” arXiv preprint arXiv:2104.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content='05557, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' [31] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Zhou, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Sisman, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Rana, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Schuller, and H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Li, “Speech synthe- sis with mixed emotions,” IEEE Transactions on Affective Computing, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' [32] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Huang, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Ren, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Liu, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Cui, and Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Zhao, “Generspeech: Towards style transfer for generalizable out-of-domain text-to-speech synthesis,” arXiv preprint arXiv:2205.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content='07211, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' [33] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Liu, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Yang, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Su, and D.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Yu, “Referee: Towards reference-free cross-speaker style transfer with low-quality data for expressive speech synthesis,” in ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' 6307–6311, IEEE, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' [34] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Kim, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Cheon, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Choi, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Kim, and N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Kim, “Expressive text-to-speech using style tag,” arXiv preprint arXiv:2104.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content='00436, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' [35] Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Guo, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Leng, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Wu, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Zhao, and X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Tan, “Prompttts: Controllable text-to-speech with text descriptions,” arXiv preprint arXiv:2211.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content='12171, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' [36] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Sohl-Dickstein, E.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Weiss, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Maheswaranathan, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Ganguli, “Deep unsupervised learning using nonequilibrium thermodynamics,” in International Conference on Machine Learning, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' 2256–2265, PMLR, 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' [37] P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Dhariwal and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Nichol, “Diffusion models beat gans on image synthesis,” Advances in Neural Information Processing Systems, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' 34, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' [38] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Gu, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Chen, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Bao, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Wen, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Zhang, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Chen, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Yuan, and B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Guo, “Vector quantized diffusion model for text-to-image synthesis,” arXiv preprint arXiv:2111.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content='14822, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' [39] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Nichol, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Dhariwal, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Ramesh, P.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Shyam, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Mishkin, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' McGrew, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Sutskever, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Chen, “Glide: Towards photorealistic image gen- eration and editing with text-guided diffusion models,” arXiv preprint arXiv:2112.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content='10741, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' [40] P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Esser, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Rombach, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Blattmann, and B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Ommer, “Imagebart: Bidi- rectional context with multinomial diffusion for autoregressive image synthesis,” Advances in Neural Information Processing Systems, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' 34, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' [41] Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Kong, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Ping, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Huang, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Zhao, and B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Catanzaro, “Dif- fwave: A versatile diffusion model for audio synthesis,” arXiv preprint arXiv:2009.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content='09761, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' [42] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Jeong, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Kim, S.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Cheon, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Choi, and N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Kim, “Diff- tts: A denoising diffusion model for text-to-speech,” arXiv preprint arXiv:2104.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content='01409, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' [43] V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Popov, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Vovk, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Gogoryan, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Sadekova, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Kudinov, “Grad- tts: A diffusion probabilistic model for text-to-speech,” in International Conference on Machine Learning, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' 8599–8608, PMLR, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' [44] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content='-g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Lee, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Kim, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Shin, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Tan, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Liu, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Meng, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Qin, W.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Chen, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Yoon, and T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content='-Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Liu, “Priorgrad: Improving conditional denois- ing diffusion models with data-driven adaptive prior,” arXiv preprint arXiv:2106.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content='06406, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' [45] E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Hoogeboom, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Nielsen, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Jaini, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Forr´e, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Welling, “Argmax flows and multinomial diffusion: Towards non-autoregressive language models,” arXiv preprint, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' [46] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Austin, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Johnson, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Ho, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Tarlow, and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' van den Berg, “Structured denoising diffusion models in discrete state-spaces,” Advances in Neural Information Processing Systems, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' 34, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' [47] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Ho, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Jain, and P.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Abbeel, “Denoising diffusion probabilistic models,” Advances in Neural Information Processing Systems, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' 33, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' 6840– 6851, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' [48] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Belghazi, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Baratin, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Rajeshwar, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Ozair, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Bengio, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Courville, and D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Hjelm, “Mutual information neural estimation,” in Proceedings of the 35th International Conference on Machine Learning (J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Dy and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Krause, eds.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' ), vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' 80 of Proceedings of Machine Learning Research, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' 531–540, PMLR, 10–15 Jul 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' [49] P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Cheng, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Hao, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Dai, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Liu, Z.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Gan, and L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Carin, “Club: A con- trastive log-ratio upper bound of mutual information,” in International conference on machine learning, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' 1779–1788, PMLR, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' [50] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Liu, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Ott, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Goyal, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Du, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Joshi, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Chen, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Levy, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Lewis, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Zettlemoyer, and V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Stoyanov, “Roberta: A robustly optimized bert pretraining approach,” arXiv preprint arXiv:1907.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content='11692, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' [51] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Gao, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Yao, and D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' Chen, “Simcse: Simple contrastive learning of sentence embeddings,” arXiv preprint arXiv:2104.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content='08821, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' [52] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MdFRT4oBgHgl3EQf2zi9/content/2301.13662v1.pdf'} +page_content=' v.' 
d. Oord, Y. Li, and O. Vinyals, "Representation learning with contrastive predictive coding," arXiv preprint arXiv:1807.03748, 2018.
[53] Y.-W. Chao, D. Yang, R. Gu, and Y. Zou, "3CMLF: Three-stage curriculum-based mutual learning framework for audio-text retrieval," in 2022 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), pp. 1602–1607, IEEE, 2022.
[54] S. Chopra, R. Hadsell, and Y. LeCun, "Learning a similarity metric discriminatively, with application to face verification," in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), vol. 1, pp. 539–546, IEEE, 2005.
[55] X. Tan, T. Qin, F. Soong, and T.-Y. Liu, "A survey on neural speech synthesis," arXiv preprint arXiv:2106.15561, 2021.
[56] J. Kong, J. Kim, and J. Bae, "HiFi-GAN: Generative adversarial networks for efficient and high fidelity speech synthesis," Advances in Neural Information Processing Systems, vol. 33, pp. 17022–17033, 2020.
[57] C. Du, Y. Guo, X. Chen, and K. Yu, "VQTTS: High-fidelity text-to-speech synthesis with self-supervised VQ acoustic feature," arXiv preprint arXiv:2204.00768, 2022.
[58] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, "Image-to-image translation with conditional adversarial networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1125–1134, 2017.
[59] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, "Attention is all you need," Advances in Neural Information Processing Systems, vol. 30, 2017.
[60] S. Gu, D. Chen, J. Bao, F. Wen, B. Zhang, D. Chen, L. Yuan, and B. Guo, "Vector quantized diffusion model for text-to-image synthesis," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10696–10706, 2022.
[61] J. Ho and T. Salimans, "Classifier-free diffusion guidance," arXiv preprint arXiv:2207.12598, 2022.
[62] Z. Tang, S. Gu, J. Bao, D. Chen, and F. Wen, "Improved vector quantized diffusion models," arXiv preprint arXiv:2205.16007, 2022.
[63] V. Panayotov, G. Chen, D. Povey, and S. Khudanpur, "LibriSpeech: An ASR corpus based on public domain audio books," in 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5206–5210, IEEE, 2015.
[64] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
[65] I. Loshchilov and F. Hutter, "Decoupled weight decay regularization," arXiv preprint arXiv:1711.05101, 2017.
[66] R. Kubichek, "Mel-cepstral distance measure for objective speech quality assessment," in Proceedings of IEEE Pacific Rim Conference on Communications, Computers and Signal Processing, vol. 1, pp. 125–128, IEEE, 1993.
[67] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.
[68] C. H. Taal, R. C. Hendriks, R. Heusdens, and J. Jensen, "A short-time objective intelligibility measure for time-frequency weighted noisy speech," in 2010 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 4214–4217, IEEE, 2010.
[69] T. Nakatani, S. Amano, T. Irino, K. Ishizuka, and T. Kondo, "A method for fundamental frequency estimation and voicing decision: Application to infant utterances recorded in real acoustical environments," Speech Communication, vol. 50, no. 3, pp. 203–214, 2008.
[70] W. Chu and A. Alwan, "Reducing F0 frame error of F0 tracking algorithms under noisy conditions with an unvoiced/voiced classification frontend," in 2009 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3969–3972, IEEE, 2009.
[71] D. Bitouk, R. Verma, and A. Nenkova, "Class-level spectral features for emotion recognition," Speech Communication, vol. 52, no. 7-8, pp. 613–625, 2010.
[72] S. Liu, D. Su, and D. Yu, "DiffGAN-TTS: High-fidelity and efficient text-to-speech with denoising diffusion GANs," arXiv preprint arXiv:2201.11972, 2022.
[73] W. F. Johnson, R. N. Emde, K. R. Scherer, and M. D. Klinnert, "Recognition of emotion from vocal cues," Archives of General Psychiatry, vol. 43, no. 3, pp. 280–283, 1986.
[74] M. J. Owren and J.-A. Bachorowski, "Measuring emotion-related vocal acoustics," Handbook of Emotion Elicitation and Assessment, pp. 239–266, 2007.
[75] A. Baevski, Y. Zhou, A. Mohamed, and M. Auli, "wav2vec 2.0: A framework for self-supervised learning of speech representations," Advances in Neural Information Processing Systems, vol. 33, pp. 12449–12460, 2020.