{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:12:33.373657Z" }, "title": "Multi-accent Speech Separation with One Shot Learning", "authors": [ { "first": "Kuan", "middle": [ "Po" ], "last": "Huang", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Yuan-Kuei", "middle": [], "last": "Wu", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Hung-Yi", "middle": [], "last": "Lee", "suffix": "", "affiliation": {}, "email": "hungyilee@ntu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Speech separation is a problem in the field of speech processing that has been studied in full swing recently. However, there has not been much work studying a multi-accent speech separation scenario. Unseen speakers with new accents and noise aroused the domain mismatch problem which cannot be easily solved by conventional joint training methods. Thus, we applied MAML and FOMAML to tackle this problem and obtained higher average Si-SNRi values than joint training on almost all the unseen accents. This proved that these two methods do have the ability to generate well-trained parameters for adapting to speech mixtures of new speakers and accents. Furthermore, we found out that FOMAML obtains similar performance compared to MAML while saving a lot of time.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Speech separation is a problem in the field of speech processing that has been studied in full swing recently. However, there has not been much work studying a multi-accent speech separation scenario. Unseen speakers with new accents and noise aroused the domain mismatch problem which cannot be easily solved by conventional joint training methods. Thus, we applied MAML and FOMAML to tackle this problem and obtained higher average Si-SNRi values than joint training on almost all the unseen accents. This proved that these two methods do have the ability to generate well-trained parameters for adapting to speech mixtures of new speakers and accents. Furthermore, we found out that FOMAML obtains similar performance compared to MAML while saving a lot of time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Speech separation has been a well-known task to solve in the speech processing field. Many model architectures mentioned in Section 2 have been proposed and achieved high performance. This suggests that deep learning based methods are suitable for the speech separation task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Despite having promising results, the generalizability of these models is still questionable. The performance of switching to different datasets or environments is not guaranteed. A straightforward solution is to exhaustively collect data under all kinds of environment settings and train a model with these data jointly. Although this may sound reasonable, it is difficult to always consider every situation during training. To make sure that models can be quickly adapted to mixtures spoken by new speakers with not many samples, metalearning comes to the rescue. Meta-learning has * The two first authors made equal contributions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "been widely applied on different speech tasks, especially on speech recognition mentioned in Section 2. 
Nonetheless, there is not much work that applied meta-learning to the speech separation task. In our previous work (Wu et al., 2020) , we first proposed to solve the speech separation problem with meta-learning. That work views utterance mixtures of two different speakers as a meta task, where both speakers have the same accent. However, we hope that a speech separation model can also adapt to mixtures with accents never seen before. Thus, besides the setting of two different speakers forming a meta task, we added a setting in which meta tasks whose speakers share the same accent form an accent task set. Sections 4 and 5.1 describe the dataset and task construction procedure in more detail.", "cite_spans": [ { "start": 220, "end": 237, "text": "(Wu et al., 2020)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 To the best of our knowledge, we are the first to conduct speech separation experiments on a multi-accent dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Our contributions are listed below:", "sec_num": null }, { "text": "\u2022 We applied meta-learning to help improve the multi-accent speech separation task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Our contributions are listed below:", "sec_num": null }, { "text": "The remaining sections of this paper are organized as follows. In Section 2, we give a brief overview of existing works related to speech separation and meta-learning. In Section 3, we elaborate on the problem formulation of speech separation in detail. In Section 4, we describe the two phases of MAML, including the meta training phase and the meta testing phase. Additionally, we show how FOMAML is modified from MAML. The experimental setup, dataset, and model we used are presented in Section 5. Finally, results and conclusions are given in Sections 6 and 7.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Our contributions are listed below:", "sec_num": null }, { "text": "Speech Separation End-to-end separation models have shown great success in separating speech mixtures of the WSJ0-2mix dataset designed by (Hershey et al., 2016) which is generated from the WSJ0 corpus (Paul and Baker, 1992) . (Luo and Mesgarani, 2018) came up with a time-domain audio separation network (TasNet) that takes waveforms as input to free the separation model from dealing with time-frequency representations. They further proposed convolutional TasNet (Luo and Mesgarani, 2019) which substitutes the LSTM layers in TasNet with convolutional layers. This overcame the problem of long temporal dependencies of LSTMs and reduced the model size. Before long, they came up with the Dual-path RNN model, which used intra- and inter-blocks to capture local and global information dependencies within the speech mixtures. 
(Nachmani et al., 2020) utilized the idea of Dual-path RNN and added a speaker identity loss to improve performance on separating mixtures with an unknown number of speakers. (Tzinis et al., 2020) proposed to use a separator constructed with U-ConvBlocks which can not only reduce the number of layers while still having high performance but also require less computational resources and time. This helped the model to more likely be used in real-time speech separation. (Zeghidour and Grangier, 2020) integrated speaker identity information into the separating process, and obtained state-of-the-art performance.", "cite_spans": [ { "start": 139, "end": 161, "text": "(Hershey et al., 2016)", "ref_id": "BIBREF5" }, { "start": 202, "end": 224, "text": "(Paul and Baker, 1992)", "ref_id": "BIBREF12" }, { "start": 227, "end": 252, "text": "(Luo and Mesgarani, 2018)", "ref_id": "BIBREF9" }, { "start": 471, "end": 496, "text": "(Luo and Mesgarani, 2019)", "ref_id": "BIBREF10" }, { "start": 831, "end": 854, "text": "(Nachmani et al., 2020)", "ref_id": "BIBREF11" }, { "start": 1006, "end": 1027, "text": "(Tzinis et al., 2020)", "ref_id": "BIBREF16" }, { "start": 1302, "end": 1332, "text": "(Zeghidour and Grangier, 2020)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Meta-learning Meta-learning has recently become a trend when it comes to solving multitask problems. This training method has been widely applied in the computer vision field, for instance, (Vinyals et al., 2016; Rusu et al., 2018; Sun et al., 2019) . Meta-learning is also used in the natural language processing field. (Gu et al., 2018) used MAML (Finn et al., 2017) for low-resource neural machine translation (NMT). Moreover, in the speech processing domain, some speech-related problems are solved with metalearning, too. (Winata et al., 2020) applied metatransfer learning on code-switched speech recognition. (Xiao et al., 2020; applied meta-learning to solve the multilingual lowresource speech recognition problem. (Winata et al., 2019) also used MAML to adapt models to unseen accents on speech recognition. (Indurthi et al., 2019) adopted meta-learning algorithms to perform speech translation on speech-transcript paired low-resource data. (Chen et al., 2021) came up with some improvements of meta-learning to help the speaker verification task.", "cite_spans": [ { "start": 190, "end": 212, "text": "(Vinyals et al., 2016;", "ref_id": "BIBREF17" }, { "start": 213, "end": 231, "text": "Rusu et al., 2018;", "ref_id": "BIBREF13" }, { "start": 232, "end": 249, "text": "Sun et al., 2019)", "ref_id": "BIBREF14" }, { "start": 321, "end": 338, "text": "(Gu et al., 2018)", "ref_id": "BIBREF4" }, { "start": 349, "end": 368, "text": "(Finn et al., 2017)", "ref_id": "BIBREF3" }, { "start": 616, "end": 635, "text": "(Xiao et al., 2020;", "ref_id": "BIBREF22" }, { "start": 818, "end": 841, "text": "(Indurthi et al., 2019)", "ref_id": "BIBREF7" }, { "start": 952, "end": 971, "text": "(Chen et al., 2021)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In this work, we perform single channel speech separation. 
Given a mixture", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Speech Separation", "sec_num": "3" }, { "text": "x = \u2211_{c=1}^{C} s c (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Speech Separation", "sec_num": "3" }, { "text": "where C is the number of speakers in mixture x \u2208 R T and s c \u2208 R T are the ground truth sources. For speech separation, the goal is to estimate C sources {\u015d 1 , \u2022 \u2022 \u2022 ,\u015d C } \u2208 R T such that the estimated sources are as similar as possible to the ground truth sources. The model we used in this work is Conv-TasNet (Luo and Mesgarani, 2019) . In their work, the similarity of the estimated sources and ground truth sources is measured by the scale-invariant signal-to-noise ratio (Si-SNR) shown in Eq. 4:", "cite_spans": [ { "start": 302, "end": 327, "text": "(Luo and Mesgarani, 2019)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Speech Separation", "sec_num": "3" }, { "text": "s proj = ((s \u2022 \u015d) / \u2016s\u2016\u00b2) s (2) error = \u015d \u2212 s proj (3) Si-SNR = 10 log10(\u2016s proj\u2016\u00b2 / \u2016error\u2016\u00b2) (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Speech Separation", "sec_num": "3" }, { "text": "The Conv-TasNet model is a mask-based model which consists of an encoder, a separator, and a decoder. The encoder encodes the mixture x to a latent space as shown in Eq. 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Speech Separation", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "x enc = enc(x)", "eq_num": "(5)" } ], "section": "Speech Separation", "sec_num": "3" }, { "text": "x enc \u2208 R H\u00d7T is the encoder output, where H is the dimension of the latent space and T is the length of x enc . The separator then calculates C masks", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Speech Separation", "sec_num": "3" }, { "text": "m i \u2208 R H\u00d7T , i \u2208 {1, \u2022 \u2022 \u2022 , C} based on", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Speech Separation", "sec_num": "3" }, { "text": "x enc as shown in Eq.(6).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Speech Separation", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "m i = sep(x enc )", "eq_num": "(6)" } ], "section": "Speech Separation", "sec_num": "3" }, { "text": "The masks are then multiplied with the encoder output, forming separated features d i shown in Eq. 7,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Speech Separation", "sec_num": "3" }, { "text": "d i = x enc \u2299 m i (7)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Speech Separation", "sec_num": "3" }, { "text": "where \u2299 denotes element-wise multiplication. The separated features d i can be viewed as source representations, and are further input to a decoder to estimate the separated sources shown in Eq. 
8.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Speech Separation", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "s i = dec(d i )", "eq_num": "(8)" } ], "section": "Speech Separation", "sec_num": "3" }, { "text": "At this point, before measuring the estimated sources with Si-SNR, there is a label permutation problem. An align between", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Speech Separation", "sec_num": "3" }, { "text": "{\u015d 1 , \u2022 \u2022 \u2022 ,\u015d C } and {s 1 , \u2022 \u2022", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Speech Separation", "sec_num": "3" }, { "text": "\u2022 , s C } needs to be decided. We used the utterance-level permutation invariant training(uPIT) method described in (Kolbaek et al., 2017) to solve this problem.", "cite_spans": [ { "start": 116, "end": 138, "text": "(Kolbaek et al., 2017)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Speech Separation", "sec_num": "3" }, { "text": "The procedure of MAML (Finn et al., 2017) is stated as follows. Given a set of multi-accent tasks", "cite_spans": [ { "start": 22, "end": 41, "text": "(Finn et al., 2017)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "MAML", "sec_num": "4" }, { "text": "T = {{T i 1 } tq 1 i=1 , \u2022 \u2022 \u2022 , {T i K } tq K i=1 },", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MAML", "sec_num": "4" }, { "text": "where K is the number of accents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MAML", "sec_num": "4" }, { "text": "T k = {T i k } tq k i=1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MAML", "sec_num": "4" }, { "text": "is the accent task set containing tasks only with the k th accent and tq k denotes the task quantity of the k th accent task set. The set of tasks T is split into the source task set T source and the target task set T target . The model denoted as f , will be trained on the source task set T source in the hope of having the ability to quickly adapt to the target task set T target .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MAML", "sec_num": "4" }, { "text": "During the meta training phase, the MAML algorithm aims to find initialized parameters \u03b8 that can further be quickly adapted to new tasks. Moreover, these initialized parameters should be sensitive to the difference between two different tasks, such that adaptation of the initialized parameters can significantly improve the performance on new tasks sampled from the source task set T source . This is achieved by the inner loop and outer loop optimization. A batch of tasks \u03c4 source = {\u03c4 1 , \u2022 \u2022 \u2022 , \u03c4 b } is sampled from T proportional to the task quantity of every accent task set, e.g., for an accent task set T k , the larger tq k is, the more likely a task is to be sampled from it. Each task in \u03c4 source is further split into a support set \u03c4 sup and a query set \u03c4 qry . 
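For illustration, the sampling and splitting step just described can be sketched as follows (a minimal Python sketch with our own illustrative names; drawing tasks uniformly from the pooled task list makes each accent task set sampled in proportion to its task quantity, and the constraint from Section 5.1 that query mixtures share no utterance segment with the support mixture is omitted here):

```python
import random

def sample_task_batch(accent_task_sets, batch_size):
    # Pool all meta tasks; drawing uniformly from the pool makes the chance of
    # hitting an accent task set T_k proportional to its task quantity tq_k.
    all_tasks = [task for task_set in accent_task_sets for task in task_set]
    return random.sample(all_tasks, batch_size)

def split_task(task, support_size=1):
    # Split one meta task (a list of speech mixtures) into a one-shot support
    # set and a query set containing the remaining mixtures.
    mixtures = list(task)
    random.shuffle(mixtures)
    return mixtures[:support_size], mixtures[support_size:]
```
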
The support set is used to adapt the model parameters by performing a one-step gradient decent, which is known as the inner loop shown in Eq.(9).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Meta Training Phase", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03b8 j \u2190 \u03b8 \u2212 \u03b1\u2207 \u03b8 L \u03c4 sup j (f \u03b8 )", "eq_num": "(9)" } ], "section": "Meta Training Phase", "sec_num": "4.1" }, { "text": "where \u03b1 is the learning rate. The goal of the inner loop is to minimize the loss of \u03c4 sup j with respect to f \u03b8 . More concisely,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Meta Training Phase", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03b8 j = arg min \u03b8 L \u03c4 sup j (f \u03b8 )", "eq_num": "(10)" } ], "section": "Meta Training Phase", "sec_num": "4.1" }, { "text": "At this point, the sum of the query loss of each query set in \u03c4 source is calculated by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Meta Training Phase", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L qry = b j=1 L \u03c4 qry j (f \u03b8 j )", "eq_num": "(11)" } ], "section": "Meta Training Phase", "sec_num": "4.1" }, { "text": "The goal of the meta training phase is to minimize the total loss of the query sets. This is also performed by a one-step gradient decent, known as the outer loop shown in Eq. 12.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Meta Training Phase", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03b8 \u2190 \u03b8 \u2212 \u03b2\u2207 \u03b8 L qry", "eq_num": "(12)" } ], "section": "Meta Training Phase", "sec_num": "4.1" }, { "text": "During the meta testing phase, we perform a procedure (see Eq.(13)) similar to the inner loop in the meta training phase. This procedure adapts the parameters \u03b8 obtained in the meta training phase to the target tasks", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Meta Testing Phase", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03c4 target = {\u03c4 1 , \u2022 \u2022 \u2022 , \u03c4 b }. \u03b8 j \u2190 \u03b8 \u2212 \u03b2\u2207 \u03b8 L \u03c4 sup j (f \u03b8 )", "eq_num": "(13)" } ], "section": "Meta Testing Phase", "sec_num": "4.2" }, { "text": "4.3 First-order MAML (FOMAML)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Meta Testing Phase", "sec_num": "4.2" }, { "text": "Eq. 
14 is the calculation of the gradient in the outer loop, where L \u03c4 qry j is denoted as L j for simplicity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Meta Testing Phase", "sec_num": "4.2" }, { "text": "\u2207 \u03b8 L qry = \u2207 \u03b8 \u2211_{j=1}^{b} L j (f \u03b8 j ) = \u2211_{j=1}^{b} \u2207 \u03b8 L j (f \u03b8 j ) (14)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Meta Testing Phase", "sec_num": "4.2" }, { "text": "When performing the outer loop during the meta training phase, a high computational cost is needed to calculate the second-order derivatives with backpropagation. Eq. 15 is the first-order approximation of the second-order derivative,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Meta Testing Phase", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u2202L j (f \u03b8 j ) / \u2202\u03b8 d = \u2211_{i=1}^{D} (\u2202L j (f \u03b8 j ) / \u2202\u03b8 i j ) (\u2202\u03b8 i j / \u2202\u03b8 d ) \u2248 \u2202L j (f \u03b8 j ) / \u2202\u03b8 d j", "eq_num": "(15)" } ], "section": "Meta Testing Phase", "sec_num": "4.2" }, { "text": "where \u03b8 is a D-dimensional parameter vector, \u03b8 d is the d-th dimension of \u03b8, and \u03b8 i j is the i-th dimension of \u03b8 j . The difference between FOMAML and MAML is that this approximation is used instead of the second-order derivatives. Thus, compared to MAML, FOMAML saves a lot of computational time, resulting in a faster gradient calculation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Meta Testing Phase", "sec_num": "4.2" }, { "text": "The multi-accent speech utterances are collected from the speech accent archive (Weinberger, 2014). This archive currently has more than 200 kinds of accents and 2939 samples. Each native or non-native speaker speaks the same English paragraph. We selected 123 accents that contain more than one speaker, since we need utterances of two different speakers to generate mixtures. We split these accents into three sets: 85 accents for generating the training tasks and 19 accents each for generating the development and testing tasks. The utterance of each speaker is split into segments with a duration of 4 seconds. For each accent, we construct meta tasks by following the task construction method described in (Wu et al., 2020) . Figure 2 : Illustration of a meta task. For two different speakers with the same accent, we sample 3 utterance segments from each speaker to form a meta task. Thus, there will be 9 mixtures. However, during training, we only sample one mixture to form the support set since our setting is one shot learning. The other 4 mixtures that do not contain the utterance segments in the support set are selected to form the query set. We select at most 12 speakers for each accent and generate speech mixtures for each pair of speakers with the same accent. Thus, there will be at most (12 choose 2) = 66 meta tasks and at least (2 choose 2) = 1 meta task for each accent. In each meta task, 3 utterance segments are selected from each speaker and mixed at an SNR level randomly selected between 0 and 5 dB, then resampled to an 8 kHz sample rate. This results in 3 \u00d7 3 = 9 speech mixtures in one meta task. Fig.(2) is an illustration describing the support set and query set of a meta task. 
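As a minimal sketch of this construction (the names and the mix_at_random_snr helper, which would mix two 8 kHz segments at an SNR drawn between 0 and 5 dB, are illustrative and not the actual implementation):

```python
import itertools
import random

def build_meta_task(segments_spk1, segments_spk2, mix_at_random_snr):
    # 3 segments per speaker -> 3 x 3 = 9 candidate mixtures.
    mixtures = {(i, j): mix_at_random_snr(segments_spk1[i], segments_spk2[j])
                for i, j in itertools.product(range(3), range(3))}
    # One-shot setting: a single mixture forms the support set.
    si, sj = random.choice(list(mixtures))
    support = [mixtures[(si, sj)]]
    # Query set: the 4 mixtures sharing no utterance segment with the support mixture.
    query = [m for (i, j), m in mixtures.items() if i != si and j != sj]
    return support, query
```
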
Finally, for the training, developing, and testing set, 22.4, 3.8, and 3.9 hours of speech mixtures are generated.", "cite_spans": [ { "start": 1118, "end": 1135, "text": "(Wu et al., 2020)", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 696, "end": 704, "text": "Figure 2", "ref_id": null }, { "start": 1592, "end": 1599, "text": "Fig.(2)", "ref_id": null } ], "eq_spans": [], "section": "Dataset", "sec_num": "5.1" }, { "text": "The model we used is Conv-TasNet (Luo and Mesgarani, 2019) . It consists of an encoder, separator, and a decoder. The encoder is a 1-dim convolution, which transforms the input mixture into a representation. The separator then calculates two masks based on the encoder output. More specifically, it consists of R stacks of temporal convolutional networks (TCN). Each TCN layer consists of M 1-dim exponentially increasing dilated convolutional blocks. These M blocks each have a residual connection and a skip connection. The residual connection is the input of the next block and the skip connection of all blocks are summed together, passing a parametric relu, linear projection, and a sigmoid function to produce two masks. The two masks are multiplied with the representation output from the encoder respectively and further input into the decoder to generate two separate waveforms of the two speakers. The decoder is also a 1-dim convolution. The configuration that we used is the one that obtained the best performance reported in (Luo and Mesgarani, 2019) . Figure 3 : For fine-tuning after joint training, we evaluated the performance by adjusting the learning rate \u03b2 in the range of 10 \u22125 to 10 \u22121 .", "cite_spans": [ { "start": 33, "end": 58, "text": "(Luo and Mesgarani, 2019)", "ref_id": "BIBREF10" }, { "start": 1038, "end": 1063, "text": "(Luo and Mesgarani, 2019)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 1066, "end": 1074, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Model", "sec_num": "5.2" }, { "text": "There are many other works such as Tong et al., 2017) , that try to solve the domain mismatch problem, where the source domain and target domain datasets do not have a similar distribution. Joint training refers to pretraining a model with different source domain data together. Transfer learning refers to adapting the pretrained model to some partial target domain data and testing the fine-tuned model on the target domain data. The most common adaptation method is fine-tuning. Moreover, the domain mismatch scenario has a low-resource problem if the target domain has only fewer data compared to the scale of the source domain data. There are also several works that tried to solve this problem, such as (Chen and Mak, 2015; Zoph et al., 2016; . Our jointly trained model is also based on this low-resource scenario.", "cite_spans": [ { "start": 35, "end": 53, "text": "Tong et al., 2017)", "ref_id": "BIBREF15" }, { "start": 709, "end": 729, "text": "(Chen and Mak, 2015;", "ref_id": "BIBREF0" }, { "start": 730, "end": 748, "text": "Zoph et al., 2016;", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Joint Training and Transfer Learning", "sec_num": "5.3" }, { "text": "To deal with the domain mismatch and lowresource problem, we applied MAML as our training method in the hope of performing better than joint training. 
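As a rough illustration of how the inner loop (Eq. 9) and the outer loop (Eq. 12) fit together under the first-order approximation, a FOMAML meta-update can be sketched as follows (a simplified PyTorch-style sketch with our own naming; loss_fn is assumed to compute the uPIT Si-SNR loss on a batch of mixtures, and the outer update is written as plain gradient descent although the experiments use Adam):

```python
import copy
import torch

def fomaml_meta_update(model, task_batch, loss_fn, alpha=0.01, beta=0.001):
    # task_batch: list of (support, query) pairs; alpha and beta follow Eqs. (9) and (12).
    meta_grads = [torch.zeros_like(p) for p in model.parameters()]
    for support, query in task_batch:
        learner = copy.deepcopy(model)  # theta -> theta_j
        # Inner loop (Eq. 9): one gradient-descent step on the support set.
        sup_loss = loss_fn(learner, support)
        grads = torch.autograd.grad(sup_loss, list(learner.parameters()))
        with torch.no_grad():
            for p, g in zip(learner.parameters(), grads):
                p -= alpha * g
        # Query loss (Eq. 11), differentiated with respect to the adapted
        # parameters only (the first-order approximation of Eq. 15).
        qry_loss = loss_fn(learner, query)
        grads = torch.autograd.grad(qry_loss, list(learner.parameters()))
        for acc, g in zip(meta_grads, grads):
            acc += g
    # Outer loop (Eq. 12): update the original parameters theta.
    with torch.no_grad():
        for p, g in zip(model.parameters(), meta_grads):
            p -= beta * g
```

Accumulating only these first-order gradients over the task batch before the single outer update is what makes FOMAML markedly cheaper than differentiating through the inner step, as discussed in Section 4.3.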
We set the size of the support set in each task to 1, meaning that the model needs to be able to adapt to a new task after seeing only one speech mixture of two new speakers with an accent never seen before. We also trained our model with FOMAML in order to know whether calculating gradients with the first-order approximation still obtains comparable performance to training with MAML.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MAML and FOMAML", "sec_num": "5.4" }, { "text": "For both the joint training and MAML methods, we trained the model from randomly initialized parameters for 100 epochs with the Adam optimizer, a learning rate of 0.001, and a weight decay of 0.00001. For the MAML methods, during the meta training phase, we set \u03b1 = 0.01. For joint training, we also fine-tuned the model parameters with the method in Eq.(13). We tested the fine-tuning learning rate \u03b2 on the testing set, reported it in Section 6, and used the learning rates that obtained the best performance for joint training as our baseline. However, for the models trained with MAML methods, the fine-tuning learning rate \u03b2 is fixed at 0.01 since other values lead to significant performance degradation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Settings", "sec_num": "5.5" }, { "text": "For joint training, we tested the fine-tuning learning rate \u03b2 on the testing set as shown in Fig.(3) , and found that \u03b2 = 5e\u22124 obtained the best performance on the clean testing set, while \u03b2 = 1e\u22123 obtained the best performance on the testing set with noise. We use these two experiment settings as our baseline.", "cite_spans": [], "ref_spans": [ { "start": 93, "end": 100, "text": "Fig.(3)", "ref_id": null } ], "eq_spans": [], "section": "Joint Training", "sec_num": "6.1" }, { "text": "Comparing models (d) and (f) with model (b), we can see that MAML and FOMAML perform better than the joint training baseline. This suggests that the initial model parameters obtained by MAML and FOMAML have better potential to be adapted to new unseen tasks. Besides, the standard deviations over the testing accent task sets of models (d) and (f) are both smaller than that of model (b). This implies that the performance of the models trained with MAML and FOMAML has smaller dispersion around the mean Si-SNRi value over all the accents compared to the jointly trained model. From Fig.(4) , we can see that model (d) performs better than model (b) on all accents when there is no noise involved and performs better on most of the accents when there is noise in the mixtures.", "cite_spans": [], "ref_spans": [ { "start": 577, "end": 584, "text": "Fig.(4)", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "MAML and FOMAML", "sec_num": "6.2" }, { "text": "By comparing models (d) and (f), we found that these two training methods have similar performance. Model (d) has slightly higher performance than model (f) when the mixtures in the testing tasks are clean, while model (d) has slightly lower performance than model (f) when there is noise in the testing tasks. 
However, MAML requires more than 10 times the training time compared to FOMAML, indicating that the first-order approximation takes advantage over calculating the second-order derivatives by saving a lot of time while still obtaining similar performance. Moreover, FOMAML without fine-tuning (model (c)) has similar performance compared to the baseline model, and yet somehow, initialized parameters obtained by MAML (model (e)) do not have the ability to perform speech separation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MAML and FOMAML", "sec_num": "6.2" }, { "text": "Our results show that MAML and FOMAML training methods are effective on multi-accent speech separation. More specifically, it is confirmed that these two methods are better than joint training when adapting to new speakers with new accents and even noisy environments. Besides, FOMAML is shown to be sufficient for dealing with the multiaccent speech separation task and can reduce a large amount of training time. Despite the fact that FOMAML outperforms joint training on the testing set, we can still see that the performance of each accent task set varies a lot from Fig.(4) . This is probably due to the task-difficulty imbalance issue described in (Xiao et al., 2020) , perhaps some speakers with special accents may be hard to separate. Thus, in the future, we will try to solve this problem with meta sampling methods mentioned in (Xiao et al., 2020) .", "cite_spans": [ { "start": 654, "end": 673, "text": "(Xiao et al., 2020)", "ref_id": "BIBREF22" }, { "start": 839, "end": 858, "text": "(Xiao et al., 2020)", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 571, "end": 578, "text": "Fig.(4)", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Conclusion", "sec_num": "7" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Multitask learning of deep neural networks for lowresource speech recognition", "authors": [ { "first": "Dongpeng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Brian Kan-Wing", "middle": [], "last": "Mak", "suffix": "" } ], "year": 2015, "venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing", "volume": "23", "issue": "7", "pages": "1172--1183", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dongpeng Chen and Brian Kan-Wing Mak. 2015. Mul- titask learning of deep neural networks for low- resource speech recognition. IEEE/ACM Transac- tions on Audio, Speech, and Language Processing, 23(7):1172-1183.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Improved meta-learning training for speaker verification", "authors": [ { "first": "Yafeng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Wu", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Bin", "middle": [], "last": "Gu", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2103.15421" ] }, "num": null, "urls": [], "raw_text": "Yafeng Chen, Wu Guo, and Bin Gu. 2021. Im- proved meta-learning training for speaker verifica- tion. 
arXiv preprint arXiv:2103.15421.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Darts-asr: Differentiable architecture search for multilingual speech recognition and adaptation", "authors": [ { "first": "Yi-Chen", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Jui-Yang", "middle": [], "last": "Hsu", "suffix": "" }, { "first": "Cheng-Kuang", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Hung-Yi", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2005.07029" ] }, "num": null, "urls": [], "raw_text": "Yi-Chen Chen, Jui-Yang Hsu, Cheng-Kuang Lee, and Hung-yi Lee. 2020. Darts-asr: Differentiable archi- tecture search for multilingual speech recognition and adaptation. arXiv preprint arXiv:2005.07029.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "authors": [ { "first": "Chelsea", "middle": [], "last": "Finn", "suffix": "" }, { "first": "Pieter", "middle": [], "last": "Abbeel", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Levine", "suffix": "" } ], "year": 2017, "venue": "International Conference on Machine Learning", "volume": "", "issue": "", "pages": "1126--1135", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Ma- chine Learning, pages 1126-1135. PMLR.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Meta-learning for lowresource neural machine translation", "authors": [ { "first": "Jiatao", "middle": [], "last": "Gu", "suffix": "" }, { "first": "Yong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yun", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "O", "middle": [ "K" ], "last": "Victor", "suffix": "" }, { "first": "", "middle": [], "last": "Li", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1808.08437" ] }, "num": null, "urls": [], "raw_text": "Jiatao Gu, Yong Wang, Yun Chen, Kyunghyun Cho, and Victor OK Li. 2018. Meta-learning for low- resource neural machine translation. arXiv preprint arXiv:1808.08437.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Deep clustering: Discriminative embeddings for segmentation and separation", "authors": [ { "first": "Zhuo", "middle": [], "last": "John R Hershey", "suffix": "" }, { "first": "Jonathan", "middle": [ "Le" ], "last": "Chen", "suffix": "" }, { "first": "Shinji", "middle": [], "last": "Roux", "suffix": "" }, { "first": "", "middle": [], "last": "Watanabe", "suffix": "" } ], "year": 2016, "venue": "2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", "volume": "", "issue": "", "pages": "31--35", "other_ids": {}, "num": null, "urls": [], "raw_text": "John R Hershey, Zhuo Chen, Jonathan Le Roux, and Shinji Watanabe. 2016. Deep clustering: Discrimi- native embeddings for segmentation and separation. In 2016 IEEE International Conference on Acous- tics, Speech and Signal Processing (ICASSP), pages 31-35. 
IEEE.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Meta learning for end-to-end low-resource speech recognition", "authors": [ { "first": "Jui-Yang", "middle": [], "last": "Hsu", "suffix": "" }, { "first": "Yuan-Jui", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Hung-Yi", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2020, "venue": "ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", "volume": "", "issue": "", "pages": "7844--7848", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jui-Yang Hsu, Yuan-Jui Chen, and Hung-yi Lee. 2020. Meta learning for end-to-end low-resource speech recognition. In ICASSP 2020-2020 IEEE Interna- tional Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7844-7848. IEEE.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Data efficient direct speech-to-text translation with modality agnostic meta-learning", "authors": [ { "first": "Sathish", "middle": [], "last": "Indurthi", "suffix": "" }, { "first": "Houjeung", "middle": [], "last": "Han", "suffix": "" }, { "first": "Nikhil", "middle": [], "last": "Kumar Lakumarapu", "suffix": "" }, { "first": "Beomseok", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Insoo", "middle": [], "last": "Chung", "suffix": "" }, { "first": "Sangha", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Chanwoo", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1911.04283" ] }, "num": null, "urls": [], "raw_text": "Sathish Indurthi, Houjeung Han, Nikhil Kumar Laku- marapu, Beomseok Lee, Insoo Chung, Sangha Kim, and Chanwoo Kim. 2019. Data efficient direct speech-to-text translation with modality agnostic meta-learning. arXiv preprint arXiv:1911.04283.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Multitalker speech separation with utterance-level permutation invariant training of deep recurrent neural networks", "authors": [ { "first": "Morten", "middle": [], "last": "Kolbaek", "suffix": "" }, { "first": "Dong", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Zheng-Hua", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Jesper", "middle": [], "last": "Jensen", "suffix": "" } ], "year": 2017, "venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing", "volume": "25", "issue": "10", "pages": "1901--1913", "other_ids": {}, "num": null, "urls": [], "raw_text": "Morten Kolbaek, Dong Yu, Zheng-Hua Tan, and Jes- per Jensen. 2017. Multitalker speech separation with utterance-level permutation invariant training of deep recurrent neural networks. IEEE/ACM Transactions on Audio, Speech, and Language Pro- cessing, 25(10):1901-1913.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Tasnet: timedomain audio separation network for real-time, single-channel speech separation", "authors": [ { "first": "Yi", "middle": [], "last": "Luo", "suffix": "" }, { "first": "Nima", "middle": [], "last": "Mesgarani", "suffix": "" } ], "year": 2018, "venue": "2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", "volume": "", "issue": "", "pages": "696--700", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yi Luo and Nima Mesgarani. 2018. Tasnet: time- domain audio separation network for real-time, single-channel speech separation. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 696-700. 
IEEE.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Conv-tasnet: Surpassing ideal time-frequency magnitude masking for speech separation", "authors": [ { "first": "Yi", "middle": [], "last": "Luo", "suffix": "" }, { "first": "Nima", "middle": [], "last": "Mesgarani", "suffix": "" } ], "year": 2019, "venue": "IEEE/ACM transactions on audio, speech, and language processing", "volume": "27", "issue": "", "pages": "1256--1266", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yi Luo and Nima Mesgarani. 2019. Conv-tasnet: Surpassing ideal time-frequency magnitude mask- ing for speech separation. IEEE/ACM transac- tions on audio, speech, and language processing, 27(8):1256-1266.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Voice separation with an unknown number of multiple speakers", "authors": [ { "first": "Eliya", "middle": [], "last": "Nachmani", "suffix": "" }, { "first": "Yossi", "middle": [], "last": "Adi", "suffix": "" }, { "first": "Lior", "middle": [], "last": "Wolf", "suffix": "" } ], "year": 2020, "venue": "International Conference on Machine Learning", "volume": "", "issue": "", "pages": "7164--7175", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eliya Nachmani, Yossi Adi, and Lior Wolf. 2020. Voice separation with an unknown number of mul- tiple speakers. In International Conference on Ma- chine Learning, pages 7164-7175. PMLR.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "The design for the wall street journal-based csr corpus", "authors": [ { "first": "B", "middle": [], "last": "Douglas", "suffix": "" }, { "first": "Janet", "middle": [], "last": "Paul", "suffix": "" }, { "first": "", "middle": [], "last": "Baker", "suffix": "" } ], "year": 1992, "venue": "Speech and Natural Language: Proceedings of a Workshop Held at", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Douglas B Paul and Janet Baker. 1992. The design for the wall street journal-based csr corpus. In Speech and Natural Language: Proceedings of a Workshop Held at Harriman, New York, February 23-26, 1992.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Meta-learning with latent embedding optimization", "authors": [ { "first": "Dushyant", "middle": [], "last": "Andrei A Rusu", "suffix": "" }, { "first": "Jakub", "middle": [], "last": "Rao", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Sygnowski", "suffix": "" }, { "first": "Razvan", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Simon", "middle": [], "last": "Pascanu", "suffix": "" }, { "first": "Raia", "middle": [], "last": "Osindero", "suffix": "" }, { "first": "", "middle": [], "last": "Hadsell", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1807.05960" ] }, "num": null, "urls": [], "raw_text": "Andrei A Rusu, Dushyant Rao, Jakub Sygnowski, Oriol Vinyals, Razvan Pascanu, Simon Osindero, and Raia Hadsell. 2018. Meta-learning with latent embedding optimization. 
arXiv preprint arXiv:1807.05960.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Meta-transfer learning for few-shot learning", "authors": [ { "first": "Qianru", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Yaoyao", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "403--412", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qianru Sun, Yaoyao Liu, Tat-Seng Chua, and Bernt Schiele. 2019. Meta-transfer learning for few-shot learning. In Proceedings of the IEEE/CVF Confer- ence on Computer Vision and Pattern Recognition, pages 403-412.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "An investigation of deep neural networks for multilingual speech recognition training and adaptation", "authors": [ { "first": "Sibo", "middle": [], "last": "Tong", "suffix": "" }, { "first": "N", "middle": [], "last": "Philip", "suffix": "" }, { "first": "Herv\u00e9", "middle": [], "last": "Garner", "suffix": "" }, { "first": "", "middle": [], "last": "Bourlard", "suffix": "" } ], "year": 2017, "venue": "Proc. of INTERSPEECH, CONF", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sibo Tong, Philip N Garner, and Herv\u00e9 Bourlard. 2017. An investigation of deep neural networks for mul- tilingual speech recognition training and adaptation. In Proc. of INTERSPEECH, CONF.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Sudo rm-rf: Efficient networks for universal audio source separation", "authors": [ { "first": "Efthymios", "middle": [], "last": "Tzinis", "suffix": "" }, { "first": "Zhepei", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Paris", "middle": [], "last": "Smaragdis", "suffix": "" } ], "year": 2020, "venue": "2020 IEEE 30th International Workshop on Machine Learning for Signal Processing", "volume": "", "issue": "", "pages": "1--6", "other_ids": {}, "num": null, "urls": [], "raw_text": "Efthymios Tzinis, Zhepei Wang, and Paris Smaragdis. 2020. Sudo rm-rf: Efficient networks for universal audio source separation. In 2020 IEEE 30th Inter- national Workshop on Machine Learning for Signal Processing (MLSP), pages 1-6. IEEE.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Matching networks for one shot learning", "authors": [ { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Charles", "middle": [], "last": "Blundell", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Lillicrap", "suffix": "" }, { "first": "Koray", "middle": [], "last": "Kavukcuoglu", "suffix": "" }, { "first": "Daan", "middle": [], "last": "Wierstra", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1606.04080" ] }, "num": null, "urls": [], "raw_text": "Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Ko- ray Kavukcuoglu, and Daan Wierstra. 2016. Match- ing networks for one shot learning. 
arXiv preprint arXiv:1606.04080.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Meta-transfer learning for code-switched speech recognition", "authors": [ { "first": "Samuel", "middle": [], "last": "Genta Indra Winata", "suffix": "" }, { "first": "Zhaojiang", "middle": [], "last": "Cahyawijaya", "suffix": "" }, { "first": "Zihan", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Pascale", "middle": [], "last": "Xu", "suffix": "" }, { "first": "", "middle": [], "last": "Fung", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.14228" ] }, "num": null, "urls": [], "raw_text": "Genta Indra Winata, Samuel Cahyawijaya, Zhaojiang Lin, Zihan Liu, Peng Xu, and Pascale Fung. 2020. Meta-transfer learning for code-switched speech recognition. arXiv preprint arXiv:2004.14228.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Learning multilingual meta-embeddings for code-switching named entity recognition", "authors": [ { "first": "Zhaojiang", "middle": [], "last": "Genta Indra Winata", "suffix": "" }, { "first": "Pascale", "middle": [], "last": "Lin", "suffix": "" }, { "first": "", "middle": [], "last": "Fung", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 4th Workshop on Representation Learning for NLP", "volume": "", "issue": "", "pages": "181--186", "other_ids": { "DOI": [ "10.18653/v1/W19-4320" ] }, "num": null, "urls": [], "raw_text": "Genta Indra Winata, Zhaojiang Lin, and Pascale Fung. 2019. Learning multilingual meta-embeddings for code-switching named entity recognition. In Pro- ceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019), pages 181- 186, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "One shot learning for speech separation", "authors": [ { "first": "Yuan-Kuei", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Kuan-Po", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Tsao", "suffix": "" }, { "first": "Hung-Yi", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2011.10233" ] }, "num": null, "urls": [], "raw_text": "Yuan-Kuei Wu, Kuan-Po Huang, Yu Tsao, and Hung-yi Lee. 2020. One shot learning for speech separation. arXiv preprint arXiv:2011.10233.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Adversarial meta sampling for multilingual low-resource speech recognition", "authors": [ { "first": "Yubei", "middle": [], "last": "Xiao", "suffix": "" }, { "first": "Ke", "middle": [], "last": "Gong", "suffix": "" }, { "first": "Pan", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Guolin", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Liang", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2012.11896" ] }, "num": null, "urls": [], "raw_text": "Yubei Xiao, Ke Gong, Pan Zhou, Guolin Zheng, Xi- aodan Liang, and Liang Lin. 2020. Adversarial meta sampling for multilingual low-resource speech recognition. 
arXiv preprint arXiv:2012.11896.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Wavesplit: End-to-end speech separation by speaker clustering", "authors": [ { "first": "Neil", "middle": [], "last": "Zeghidour", "suffix": "" }, { "first": "David", "middle": [], "last": "Grangier", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2002.08933" ] }, "num": null, "urls": [], "raw_text": "Neil Zeghidour and David Grangier. 2020. Wavesplit: End-to-end speech separation by speaker clustering. arXiv preprint arXiv:2002.08933.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Transfer learning for lowresource neural machine translation", "authors": [ { "first": "Barret", "middle": [], "last": "Zoph", "suffix": "" }, { "first": "Deniz", "middle": [], "last": "Yuret", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "May", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1604.02201" ] }, "num": null, "urls": [], "raw_text": "Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for low- resource neural machine translation. arXiv preprint arXiv:1604.02201.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "uris": null, "text": "Figure 1: Illustration of joint training and meta-learning for multi-accent speech separation. The oval area is the accent task sets. Each accent task set contains multiple meta tasks. The solid lines are the pretraining process, joint training on the left, and meta-learning on the right. The dashed lines represent the adaptation paths from parameters \u03b8 to the unseen accents of unseen speakers. This figure is modified from Gu et al. (2018) and our previous work Wu et al. (2020)." }, "FIGREF3": { "type_str": "figure", "num": null, "uris": null, "text": "Evaluation results of each testing accent task set for model (b) and (d) in table 1." }, "TABREF0": { "content": "
Table 1
method | fine-tune | test w/o noise | test w/ noise
(a) Joint Training | before | 8.40 \u00b1 2.25 | 6.67 \u00b1 2.10
(b) Joint Training | after | 8.52 \u00b1 2.20 | 6.89 \u00b1 1.84
(c) FOMAML | before | 8.45 \u00b1 3.19 | 6.66 \u00b1 2.59
(d) FOMAML | after | 10.13 \u00b1 2.12 | 8.19 \u00b1 1.62
(e) MAML | before | -6.19 \u00b1 1.38 | -6.85 \u00b1 1.31
(f) MAML | after | 10.11 \u00b1 1.86 | 8.26 \u00b1 1.52
", "num": null, "text": "Evaluation results of joint training and MAML methods on the testing accent task sets with and without noise. The two numbers in a cell denote the average Si-SNRi of all the testing tasks and the standard deviation of all the testing accent task sets.", "html": null, "type_str": "table" } } } }