{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:34:40.683571Z" }, "title": "On the Discrepancy between Density Estimation and Sequence Generation", "authors": [ { "first": "Jason", "middle": [], "last": "Lee", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Dustin", "middle": [], "last": "Tran", "suffix": "", "affiliation": {}, "email": "trandustin@google.com" }, { "first": "Orhan", "middle": [], "last": "Firat", "suffix": "", "affiliation": {}, "email": "orhanf@google.com" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "", "affiliation": {}, "email": "kyunghyun.cho@nyu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Many sequence-to-sequence generation tasks, including machine translation and text-tospeech, can be posed as estimating the density of the output y given the input x: p(y|x). Given this interpretation, it is natural to evaluate sequence-to-sequence models using conditional log-likelihood on a test set. However, the goal of sequence-to-sequence generation (or structured prediction) is to find the best output\u0177 given an input x, and each task has its own downstream metric R that scores a model output by comparing against a set of references y * : R(\u0177, y * |x). While we hope that a model that excels in density estimation also performs well on the downstream metric, the exact correlation has not been studied for sequence generation tasks. In this paper, by comparing several density estimators on five machine translation tasks, we find that the correlation between rankings of models based on log-likelihood and BLEU varies significantly depending on the range of the model families being compared. First, log-likelihood is highly correlated with BLEU when we consider models within the same family (e.g. autoregressive models, or latent variable models with the same parameterization of the prior). However, we observe no correlation between rankings of models across different families: (1) among non-autoregressive latent variable models, a flexible prior distribution is better at density estimation but gives worse generation quality than a simple prior, and (2) autoregressive models offer the best translation performance overall, while latent variable models with a normalizing flow prior give the highest held-out log-likelihood across all datasets.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Many sequence-to-sequence generation tasks, including machine translation and text-tospeech, can be posed as estimating the density of the output y given the input x: p(y|x). Given this interpretation, it is natural to evaluate sequence-to-sequence models using conditional log-likelihood on a test set. However, the goal of sequence-to-sequence generation (or structured prediction) is to find the best output\u0177 given an input x, and each task has its own downstream metric R that scores a model output by comparing against a set of references y * : R(\u0177, y * |x). While we hope that a model that excels in density estimation also performs well on the downstream metric, the exact correlation has not been studied for sequence generation tasks. In this paper, by comparing several density estimators on five machine translation tasks, we find that the correlation between rankings of models based on log-likelihood and BLEU varies significantly depending on the range of the model families being compared. 
First, log-likelihood is highly correlated with BLEU when we consider models within the same family (e.g. autoregressive models, or latent variable models with the same parameterization of the prior). However, we observe no correlation between rankings of models across different families: (1) among non-autoregressive latent variable models, a flexible prior distribution is better at density estimation but gives worse generation quality than a simple prior, and (2) autoregressive models offer the best translation performance overall, while latent variable models with a normalizing flow prior give the highest held-out log-likelihood across all datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Sequence-to-sequence generation tasks can be cast as conditional density estimation p(y|x) where x and y are input and output sequences. In this framework, density estimators are trained to maximize the conditional log-likelihood, and also evaluated using log-likelihood on a test set. However, many sequence generation tasks require finding the best output\u0177 given an input x at test time, and the output is evaluated against a set of references y * on a task-specific metric: R(\u0177, y * |x). For example, machine translation systems are evaluated using BLEU scores (Papineni et al., 2002) , image captioning systems use METEOR (Banerjee and Lavie, 2005 ) and text-to-speech systems use MOS (mean opinion scores). As density estimators are optimized on log-likelihood, we want models with higher held-out log-likelihoods to give better generation quality, but the correlation has not been well studied for sequence generation tasks. In this work, we investigate the correlation between rankings of density estimators based on (1) test log-likelihood and (2) the downstream metric for machine translation.", "cite_spans": [ { "start": 564, "end": 587, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF34" }, { "start": 626, "end": 651, "text": "(Banerjee and Lavie, 2005", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "On five language pairs from three machine translation datasets (WMT'14 En\u2194De, WMT'16 En\u2194Ro, IWSLT'16 De\u2192En), we compare the held-out log-likelihood and BLEU scores of several density estimators: (1) autoregressive models (Vaswani et al., 2017) , (2) latent variable models with a non-autoregressive decoder and a simple (diagonal Gaussian) prior (Shu et al., 2019) , and (3) latent variable models with a non-autoregressive decoder and a flexible (normalizing flow) prior (Ma et al., 2019) .", "cite_spans": [ { "start": 221, "end": 243, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF50" }, { "start": 346, "end": 364, "text": "(Shu et al., 2019)", "ref_id": "BIBREF44" }, { "start": 472, "end": 489, "text": "(Ma et al., 2019)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We present two key observations. First, among models within the same family, we find that loglikelihood is strongly correlated with BLEU. The correlation is almost perfect for autoregressive models and high for latent variable models with the same prior. Between models of different families, however, log-likelihood and BLEU are not correlated. 
Latent variable models with a flow prior are in fact the best density estimators (even better than autoregressive models), but they give the worst generation quality. Gaussian prior models offer comparable or better BLEU scores, while autoregressive models give the best BLEU scores overall. From these findings, we conclude that the correlation between log-likelihood and BLEU scores varies significantly depending on the range of model families considered.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Second, we find that knowledge distillation drastically hurts density estimation performance across different models and datasets, but consistently improves translation quality of non-autoregressive models. For autoregressive models, distillation slightly hurts translation quality. Among latentvariable models, iterative inference with a delta posterior (Shu et al., 2019 ) significantly improves the translation quality of latent variable models with a Gaussian prior, whereas the improvement is relatively small for the flow prior. Overall, for fast generation, we recommend a latent variable nonautoregressive model using a simple prior (rather than a flexible one), knowledge distillation, and iterative inference. This is 5-7x faster than the autoregressive model at the expense of 2 BLEU scores on average, and it improves upon latent variable models with a flexible prior across generation speed, BLEU, and parameter count.", "cite_spans": [ { "start": 355, "end": 372, "text": "(Shu et al., 2019", "ref_id": "BIBREF44" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Sequence-to-sequence generation is a supervised learning problem of generating an output sequence given an input sequence. For many such tasks, conditional density estimators have been very successful (Sutskever et al., 2014; Bahdanau et al., 2015; Vinyals and Le, 2015) .", "cite_spans": [ { "start": 201, "end": 225, "text": "(Sutskever et al., 2014;", "ref_id": "BIBREF46" }, { "start": 226, "end": 248, "text": "Bahdanau et al., 2015;", "ref_id": "BIBREF3" }, { "start": 249, "end": 270, "text": "Vinyals and Le, 2015)", "ref_id": "BIBREF51" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "To learn the distribution of an output sequence, it is crucial to give enough capacity to the model to be able to capture the dependencies among the output variables. We explore two ways to achieve this: (1) directly modeling the dependencies with an autoregressive factorization of the variables, and (2) letting latent variables capture the dependencies, so the distribution of the output sequence can be factorized given the latent variables and therefore more quickly be generated. We discuss both classes of density estimators in depth below. We denote the training set as a set of tuples {(x n , y n )} N n=1 and each input and output example as sequences of random variables x = {x 1 , . . . , x T } and y = {y 1 , . . . , y T } (where we drop the subscript n for notational simplicity). We use \u03b8 to denote the model parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "Learning Autoregressive models factorize the joint distribution of the sequence of output variables y = {y 1 , . . . 
, y T } as a product of conditional distributions: log p_AR(y|x) = \u2211_{t=1}^{T} log p_\u03b8(y_t | y_{<t}, x).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Autoregressive Models", "sec_num": "2.1" }, { "text": "tokens are appended to the target sentence until its length is a multiple of 4.", "cite_spans": [ { "start": 96, "end": 97, "text": "4", "ref_id": null }, { "start": 136, "end": 165, "text": "(Schuster and Nakajima, 2012)", "ref_id": "BIBREF41" }, { "start": 231, "end": 253, "text": "Sennrich et al. (2016)", "ref_id": "BIBREF42" }, { "start": 478, "end": 495, "text": "(Ma et al., 2019)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets and Preprocessing", "sec_num": "4.1" }, { "text": "We use three Transformer (Vaswani et al., 2017) ", "cite_spans": [ { "start": 25, "end": 47, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF50" } ], "ref_spans": [], "eq_spans": [], "section": "Autoregressive Models", "sec_num": "4.2" }, { "text": "The latent variable models in our experiments are composed of the source sentence encoder, length predictor, prior, decoder and posterior. The source sentence encoder is implemented with a standard Transformer encoder. Given the hidden states of the source sentence, the length predictor (a 2-layer MLP) predicts the length difference between the source and target sentences as a categorical distribution in [\u221230, 30] . We implement the decoder p \u03b8 (y|z, x) with a standard Transformer decoder that outputs the logits of all target tokens in parallel. The approximate posterior q \u03c6 (z|y, x) is implemented as a Transformer decoder with a final Linear layer with weight normalization (Salimans and Kingma, 2016) to output the mean and standard deviation (having dimensionality d latent ). Both the decoder and the approximate posterior attend to the source hidden states.", "cite_spans": [ { "start": 408, "end": 417, "text": "[\u221230, 30]", "ref_id": null }, { "start": 683, "end": 710, "text": "(Salimans and Kingma, 2016)", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Latent Variable Models", "sec_num": "4.3" }, { "text": "Diagonal Gaussian Prior The diagonal Gaussian prior is implemented with a Transformer decoder which receives a sequence of positional encodings of length T as input, and outputs the mean and standard deviation of each target token (of dimensionality d latent ). We train two models of different sizes: Gauss-base (Ga-B) and Gauss-large (Ga-L). Gauss-base has 4 attention heads, 3 posterior layers, 3 decoder layers and 6 encoder layers, whereas Gauss-large has 8 attention heads, 4 posterior layers, 6 decoder layers, 6 encoder layers. (d model , d latent , d filter ) is (512, 512, 2048) for WMT experiments and (256, 256, 1024) for IWSLT experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Variable Models", "sec_num": "4.3" }, { "text": "Normalizing Flow Prior The flow prior is implemented with Glow (Kingma and Dhariwal, 2018). We use a single Transformer decoder layer with a final Linear layer with weight normalization to parameterize g param in Eq. 
4. This produces the shift and scale parameters for the affine transformation. Our flow prior has the multi-scale architecture with three levels (Dinh et al., 2017) : at the end of each level, half of the latent variables are modeled with a standard Gaussian distribution. We use three split patterns and multi-headed 1x1 convolution from Ma et al. (2019) . We experiment with the following hyperparameter settings: Flow-small (Fl-S) with 12/12/8 flow layers in each level and Flow-base (Fl-B) with 12/24/16 flow layers in each level. The first level corresponds to the latent distribution and the last level corresponds to the base distribution. (d model , d latent , d filter ) is (320, 320, 640) for all experiments. For the Transformer decoder in g param , we use 4 attention heads for Flow-small and 8 attention heads for Flow-base.", "cite_spans": [ { "start": 362, "end": 381, "text": "(Dinh et al., 2017)", "ref_id": "BIBREF9" }, { "start": 556, "end": 572, "text": "Ma et al. (2019)", "ref_id": "BIBREF27" } ], "ref_spans": [ { "start": 863, "end": 895, "text": "(d model , d latent , d filter )", "ref_id": null } ], "eq_spans": [], "section": "Latent Variable Models", "sec_num": "4.3" }, { "text": "We use the Adam optimizer (Kingma and Ba, 2015) with the learning rate schedule used by Vaswani et al. (2017) . The norm of the gradients is clipped at 1.0. We perform early stopping and choose the learning rate warmup steps and dropout rate based on the BLEU score on the development set. To train non-autoregressive models, the loss from the length predictor is minimized jointly with the negative ELBO.", "cite_spans": [ { "start": 88, "end": 109, "text": "Vaswani et al. (2017)", "ref_id": "BIBREF50" } ], "ref_spans": [], "eq_spans": [], "section": "Training and Optimization", "sec_num": "4.4" }, { "text": "Knowledge Distillation Following previous work (Kim and Rush, 2016; Gu et al., 2018; Lee et al., 2018) , we construct a distilled dataset by decoding the training set using Transformer-base with beam width 4. For IWSLT'16 De\u2192En, we use Transformer-small.", "cite_spans": [ { "start": 47, "end": 67, "text": "(Kim and Rush, 2016;", "ref_id": "BIBREF20" }, { "start": 68, "end": 84, "text": "Gu et al., 2018;", "ref_id": "BIBREF13" }, { "start": 85, "end": 102, "text": "Lee et al., 2018)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Training and Optimization", "sec_num": "4.4" }, { "text": "To ease optimization of latent variable models (Bowman et al., 2016; Higgins et al., 2017) , we set the weight of the KL term to 0 for the first 5,000 SGD steps and linearly increase it to 1 over the next 20,000 steps. Similarly to Mansimov et al. (2019), we find it helpful to add a small regularization term to the training objective that matches the approximate posterior with a standard Gaussian distribution: \u03b1 \u2022 KL(q \u03c6 (z|y, x) || N (0, I)), as the original KL term KL(q \u03c6 (z|y, x) || p \u03b8 (z|x)) does not have a local point minimum but a valley of minima. 
We find \u03b1 = 10^\u22124 to work best.", "cite_spans": [ { "start": 47, "end": 68, "text": "(Bowman et al., 2016;", "ref_id": "BIBREF6" }, { "start": 69, "end": 90, "text": "Higgins et al., 2017)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Latent Variable Models", "sec_num": null }, { "text": "We perform data-dependent initialization of actnorm parameters for the flow prior (Kingma and Dhariwal, 2018) at the 5,000-th step, which is at the beginning of KL scheduling.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Flow Prior Models", "sec_num": null }, { "text": "Log-likelihood is the main metric for measuring density estimation (data modeling) performance. We compute exact log-likelihood for autoregressive models. For latent variable models, we estimate the marginal log-likelihood by importance sampling with 1K samples from the approximate posterior and using the ground truth target length.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "4.5" }, { "text": "BLEU measures the similarity (in terms of n-gram overlap) between a generated output and a set of references, regardless of the model. It is a standard metric for generation quality of machine translation systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "4.5" }, { "text": "Generation Speed In addition to the quality-driven metrics, we measure the generation speed of each model in the number of sentences generated per second on a single V100 GPU. Table 1 caption: Ga-B: Gauss-base. Ga-L: Gauss-large. Fl-S: Flow-small. Fl-B: Flow-base. Fl-L: Flow-large. We use beam search with width 4 for inference with autoregressive models, and one step of iterative inference (Shu et al., 2019) for latent variable models. On most datasets, our Flow-base model gives comparable results to those from Ma et al. (2019) , which are denoted with ( * ). We boldface the best log-likelihood overall and the best BLEU score among the latent variable models. We underscore the best BLEU score among the autoregressive models. Table 1 reports two sets of results: one from models trained on the original data and another from models trained on distilled data (Dist.) (which we mostly discuss in \u00a75.2). We use the original test set in computing the log-likelihood and BLEU scores of the distilled models, so the results are comparable with the undistilled models. We make two main observations: 1. Log-likelihood is highly correlated with BLEU when considering models within the same family. (a) Among autoregressive models (Tr-S, Tr-B", "cite_spans": [ { "start": 369, "end": 387, "text": "(Shu et al., 2019)", "ref_id": "BIBREF44" }, { "start": 492, "end": 508, "text": "Ma et al. (2019)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "4.5" }, { "text": "and Tr-L), there is a perfect correlation between log-likelihood and BLEU. On all five language pairs (undistilled), the rankings of autoregressive models based on log-likelihood and BLEU are identical.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Correlation between rankings of models", "sec_num": "5.1" }, { "text": "(b) Among non-autoregressive latent variable models with the same prior distribution, there is a strong but not perfect correlation. Between Gauss-large and Gauss-base, the model with higher held-out log-likelihood also gives higher BLEU on four out of five datasets. 
Similarly, Flow-base gives higher log-likelihood and BLEU score than Flow-small on all datasets except WMT'14 De\u2192En.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Correlation between rankings of models", "sec_num": "5.1" }, { "text": "2. Log-likelihood is not correlated with BLEU when comparing models from different families. (a) Between latent variable models with different prior distributions, we observe no correlation between log-likelihood and BLEU. On four out of five language pairs (undistilled), Flow-base gives much higher log-likelihood but similar or worse BLEU score than Gauss-base. With distillation, Gauss-large considerably outperforms Flow-base in BLEU on all datasets, while Flow-base gives better log-likelihood.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Correlation between rankings of models", "sec_num": "5.1" }, { "text": "(b) Overall, autoregressive models offer the best translation quality but not the best modeling performance. In fact, the Flow-base model with a non-autoregressive decoder gives the highest held-out log-likelihood on all datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Correlation between rankings of models", "sec_num": "5.1" }, { "text": "Correlation between log-likelihood and BLEU across checkpoints Table 2 presents the correlation between log-likelihood and BLEU across the training checkpoints of several models. The findings are similar to Table 1 : for Transformer-base, there is almost perfect correlation (0.926) across the checkpoints. For Gauss-base and Flow-base, we observe strong but not perfect correlation (0.831 and 0.678). Overall, these findings suggest that there is a high correlation between log-likelihood and BLEU when comparing models within the same family. We discuss the correlation for models trained with distillation below in \u00a75.2.", "cite_spans": [], "ref_spans": [ { "start": 63, "end": 70, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 207, "end": 214, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Correlation between rankings of models", "sec_num": "5.1" }, { "text": "In Table 2 , we observe a strong negative correlation between log-likelihood and BLEU across the training checkpoints of several density estimators trained with distillation. Indeed, distillation severely hurts density estimation performance on all datasets (see Table 1 ). In terms of generation quality, it consistently improves non-autoregressive models, yet the amount of improvement varies across models and datasets. On WMT'14 En\u2192De and WMT'14 De\u2192En, distillation gives a significant 7-9 BLEU increase for diagonal Gaussian prior models, but the improvement is relatively smaller on other datasets. Flow prior models benefit less from distillation, gaining only 3-4 BLEU points on WMT'14 En\u2194De and less on other datasets. For autoregressive models, distillation results in a slight decrease in generation performance.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 263, "end": 270, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Knowledge Distillation", "sec_num": "5.2" }, { "text": "We analyze the effect of iterative inference on the Gaussian and the flow prior models. 
Table 3 shows that iterative refinement improves BLEU and ELBO for both Gaussian prior and flow prior models, but the gain is relatively smaller for the flow prior model.", "cite_spans": [], "ref_spans": [ { "start": 88, "end": 95, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Iterative inference on Gaussian vs. flow prior", "sec_num": "5.3" }, { "text": "In Figure 1 , we visualize the latent space of the approximate prior, the prior and the delta posterior of the latent variable models using t-SNE (van der Maaten, 2014). It is clear from the figures that the delta posterior of Gauss-base has high overlap with the approximate posterior, while the overlap is relatively low for Flow-small. We conjecture that while the loss surface of ELBO contains many local optima that we can reach via iterative refinement, not all of them share the support of the approximate posterior density (hence correspond to data). This is particularly pronounced for the flow prior model.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 11, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Visualization of latent space", "sec_num": null }, { "text": "We compare performance, generation speed and size of various models in Table 4 . While autoregressive models offer the best translation quality, inference is inherently sequential and slow. Decoding from non-autoregressive latent variable models is much more efficient, and requires constant time with respect to sequence length given parallel computation. Compared to Transformer-base, Gauss-large with 1 step of iterative inference improves generation speed by 6x, at the cost of 2.6 BLEU. On WMT'14 De\u2192En, the performance degradation is 1.9 BLEU. Flow prior models perform much worse than the Gaussian prior models despite having more parameters and slower generation speed. Table 4 : BLEU score, generation speed and size of various models on WMT'14 En\u2192De test set. We measure generation speed in sentence/s on a single V100 GPU with batch size 1. We perform inference of autoregressive models using beam search with width 4. For latent variable models, we perform k steps of iterative inference (Shu et al., 2019 ) (where k \u2208 {0, 1, 2, 4, 8}) and report results from models trained with distillation. ( * ) results are from Ma et al. (2019) .", "cite_spans": [ { "start": 1005, "end": 1022, "text": "(Shu et al., 2019", "ref_id": "BIBREF44" }, { "start": 1134, "end": 1150, "text": "Ma et al. (2019)", "ref_id": "BIBREF27" } ], "ref_spans": [ { "start": 71, "end": 78, "text": "Table 4", "ref_id": null }, { "start": 677, "end": 684, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Generation speed and model size", "sec_num": "5.4" }, { "text": "For sequence generation, the gap between log-likelihood and the downstream metric has long been recognized. To address this discrepancy between density estimation and approximate inference (generation), there have largely been two lines of prior work: (1) structured perceptron training for conditional random fields (Lafferty et al., 2001; Collins, 2002; Liang et al., 2006) and (2) empirical risk minimization with approximate inference (Valtchev et al., 1997; Povey and Woodland, 2002; Och, 2003; Qiang Fu and Biing-Hwang Juang, 2007; Stoyanov et al., 2011; Hopkins and May, 2011; Shen et al., 2016) . 
More recent work proposed to train neural sequence models directly on task-specific losses using reinforcement learning (Ranzato et al., 2016; Bahdanau et al., 2017; Jaques et al., 2017) or adversarial training (Goyal et al., 2016) . Despite such a plethora of work in bridging the gap between log-likelihood and the downstream task, the exact correlation between the two has not been established well. Our work investigates the correlation for neural sequence models (autoregressive models and latent variable models) in machine translation. Among autoregressive models for open-domain dialogue, a concurrent work (Adiwardana et al., 2020) found a strong correlation between perplexity and a human evaluation metric that awards sensibleness and specificity. This work confirms a part of our finding that log-likelihood is highly correlated with the downstream metric when we consider models within the same family.", "cite_spans": [ { "start": 311, "end": 334, "text": "(Lafferty et al., 2001;", "ref_id": "BIBREF24" }, { "start": 335, "end": 349, "text": "Collins, 2002;", "ref_id": "BIBREF8" }, { "start": 350, "end": 369, "text": "Liang et al., 2006)", "ref_id": "BIBREF26" }, { "start": 433, "end": 456, "text": "(Valtchev et al., 1997;", "ref_id": "BIBREF49" }, { "start": 457, "end": 482, "text": "Povey and Woodland, 2002;", "ref_id": "BIBREF35" }, { "start": 483, "end": 493, "text": "Och, 2003;", "ref_id": "BIBREF30" }, { "start": 494, "end": 531, "text": "Qiang Fu and Biing-Hwang Juang, 2007;", "ref_id": "BIBREF36" }, { "start": 532, "end": 554, "text": "Stoyanov et al., 2011;", "ref_id": "BIBREF45" }, { "start": 555, "end": 577, "text": "Hopkins and May, 2011;", "ref_id": "BIBREF17" }, { "start": 578, "end": 596, "text": "Shen et al., 2016)", "ref_id": "BIBREF43" }, { "start": 719, "end": 741, "text": "(Ranzato et al., 2016;", "ref_id": "BIBREF37" }, { "start": 742, "end": 764, "text": "Bahdanau et al., 2017;", "ref_id": "BIBREF18" }, { "start": 765, "end": 785, "text": "Jaques et al., 2017)", "ref_id": "BIBREF18" }, { "start": 810, "end": 830, "text": "(Goyal et al., 2016)", "ref_id": "BIBREF12" }, { "start": 1214, "end": 1239, "text": "(Adiwardana et al., 2020)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Our work is inspired by recent work on latent variable models for non-autoregressive neural machine translation (Gu et al., 2018; Lee et al., 2018; Kaiser et al., 2018) . Specifically, we compare continuous latent variable models with a diagonal Gaussian prior (Shu et al., 2019) and a normalizing flow prior (Ma et al., 2019) . We find that while having an expressive prior is beneficial for density estimation, a simple prior delivers better generation quality while being smaller and faster.", "cite_spans": [ { "start": 112, "end": 129, "text": "(Gu et al., 2018;", "ref_id": "BIBREF13" }, { "start": 130, "end": 147, "text": "Lee et al., 2018;", "ref_id": "BIBREF25" }, { "start": 148, "end": 168, "text": "Kaiser et al., 2018)", "ref_id": "BIBREF19" }, { "start": 261, "end": 279, "text": "(Shu et al., 2019)", "ref_id": "BIBREF44" }, { "start": 309, "end": 326, "text": "(Ma et al., 2019)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "In this work, we investigate the correlation between log-likelihood and the downstream evaluation metric for machine translation. 
We train several autoregressive models and latent variable models on five language pairs from three machine translation datasets (WMT'14 En\u2194De, WMT'16 En\u2194Ro and IWSLT'16 De\u2192En), and find that the correlation between log-likelihood and BLEU changes drastically depending on the range of model families being compared: Among the models within the same family, log-likelihood is highly correlated with BLEU. Between models of different families, however, we observe no correlation: the flow prior model gives higher held-out log-likelihood but similar or worse BLEU score than the Gaussian prior model. Furthermore, autoregressive models give the highest BLEU scores overall but the latent variable model with a flow prior gives the highest test log-likelihoods on all datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "In the future, we will investigate the factors behind this discrepancy. One possibility is the inherent difficulty of inference for latent variable models, which might be resolved by designing better inference algorithms. We will also explore if the discrepancy is mainly caused by the difference in the decoding distribution (autoregressive vs. factorized) or the training objective (maximum likelihood vs. ELBO).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "https://wit3.fbk.eu/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "www.statmt.org/wmt16/translation-task. html 3 www.statmt.org/wmt14/translation-task. html 4 https://github.com/tensorflow/ tensor2tensor/blob/master/tensor2tensor/ bin/t2t-datagen", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank our colleagues at the Google Translate and Brain teams, particularly Durk Kingma, Yu Zhang, Yuan Cao and Julia Kreutzer for their feedback on the draft. JL thanks Chunting Zhou, Manoj Kumar and William Chan for helpful discussions.KC is supported by Samsung Advanced Institute of Technology (Next Generation Deep Learning: from pattern recognition to AI), Samsung Research (Improving Deep Learning using Latent Structure) and NSF Award 1922658 NRT-HDR: FUTURE Foundations, Translation, and Responsibility for Data Science. 
KC thanks CIFAR, eBay, Naver and NVIDIA for their support.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Towards a human-like opendomain chatbot", "authors": [ { "first": "Daniel", "middle": [], "last": "Adiwardana", "suffix": "" }, { "first": "Minh-Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "David", "middle": [ "R" ], "last": "So", "suffix": "" }, { "first": "Jamie", "middle": [], "last": "Hall", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Fiedel", "suffix": "" }, { "first": "Romal", "middle": [], "last": "Thoppilan", "suffix": "" }, { "first": "Zi", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Apoorv", "middle": [], "last": "Kulshreshtha", "suffix": "" }, { "first": "Gaurav", "middle": [], "last": "Nemade", "suffix": "" }, { "first": "Yifeng", "middle": [], "last": "Lu", "suffix": "" }, { "first": "V", "middle": [], "last": "Quoc", "suffix": "" }, { "first": "", "middle": [], "last": "Le", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Adiwardana, Minh-Thang Luong, David R. So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, and Quoc V. Le. 2020. Towards a human-like open- domain chatbot. arXiv preprint arxiv:2001.09977.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "An actor-critic algorithm for sequence prediction", "authors": [ { "first": "Yoshua", "middle": [], "last": "Courville", "suffix": "" }, { "first": "", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2017, "venue": "5th International Conference on Learning Representations, ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Courville, and Yoshua Bengio. 2017. An actor-critic algorithm for sequence prediction. In 5th Inter- national Conference on Learning Representations, ICLR.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2015, "venue": "3rd International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Meteor: An automatic metric for mt evaluation with improved correlation with human judgments", "authors": [ { "first": "Satanjeev", "middle": [], "last": "Banerjee", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Lavie", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Satanjeev Banerjee and Alon Lavie. 2005. 
Meteor: An automatic metric for mt evaluation with improved correlation with human judgments.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Resampled priors for variational autoencoders", "authors": [ { "first": "Matthias", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "Andriy", "middle": [], "last": "Mnih", "suffix": "" } ], "year": 2019, "venue": "The 22nd International Conference on Artificial Intelligence and Statistics", "volume": "", "issue": "", "pages": "66--75", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthias Bauer and Andriy Mnih. 2019. Resampled priors for variational autoencoders. In The 22nd In- ternational Conference on Artificial Intelligence and Statistics, AISTATS, pages 66-75.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Generating sentences from a continuous space", "authors": [ { "first": "R", "middle": [], "last": "Samuel", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Bowman", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vilnis", "suffix": "" }, { "first": "Andrew", "middle": [ "M" ], "last": "Vinyals", "suffix": "" }, { "first": "Rafal", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Samy", "middle": [], "last": "J\u00f3zefowicz", "suffix": "" }, { "first": "", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, CoNLL", "volume": "", "issue": "", "pages": "10--21", "other_ids": {}, "num": null, "urls": [], "raw_text": "Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, An- drew M. Dai, Rafal J\u00f3zefowicz, and Samy Ben- gio. 2016. Generating sentences from a continuous space. In Proceedings of the 20th SIGNLL Confer- ence on Computational Natural Language Learning, CoNLL, pages 10-21.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Empirical evaluation of gated recurrent neural networks on sequence modeling", "authors": [ { "first": "Junyoung", "middle": [], "last": "Chung", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Aglar G\u00fcl\u00e7ehre", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Cho", "suffix": "" }, { "first": "", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Junyoung Chung, \u00c7 aglar G\u00fcl\u00e7ehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence model- ing. arXiv preprint arxiv:1412.3555.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Collins. 2002. Discriminative training meth- ods for hidden Markov models: Theory and ex- periments with perceptron algorithms. In Proceed- ings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP 2002), pages 1-8. 
Association for Computational Linguis- tics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Density estimation using real NVP", "authors": [ { "first": "Laurent", "middle": [], "last": "Dinh", "suffix": "" }, { "first": "Jascha", "middle": [], "last": "Sohl-Dickstein", "suffix": "" }, { "first": "Samy", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2017, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laurent Dinh, Jascha Sohl-Dickstein, and Samy Ben- gio. 2017. Density estimation using real NVP. In International Conference on Learning Representa- tions.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Finding structure in time", "authors": [ { "first": "Jeffrey", "middle": [ "L" ], "last": "Elman", "suffix": "" } ], "year": 1990, "venue": "Cognitive Science", "volume": "14", "issue": "2", "pages": "179--211", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey L. Elman. 1990. Finding structure in time. Cog- nitive Science, 14(2):179-211.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Convolutional sequence to sequence learning", "authors": [ { "first": "Jonas", "middle": [], "last": "Gehring", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" }, { "first": "David", "middle": [], "last": "Grangier", "suffix": "" }, { "first": "Denis", "middle": [], "last": "Yarats", "suffix": "" }, { "first": "Yann", "middle": [ "N" ], "last": "Dauphin", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 34th International Conference on Machine Learning, ICML", "volume": "", "issue": "", "pages": "1243--1252", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonas Gehring, Michael Auli, David Grangier, De- nis Yarats, and Yann N. Dauphin. 2017. Convolu- tional sequence to sequence learning. In Proceed- ings of the 34th International Conference on Ma- chine Learning, ICML, pages 1243-1252.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Professor forcing: A new algorithm for training recurrent networks", "authors": [ { "first": "Anirudh", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Lamb", "suffix": "" }, { "first": "Ying", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Saizheng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Aaron", "middle": [ "C" ], "last": "Courville", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2016, "venue": "Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems", "volume": "", "issue": "", "pages": "4601--4609", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anirudh Goyal, Alex Lamb, Ying Zhang, Saizheng Zhang, Aaron C. Courville, and Yoshua Bengio. 2016. Professor forcing: A new algorithm for train- ing recurrent networks. 
In Advances in Neural Infor- mation Processing Systems 29: Annual Conference on Neural Information Processing Systems, pages 4601-4609.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Nonautoregressive neural machine translation", "authors": [ { "first": "Jiatao", "middle": [], "last": "Gu", "suffix": "" }, { "first": "James", "middle": [], "last": "Bradbury", "suffix": "" }, { "first": "Caiming", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "O", "middle": [ "K" ], "last": "Victor", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Li", "suffix": "" }, { "first": "", "middle": [], "last": "Socher", "suffix": "" } ], "year": 2018, "venue": "6th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiatao Gu, James Bradbury, Caiming Xiong, Vic- tor O. K. Li, and Richard Socher. 2018. Non- autoregressive neural machine translation. In 6th International Conference on Learning Representa- tions, ICLR.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "beta-vae: Learning basic visual concepts with a constrained variational framework", "authors": [ { "first": "Irina", "middle": [], "last": "Higgins", "suffix": "" }, { "first": "Lo\u00efc", "middle": [], "last": "Matthey", "suffix": "" }, { "first": "Arka", "middle": [], "last": "Pal", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Burgess", "suffix": "" }, { "first": "Xavier", "middle": [], "last": "Glorot", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Botvinick", "suffix": "" }, { "first": "Shakir", "middle": [], "last": "Mohamed", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Lerchner", "suffix": "" } ], "year": 2017, "venue": "5th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Irina Higgins, Lo\u00efc Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. 2017. beta-vae: Learning basic visual concepts with a constrained variational framework. In 5th International Confer- ence on Learning Representations, ICLR.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural Computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Elbo surgery: yet another way to carve up the variational evidence lower bound", "authors": [ { "first": "D", "middle": [], "last": "Matthew", "suffix": "" }, { "first": "Matthew J Johnson", "middle": [], "last": "Hoffman", "suffix": "" } ], "year": 2016, "venue": "Advances in Approximate Bayesian Inference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew D Hoffman and Matthew J Johnson. 2016. Elbo surgery: yet another way to carve up the varia- tional evidence lower bound. 
Workshop in Advances in Approximate Bayesian Inference, Neurips.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Tuning as ranking", "authors": [ { "first": "Mark", "middle": [], "last": "Hopkins", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1352--1362", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Hopkins and Jonathan May. 2011. Tuning as ranking. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1352-1362. Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Sequence tutor: Conservative fine-tuning of sequence generation models with kl-control", "authors": [ { "first": "Natasha", "middle": [], "last": "Jaques", "suffix": "" }, { "first": "Shixiang", "middle": [], "last": "Gu", "suffix": "" }, { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Jos\u00e9", "middle": [], "last": "Miguel Hern\u00e1ndez-Lobato", "suffix": "" }, { "first": "Richard", "middle": [ "E" ], "last": "Turner", "suffix": "" }, { "first": "Douglas", "middle": [], "last": "Eck", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 34th International Conference on Machine Learning, ICML", "volume": "", "issue": "", "pages": "1645--1654", "other_ids": {}, "num": null, "urls": [], "raw_text": "Natasha Jaques, Shixiang Gu, Dzmitry Bahdanau, Jos\u00e9 Miguel Hern\u00e1ndez-Lobato, Richard E. Turner, and Douglas Eck. 2017. Sequence tutor: Conser- vative fine-tuning of sequence generation models with kl-control. In Proceedings of the 34th Inter- national Conference on Machine Learning, ICML, pages 1645-1654.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Fast decoding in sequence models using discrete latent variables", "authors": [ { "first": "Lukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Samy", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Aurko", "middle": [], "last": "Roy", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 35th International Conference on Machine Learning, ICML", "volume": "", "issue": "", "pages": "2395--2404", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lukasz Kaiser, Samy Bengio, Aurko Roy, Ashish Vaswani, Niki Parmar, Jakob Uszkoreit, and Noam Shazeer. 2018. Fast decoding in sequence models using discrete latent variables. In Proceedings of the 35th International Conference on Machine Learning, ICML, pages 2395-2404.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Sequencelevel knowledge distillation", "authors": [ { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Rush", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1317--1327", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoon Kim and Alexander M. Rush. 2016. Sequence- level knowledge distillation. 
In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP, pages 1317-1327.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2015, "venue": "3rd International Conference on Learning Representations, ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd Inter- national Conference on Learning Representations, ICLR.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Glow: Generative flow with invertible 1x1 convolutions", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Prafulla", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Dhariwal", "suffix": "" } ], "year": 2018, "venue": "Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems", "volume": "", "issue": "", "pages": "10236--10245", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. Kingma and Prafulla Dhariwal. 2018. Glow: Generative flow with invertible 1x1 convolu- tions. In Advances in Neural Information Process- ing Systems 31: Annual Conference on Neural Infor- mation Processing Systems, pages 10236-10245.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Autoencoding variational bayes", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Max", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Welling", "suffix": "" } ], "year": 2014, "venue": "2nd International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. Kingma and Max Welling. 2014. Auto- encoding variational bayes. In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Con- ference Track Proceedings.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", "authors": [ { "first": "John", "middle": [ "D" ], "last": "Lafferty", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "Fernando", "middle": [ "C N" ], "last": "Pereira", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the Eighteenth International Conference on Machine Learning", "volume": "", "issue": "", "pages": "282--289", "other_ids": {}, "num": null, "urls": [], "raw_text": "John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling se- quence data. 
In Proceedings of the Eighteenth Inter- national Conference on Machine Learning (ICML 2001), pages 282-289.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Deterministic non-autoregressive neural sequence modeling by iterative refinement", "authors": [ { "first": "Jason", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Elman", "middle": [], "last": "Mansimov", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1173--1182", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason Lee, Elman Mansimov, and Kyunghyun Cho. 2018. Deterministic non-autoregressive neural se- quence modeling by iterative refinement. In Pro- ceedings of the 2018 Conference on Empirical Meth- ods in Natural Language Processing, pages 1173- 1182.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "An end-to-end discriminative approach to machine translation", "authors": [ { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Alexandre", "middle": [], "last": "Bouchard-C\u00f4t\u00e9", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Taskar", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "761--768", "other_ids": {}, "num": null, "urls": [], "raw_text": "Percy Liang, Alexandre Bouchard-C\u00f4t\u00e9, Dan Klein, and Ben Taskar. 2006. An end-to-end discriminative approach to machine translation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Associa- tion for Computational Linguistics, pages 761-768.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Flowseq: Nonautoregressive conditional sequence generation with generative flow", "authors": [ { "first": "Xuezhe", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Chunting", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Xian", "middle": [], "last": "Li", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Eduard", "middle": [ "H" ], "last": "Hovy", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xuezhe Ma, Chunting Zhou, Xian Li, Graham Neu- big, and Eduard H. Hovy. 2019. Flowseq: Non- autoregressive conditional sequence generation with generative flow. arXiv preprint arxiv:1909.02480.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Accelerating t-sne using tree-based algorithms", "authors": [ { "first": "Laurens", "middle": [], "last": "Van Der Maaten", "suffix": "" } ], "year": 2014, "venue": "J. Mach. Learn. Res", "volume": "15", "issue": "1", "pages": "3221--3245", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laurens van der Maaten. 2014. Accelerating t-sne us- ing tree-based algorithms. J. Mach. Learn. 
Res., 15(1):3221-3245.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Molecular geometry prediction using a deep generative graph neural network", "authors": [ { "first": "Elman", "middle": [], "last": "Mansimov", "suffix": "" }, { "first": "Omar", "middle": [], "last": "Mahmood", "suffix": "" }, { "first": "Seokho", "middle": [], "last": "Kang", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elman Mansimov, Omar Mahmood, Seokho Kang, and Kyunghyun Cho. 2019. Molecular geometry predic- tion using a deep generative graph neural network. arXiv preprint arxiv:1904.00314.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Minimum error rate training in statistical machine translation", "authors": [ { "first": "Franz Josef", "middle": [], "last": "Och", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "160--167", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting of the Association for Com- putational Linguistics, pages 160-167.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Wavenet: A generative model for raw audio", "authors": [ { "first": "A\u00e4ron", "middle": [], "last": "Van Den Oord", "suffix": "" }, { "first": "Sander", "middle": [], "last": "Dieleman", "suffix": "" }, { "first": "Heiga", "middle": [], "last": "Zen", "suffix": "" }, { "first": "Karen", "middle": [], "last": "Simonyan", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Graves", "suffix": "" }, { "first": "Nal", "middle": [], "last": "Kalchbrenner", "suffix": "" }, { "first": "Andrew", "middle": [ "W" ], "last": "Senior", "suffix": "" }, { "first": "Koray", "middle": [], "last": "Kavukcuoglu", "suffix": "" } ], "year": 2016, "venue": "The 9th ISCA Speech Synthesis Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A\u00e4ron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew W. Senior, and Koray Kavukcuoglu. 2016. Wavenet: A generative model for raw audio. 
In The 9th ISCA Speech Synthesis Workshop, page 125.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Parallel wavenet: Fast high-fidelity speech synthesis", "authors": [ { "first": "A\u00e4ron", "middle": [], "last": "Van Den Oord", "suffix": "" }, { "first": "Yazhe", "middle": [], "last": "Li", "suffix": "" }, { "first": "Igor", "middle": [], "last": "Babuschkin", "suffix": "" }, { "first": "Karen", "middle": [], "last": "Simonyan", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Koray", "middle": [], "last": "Kavukcuoglu", "suffix": "" }, { "first": "George", "middle": [], "last": "Van Den Driessche", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Lockhart", "suffix": "" }, { "first": "Luis", "middle": [ "C" ], "last": "Cobo", "suffix": "" }, { "first": "Florian", "middle": [], "last": "Stimberg", "suffix": "" }, { "first": "Norman", "middle": [], "last": "Casagrande", "suffix": "" }, { "first": "Dominik", "middle": [], "last": "Grewe", "suffix": "" }, { "first": "Seb", "middle": [], "last": "Noury", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 35th International Conference on Machine Learning, ICML", "volume": "", "issue": "", "pages": "3915--3923", "other_ids": {}, "num": null, "urls": [], "raw_text": "A\u00e4ron van den Oord, Yazhe Li, Igor Babuschkin, Karen Simonyan, Oriol Vinyals, Koray Kavukcuoglu, George van den Driessche, Edward Lock- hart, Luis C. Cobo, Florian Stimberg, Norman Casagrande, Dominik Grewe, Seb Noury, Sander Dieleman, Erich Elsen, Nal Kalchbrenner, Heiga Zen, Alex Graves, Helen King, Tom Walters, Dan Belov, and Demis Hassabis. 2018. Parallel wavenet: Fast high-fidelity speech synthesis. In Proceedings of the 35th International Conference on Machine Learning, ICML, pages 3915-3923.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Shakir Mohamed, and Balaji Lakshminarayanan. 2019. Normalizing flows for probabilistic modeling and inference", "authors": [ { "first": "George", "middle": [], "last": "Papamakarios", "suffix": "" }, { "first": "Eric", "middle": [ "T" ], "last": "Nalisnick", "suffix": "" }, { "first": "Danilo", "middle": [ "Jimenez" ], "last": "Rezende", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "George Papamakarios, Eric T. Nalisnick, Danilo Jimenez Rezende, Shakir Mohamed, and Balaji Lakshminarayanan. 2019. Normalizing flows for probabilistic modeling and inference. arXiv preprint arxiv:1912.02762.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. 
In Proceedings of the 40th Annual Meeting of the Association for Compu- tational Linguistics, pages 311-318.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Minimum phone error and i-smoothing for improved discriminative training", "authors": [ { "first": "D", "middle": [], "last": "Povey", "suffix": "" }, { "first": "P", "middle": [ "C" ], "last": "Woodland", "suffix": "" } ], "year": 2002, "venue": "2002 IEEE International Conference on Acoustics, Speech, and Signal Processing", "volume": "1", "issue": "", "pages": "105--108", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Povey and P. C. Woodland. 2002. Minimum phone error and i-smoothing for improved discriminative training. In 2002 IEEE International Conference on Acoustics, Speech, and Signal Processing, volume 1, pages I-105-I-108.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Automatic speech recognition based on weighted minimum classification error (w-mce) training method", "authors": [ { "first": "Qiang", "middle": [], "last": "Fu", "suffix": "" }, { "first": "Biing-Hwang", "middle": [], "last": "Juang", "suffix": "" } ], "year": 2007, "venue": "IEEE Workshop on Automatic Speech Recognition Understanding (ASRU)", "volume": "", "issue": "", "pages": "278--283", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qiang Fu and Biing-Hwang Juang. 2007. Automatic speech recognition based on weighted minimum classification error (w-mce) training method. In 2007 IEEE Workshop on Automatic Speech Recog- nition Understanding (ASRU), pages 278-283.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Sequence level training with recurrent neural networks", "authors": [ { "first": "Aurelio", "middle": [], "last": "Marc", "suffix": "" }, { "first": "Sumit", "middle": [], "last": "Ranzato", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Chopra", "suffix": "" }, { "first": "Wojciech", "middle": [], "last": "Auli", "suffix": "" }, { "first": "", "middle": [], "last": "Zaremba", "suffix": "" } ], "year": 2016, "venue": "4th International Conference on Learning Representations, ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level train- ing with recurrent neural networks. In 4th Inter- national Conference on Learning Representations, ICLR.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Variational inference with normalizing flows", "authors": [ { "first": "Danilo", "middle": [], "last": "Jimenez Rezende", "suffix": "" }, { "first": "Shakir", "middle": [], "last": "Mohamed", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 32nd International Conference on Machine Learning", "volume": "", "issue": "", "pages": "1530--1538", "other_ids": {}, "num": null, "urls": [], "raw_text": "Danilo Jimenez Rezende and Shakir Mohamed. 2015. Variational inference with normalizing flows. 
In Proceedings of the 32nd International Conference on Machine Learning, pages 1530-1538.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Distribution matching in variational inference", "authors": [ { "first": "Mihaela", "middle": [], "last": "Rosca", "suffix": "" }, { "first": "Balaji", "middle": [], "last": "Lakshminarayanan", "suffix": "" }, { "first": "Shakir", "middle": [], "last": "Mohamed", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mihaela Rosca, Balaji Lakshminarayanan, and Shakir Mohamed. 2018. Distribution matching in varia- tional inference. arXiv preprint arxiv:1802.06847.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Weight normalization: A simple reparameterization to accelerate training of deep neural networks", "authors": [ { "first": "Tim", "middle": [], "last": "Salimans", "suffix": "" }, { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "", "middle": [], "last": "Kingma", "suffix": "" } ], "year": 2016, "venue": "Advances in Neural Information Processing Systems", "volume": "29", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tim Salimans and Diederik P. Kingma. 2016. Weight normalization: A simple reparameterization to accel- erate training of deep neural networks. In Advances in Neural Information Processing Systems 29, page 901.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Japanese and korean voice search", "authors": [ { "first": "Mike", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "Kaisuke", "middle": [], "last": "Nakajima", "suffix": "" } ], "year": 2012, "venue": "2012 IEEE International Conference on Acoustics, Speech and Signal Processing", "volume": "", "issue": "", "pages": "5149--5152", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mike Schuster and Kaisuke Nakajima. 2012. Japanese and korean voice search. In 2012 IEEE Interna- tional Conference on Acoustics, Speech and Signal Processing, ICASSP, pages 5149-5152.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Edinburgh neural machine translation systems for WMT 16", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the First Conference on Machine Translation, WMT", "volume": "", "issue": "", "pages": "371--376", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Edinburgh neural machine translation sys- tems for WMT 16. 
In Proceedings of the First Con- ference on Machine Translation, WMT, pages 371- 376.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Minimum risk training for neural machine translation", "authors": [ { "first": "Shiqi", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Yong", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Zhongjun", "middle": [], "last": "He", "suffix": "" }, { "first": "Wei", "middle": [], "last": "He", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1683--1692", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Minimum risk training for neural machine translation. In Pro- ceedings of the 54th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 1683-1692.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Latent-variable nonautoregressive neural machine translation with deterministic inference using a delta posterior", "authors": [ { "first": "Raphael", "middle": [], "last": "Shu", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Hideki", "middle": [], "last": "Nakayama", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Raphael Shu, Jason Lee, Hideki Nakayama, and Kyunghyun Cho. 2019. Latent-variable non- autoregressive neural machine translation with de- terministic inference using a delta posterior. arXiv preprint arxiv:1908.07181.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Empirical risk minimization of graphical model parameters given approximate inference, decoding, and model structure", "authors": [ { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Ropson", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics", "volume": "", "issue": "", "pages": "725--733", "other_ids": {}, "num": null, "urls": [], "raw_text": "Veselin Stoyanov, Alexander Ropson, and Jason Eis- ner. 2011. Empirical risk minimization of graphical model parameters given approximate inference, de- coding, and model structure. 
In Proceedings of the Fourteenth International Conference on Artificial In- telligence and Statistics, AISTATS, pages 725-733.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Sequence to sequence learning with neural networks", "authors": [ { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "V", "middle": [], "last": "Quoc", "suffix": "" }, { "first": "", "middle": [], "last": "Le", "suffix": "" } ], "year": 2014, "venue": "Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems", "volume": "", "issue": "", "pages": "3104--3112", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Sys- tems 27: Annual Conference on Neural Information Processing Systems, pages 3104-3112.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "A family of nonparametric density estimation algorithms", "authors": [ { "first": "E", "middle": [ "G" ], "last": "Tabak", "suffix": "" }, { "first": "Cristina", "middle": [ "V" ], "last": "Turner", "suffix": "" } ], "year": 2013, "venue": "Communications on Pure and Applied Mathematics", "volume": "66", "issue": "2", "pages": "145--164", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. G. Tabak and Cristina V. Turner. 2013. A fam- ily of nonparametric density estimation algorithms. Communications on Pure and Applied Mathematics, 66(2):145-164.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "VAE with a vampprior", "authors": [ { "first": "M", "middle": [], "last": "Jakub", "suffix": "" }, { "first": "Max", "middle": [], "last": "Tomczak", "suffix": "" }, { "first": "", "middle": [], "last": "Welling", "suffix": "" } ], "year": 2018, "venue": "International Conference on Artificial Intelligence and Statistics", "volume": "", "issue": "", "pages": "1214--1223", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jakub M. Tomczak and Max Welling. 2018. VAE with a vampprior. In International Conference on Ar- tificial Intelligence and Statistics, AISTATS, pages 1214-1223.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Mmie training of large vocabulary recognition systems", "authors": [ { "first": "V", "middle": [], "last": "Valtchev", "suffix": "" }, { "first": "J", "middle": [ "J" ], "last": "Odell", "suffix": "" }, { "first": "P", "middle": [ "C" ], "last": "Woodland", "suffix": "" }, { "first": "S", "middle": [ "J" ], "last": "Young", "suffix": "" } ], "year": 1997, "venue": "Speech Commun", "volume": "22", "issue": "4", "pages": "303--314", "other_ids": {}, "num": null, "urls": [], "raw_text": "V. Valtchev, J. J. Odell, P. C. Woodland, and S. J. Young. 1997. Mmie training of large vocabulary recognition systems. 
Speech Commun., 22(4):303-314.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "Lukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems 30: Annual Conference on Neural Information Processing Systems, pages 5998-6008.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "A neural conversational model", "authors": [ { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "V", "middle": [], "last": "Quoc", "suffix": "" }, { "first": "", "middle": [], "last": "Le", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oriol Vinyals and Quoc V. Le. 2015. A neural conver- sational model. arXiv preprint arxiv:1506.05869.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Show and tell: A neural image caption generator", "authors": [ { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Toshev", "suffix": "" }, { "first": "Samy", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Dumitru", "middle": [], "last": "Erhan", "suffix": "" } ], "year": 2015, "venue": "IEEE Conference on Computer Vision and Pattern Recognition, CVPR", "volume": "", "issue": "", "pages": "3156--3164", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural im- age caption generator. In IEEE Conference on Com- puter Vision and Pattern Recognition, CVPR, pages 3156-3164.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning", "authors": [ { "first": "J", "middle": [], "last": "Martin", "suffix": "" }, { "first": "Michael", "middle": [ "I" ], "last": "Wainwright", "suffix": "" }, { "first": "", "middle": [], "last": "Jordan", "suffix": "" } ], "year": 2008, "venue": "", "volume": "1", "issue": "", "pages": "1--305", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin J. Wainwright and Michael I. Jordan. 2008. Graphical models, exponential families, and varia- tional inference. 
Foundations and Trends in Ma- chine Learning, 1(1-2):1-305.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Understanding knowledge distillation in nonautoregressive machine translation", "authors": [ { "first": "Chunting", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Jiatao", "middle": [], "last": "Gu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chunting Zhou, Graham Neubig, and Jiatao Gu. 2019. Understanding knowledge distillation in non- autoregressive machine translation. arXiv preprint arxiv:1911.02727.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "models of different sizes: Transformer-big (Tr-L), Transformer-base (Tr-B) and Transformer-small (Tr-S). The first two models have the same hyperparameters as in Vaswani et al. (2017). Transformersmall has 2 attention heads, 5 encoder and decoder layers, d model = 256 and d filter = 1024.", "type_str": "figure", "uris": null }, "FIGREF1": { "num": null, "text": "Visualization of the latent space with 1K samples from the prior (green plus sign), the approximate posterior (blue circle) and the delta posterior (red cross) of Gauss-base (top) and Flow-small (bottom) on a IWSLT'16 De\u2192En test example.", "type_str": "figure", "uris": null }, "TABREF0": { "content": "
presents the comparison of three model families (Transformer, Gauss, Flow) on five language pairs in terms of generation quality (BLEU) and log-likelihood (LL). We present two sets of results: one from models trained on raw data (Raw), and the other from models trained on distilled data (Dist.).
", "num": null, "type_str": "table", "html": null, "text": "" }, "TABREF1": { "content": "", "num": null, "type_str": "table", "html": null, "text": "Test BLEU score and log-likelihood of each model. Raw: models trained on raw data. Dist.: models trained on distilled data. Tr-S: Transformer-small." }, "TABREF3": { "content": "
", "num": null, "type_str": "table", "html": null, "text": "Pearson's correlation between log-likelihood and BLEU across the training checkpoints of Transformer-base, Gauss-base and Flow-base on WMT'14 En\u2192De." }, "TABREF5": { "content": "
: Iterative inference with a delta posterior improves BLEU and ELBO for Gauss-base and Flow-base on IWSLT'16 De\u2192En (without distillation).
", "num": null, "type_str": "table", "html": null, "text": "" } } } }