{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:12:18.213926Z" }, "title": "Towards a Better Understanding of Label Smoothing in Neural Machine Translation", "authors": [ { "first": "Yingbo", "middle": [], "last": "Gao", "suffix": "", "affiliation": { "laboratory": "", "institution": "RWTH Aachen University", "location": { "postCode": "D-52056", "settlement": "Aachen", "country": "Germany" } }, "email": "" }, { "first": "Weiyue", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "", "institution": "RWTH Aachen University", "location": { "postCode": "D-52056", "settlement": "Aachen", "country": "Germany" } }, "email": "" }, { "first": "Christian", "middle": [], "last": "Herold", "suffix": "", "affiliation": { "laboratory": "", "institution": "RWTH Aachen University", "location": { "postCode": "D-52056", "settlement": "Aachen", "country": "Germany" } }, "email": "" }, { "first": "Zijian", "middle": [], "last": "Yang", "suffix": "", "affiliation": { "laboratory": "", "institution": "RWTH Aachen University", "location": { "postCode": "D-52056", "settlement": "Aachen", "country": "Germany" } }, "email": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "", "affiliation": { "laboratory": "", "institution": "RWTH Aachen University", "location": { "postCode": "D-52056", "settlement": "Aachen", "country": "Germany" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In order to combat overfitting and in pursuit of better generalization, label smoothing is widely applied in modern neural machine translation systems. The core idea is to penalize over-confident outputs and regularize the model so that its outputs do not diverge too much from some prior distribution. While training perplexity generally gets worse, label smoothing is found to consistently improve test performance. In this work, we aim to better understand label smoothing in the context of neural machine translation. Theoretically, we derive and explain exactly what label smoothing is optimizing for. Practically, we conduct extensive experiments by varying which tokens to smooth, tuning the probability mass to be deducted from the true targets and considering different prior distributions. We show that label smoothing is theoretically wellmotivated, and by carefully choosing hyperparameters, the practical performance of strong neural machine translation systems can be further improved.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "In order to combat overfitting and in pursuit of better generalization, label smoothing is widely applied in modern neural machine translation systems. The core idea is to penalize over-confident outputs and regularize the model so that its outputs do not diverge too much from some prior distribution. While training perplexity generally gets worse, label smoothing is found to consistently improve test performance. In this work, we aim to better understand label smoothing in the context of neural machine translation. Theoretically, we derive and explain exactly what label smoothing is optimizing for. Practically, we conduct extensive experiments by varying which tokens to smooth, tuning the probability mass to be deducted from the true targets and considering different prior distributions. 
We show that label smoothing is theoretically wellmotivated, and by carefully choosing hyperparameters, the practical performance of strong neural machine translation systems can be further improved.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In recent years, Neural Network (NN) models bring steady and concrete improvements on the task of Machine Translation (MT). From the introduction of sequence-to-sequence models (Cho et al., 2014; Sutskever et al., 2014a) , to the invention of the attention mechanism (Bahdanau et al., 2015; Luong et al., 2015) , end-to-end sequence learning with attention becomes the dominant design choice for Neural Machine Translation (NMT) models. From the study of convolutional sequence to sequence learning (Gehring et al., 2017a,b) , to the prosperity of self-attention networks (Vaswani et al., 2017; Devlin et al., 2019) , modern NMT systems, especially Transformer-based ones (Vaswani et al., 2017) , often deliver state-of-the-art performances (Bojar et al., 2018; Barrault et al., 2019) , even under the condition of large-scale corpora .", "cite_spans": [ { "start": 177, "end": 195, "text": "(Cho et al., 2014;", "ref_id": "BIBREF10" }, { "start": 196, "end": 220, "text": "Sutskever et al., 2014a)", "ref_id": "BIBREF40" }, { "start": 267, "end": 290, "text": "(Bahdanau et al., 2015;", "ref_id": "BIBREF1" }, { "start": 291, "end": 310, "text": "Luong et al., 2015)", "ref_id": "BIBREF23" }, { "start": 499, "end": 524, "text": "(Gehring et al., 2017a,b)", "ref_id": null }, { "start": 572, "end": 594, "text": "(Vaswani et al., 2017;", "ref_id": "BIBREF45" }, { "start": 595, "end": 615, "text": "Devlin et al., 2019)", "ref_id": "BIBREF13" }, { "start": 672, "end": 694, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF45" }, { "start": 741, "end": 761, "text": "(Bojar et al., 2018;", "ref_id": "BIBREF7" }, { "start": 762, "end": 784, "text": "Barrault et al., 2019)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In Transformer-based models, label smoothing is a widely applied method to improve model performance. Szegedy et al. (2016) initially introduce the method when making refinements to the Inception (Szegedy et al., 2015) model, with the motivation to combat overfitting and improve adaptability. In principle, label smoothing discounts a certain probability mass from the true label and redistributes it uniformly across all the class labels. This lowers the difference between the largest probability output and the others, effectively discouraging the model to generate overly confident predictions. Since information entropy (Shannon, 1948) can be thought of as a confidence measure of a probability distribution, Pereyra et al. (2017) add a negative entropy regularization term to the conventional cross entropy training criterion and compare it with uniform smoothing and unigram smoothing. deliver further insightful discussions about label smoothing, empirically investigating it in terms of model calibration, knowledge distillation and representation learning.", "cite_spans": [ { "start": 102, "end": 123, "text": "Szegedy et al. (2016)", "ref_id": "BIBREF43" }, { "start": 196, "end": 218, "text": "(Szegedy et al., 2015)", "ref_id": "BIBREF44" }, { "start": 626, "end": 641, "text": "(Shannon, 1948)", "ref_id": "BIBREF36" }, { "start": 715, "end": 736, "text": "Pereyra et al. 
(2017)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Label smoothing itself is an interesting topic that brings insights about the general learnability of a neural model. While existing methods are rather heuristical in their nature, the fact that simply discounting some probability mass from the true label and redistributing it with some prior distribution (see Figure 1 for an illustration) works in practice, is worth to be better understood.", "cite_spans": [], "ref_spans": [ { "start": 312, "end": 320, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we raise two high-level research questions to outline our work:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. Theoretically, what is label smoothing (or the related confidence penalty) optimizing for?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2. Practically, what is a good recipe in order to apply label smoothing successfully in NMT? Figure 1 : An illustration of label smoothing with various prior distributions. m and B are discounted probabiltiy masses. V is the vocabulary size and v 0 is the correct target word. 1 V , A and r v are prior distributions. Smoothing with (a), m is equally redistributed across the vocabulary. Smoothing with (b), A is implicitly 1 V everywhere as well, and the exact value of B can be obtained (Section 3.2). Smoothing with (c), m goes to each class in proportion to an arbitrary smoothing prior r v (Section 4.3).", "cite_spans": [ { "start": 277, "end": 278, "text": "1", "ref_id": null } ], "ref_spans": [ { "start": 93, "end": 101, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "V v0 1 V m (a) uniform distribution V v0 A B (b) confidence penalty V v0 rv m (c) arbitrary distribution", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The presentation of our results is organized into three major sections:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 First, we introduce a generalized formula for label smoothing and derive the theoretical solution to the training problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Second, we investigate various aspects that affect the training process and show an empirically good recipe to apply label smoothing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Finally, we examine the implications in search and scoring and motivate further research into the mismatch between training and testing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The extensive use of NNs in MT (Bojar et al., 2016 (Bojar et al., , 2017 (Bojar et al., , 2018 Barrault et al., 2019 ) is a result of many pioneering and inspiring works. 
Continuousvalued word vectors lay the foundation of modern Natural Language Processing (NLP) NNs, capturing semantic and syntactic relations and providing numerical ways to calculate meaningful distances among words (Bengio et al., 2001; Schwenk et al., 2006; Schwenk, 2007; Sundermeyer et al., 2012; Mikolov et al., 2013a,b) . The investigations of sequence-to-sequence learning (Cho et al., 2014; Sutskever et al., 2014b) , the studies of attention mechanism (Bahdanau et al., 2015; Luong et al., 2015) and the explorations into convolutional and self-attention NNs (Gehring et al., 2017a,b; Vaswani et al., 2017) mark steady and important steps in the field of NMT. Since the introduction of BERT (Devlin et al., 2019) , the Transformer model (Vaswani et al., 2017) becomes the de facto architectural choice for many competitive NLP systems. Among the numerous ingredients that make Transformer networks successful, label smoothing is one that must not be overlooked and shall be the focus of this work.", "cite_spans": [ { "start": 31, "end": 50, "text": "(Bojar et al., 2016", "ref_id": "BIBREF6" }, { "start": 51, "end": 72, "text": "(Bojar et al., , 2017", "ref_id": "BIBREF5" }, { "start": 73, "end": 94, "text": "(Bojar et al., , 2018", "ref_id": "BIBREF7" }, { "start": 95, "end": 116, "text": "Barrault et al., 2019", "ref_id": "BIBREF2" }, { "start": 387, "end": 408, "text": "(Bengio et al., 2001;", "ref_id": "BIBREF3" }, { "start": 409, "end": 430, "text": "Schwenk et al., 2006;", "ref_id": "BIBREF34" }, { "start": 431, "end": 445, "text": "Schwenk, 2007;", "ref_id": "BIBREF33" }, { "start": 446, "end": 471, "text": "Sundermeyer et al., 2012;", "ref_id": "BIBREF39" }, { "start": 472, "end": 496, "text": "Mikolov et al., 2013a,b)", "ref_id": null }, { "start": 551, "end": 569, "text": "(Cho et al., 2014;", "ref_id": "BIBREF10" }, { "start": 570, "end": 594, "text": "Sutskever et al., 2014b)", "ref_id": "BIBREF42" }, { "start": 632, "end": 655, "text": "(Bahdanau et al., 2015;", "ref_id": "BIBREF1" }, { "start": 656, "end": 675, "text": "Luong et al., 2015)", "ref_id": "BIBREF23" }, { "start": 739, "end": 764, "text": "(Gehring et al., 2017a,b;", "ref_id": null }, { "start": 765, "end": 786, "text": "Vaswani et al., 2017)", "ref_id": "BIBREF45" }, { "start": 871, "end": 892, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF13" }, { "start": 917, "end": 939, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF45" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The idea of smoothing is not new in itself. For instance, many smoothing heuristics and functions are investigated in the context of count-based language modeling (Jelinek and Mercer, 1980; Katz, 1987; Church and Gale, 1991; Kneser and Ney, 1995; Chen and Goodman, 1996) . 
Interestingly, when training NNs, the idea of smoothing comes in a new form and is applied on the empirical one-hot target distributions.", "cite_spans": [ { "start": 163, "end": 189, "text": "(Jelinek and Mercer, 1980;", "ref_id": "BIBREF19" }, { "start": 190, "end": 201, "text": "Katz, 1987;", "ref_id": "BIBREF20" }, { "start": 202, "end": 224, "text": "Church and Gale, 1991;", "ref_id": "BIBREF11" }, { "start": 225, "end": 246, "text": "Kneser and Ney, 1995;", "ref_id": "BIBREF22" }, { "start": 247, "end": 270, "text": "Chen and Goodman, 1996)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Proposed to counteract overfitting and pursue better generalization, label smoothing (Szegedy et al., 2016) finds its first applications in NNs in the field of computer vision. Later, the method is shown to be effective in MT (Vaswani et al., 2017) . Furthermore, it is also helpful when applied in other scenarios, e.g. Generative Adversarial Networks (GANs) (Salimans et al., 2016) , automatic speech recognition (Chiu et al., 2018) , and person re-identification (Ainam et al., 2019) . Since the method centralizes on the idea of avoiding over-confident model outputs on training data, it is reanalyzed in Pereyra et al. (2017) . The authors include an additional confidence penalty regularization term in the training loss, and compare it to standard label smoothing with uniform or unigram prior. While label smoothing boosts performance significantly compared to using hard target labels, the difference in performance gains when comparing different smoothing methods is relatively small. bring recent advancements towards better intuitive understandings of label smoothing. They observe a clustering effect of learned features and argue that label smoothing improves model calibration, yet hurting knowledge distillation when the model is used as a teacher for another student network.", "cite_spans": [ { "start": 85, "end": 107, "text": "(Szegedy et al., 2016)", "ref_id": "BIBREF43" }, { "start": 226, "end": 248, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF45" }, { "start": 360, "end": 383, "text": "(Salimans et al., 2016)", "ref_id": "BIBREF32" }, { "start": 415, "end": 434, "text": "(Chiu et al., 2018)", "ref_id": "BIBREF9" }, { "start": 466, "end": 486, "text": "(Ainam et al., 2019)", "ref_id": "BIBREF0" }, { "start": 609, "end": 630, "text": "Pereyra et al. (2017)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "As a regularization technique in training, label smoothing can be compared against other methods such as dropout (Srivastava et al., 2014) and Dis-turbLabel (Xie et al., 2016) . Intuitively, dropout can be viewed as ensembling different model architectures on the same data and DisturbLabel can be viewed as ensembling the same model architecture on different data, as pointed out in Xie et al. (2016) . Interestingly, label smoothing can also be understood as estimating the marginalized label dropout during training (Pereyra et al., 2017) . In this paper, we propose two straightforward extensions to label smoothing, examining token selection and prior distribution. Salimans et al. (2016) and Zhou et al. (2017) investigate a similar issue to the former. In the context of GANs, they select only those positive examples to smooth while we consider the task of MT, discussing how many tokens to smooth and how they should be selected. Pereyra et al. (2017) and Gao et al. 
(2019) talk about ideas similar to the latter. In their respective contexts, one experiments with unigram probabilities for label smoothing and the other uses Language Model (LM) posteriors to softly augment the source and target side of MT training data.", "cite_spans": [ { "start": 113, "end": 138, "text": "(Srivastava et al., 2014)", "ref_id": "BIBREF37" }, { "start": 157, "end": 175, "text": "(Xie et al., 2016)", "ref_id": "BIBREF48" }, { "start": 384, "end": 401, "text": "Xie et al. (2016)", "ref_id": "BIBREF48" }, { "start": 519, "end": 541, "text": "(Pereyra et al., 2017)", "ref_id": "BIBREF30" }, { "start": 671, "end": 693, "text": "Salimans et al. (2016)", "ref_id": "BIBREF32" }, { "start": 698, "end": 716, "text": "Zhou et al. (2017)", "ref_id": "BIBREF50" }, { "start": 939, "end": 960, "text": "Pereyra et al. (2017)", "ref_id": "BIBREF30" }, { "start": 965, "end": 982, "text": "Gao et al. (2019)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The standard label smoothing (STN) loss, as used by Vaswani et al. (2017) , can be expressed as:", "cite_spans": [ { "start": 52, "end": 73, "text": "Vaswani et al. (2017)", "ref_id": "BIBREF45" } ], "ref_spans": [], "eq_spans": [], "section": "Solving the Training Problem", "sec_num": "3" }, { "text": "L STN = \u2212 N n=1 V v=1 (1 \u2212 m)p v + m 1 V log q v", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Solving the Training Problem", "sec_num": "3" }, { "text": "(1) where L STN denotes the cross entropy with standard label smoothing, n is a running index in the total number of training tokens N , v is a running index in the target vocabulary V , m is the hyperparameter that controls the amount of probability mass to discount, p v is the one-hot true target distribution and q v is the output distribution of the model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Solving the Training Problem", "sec_num": "3" }, { "text": "The confidence penalty (CFD) loss, as used by Pereyra et al. (2017) , can be expressed as:", "cite_spans": [ { "start": 46, "end": 67, "text": "Pereyra et al. (2017)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Solving the Training Problem", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L CFD = \u2212 N n=1 V v=1 p v \u2212 m q v log q v", "eq_num": "(2)" } ], "section": "Solving the Training Problem", "sec_num": "3" }, { "text": "where L CFD denotes the confidence-penalized cross entropy, m in this case is the hyperparameter that controls the strength of the confidence penalty and thus differs from m in Equation 1. In both cases, the outer summation is over all of the training tokens N , implicating that all of the target token probabilities are smoothed. The dependencies of q v and p v on n are omitted for simplicity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Solving the Training Problem", "sec_num": "3" }, { "text": "Additionally for Equation 1, authors of both papers (Vaswani et al., 2017; Pereyra et al., 2017) point out that the uniform prior can be replaced with alternative distributions over the target vocabulary. One more thing to notice is the negative sign in front of the non-negative term m in Equation 2, which means that p v \u2212 m q v is not a probability distribution anymore. 
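Before moving on, Equations 1 and 2 can be made concrete with a short PyTorch sketch. This is not the authors' implementation; the flat batch layout (N token logits over a vocabulary of size V) and the function names are our own assumptions:

```python
import torch
import torch.nn.functional as F

def smoothed_cross_entropy(logits, targets, m=0.1):
    # Eq. 1: cross entropy against (1 - m) * one-hot + m * (1/V) uniform prior
    log_q = F.log_softmax(logits, dim=-1)                          # log q_v, shape (N, V)
    nll = -log_q.gather(-1, targets.unsqueeze(-1)).squeeze(-1)     # -log q_{v0}
    uniform_term = -log_q.mean(dim=-1)                             # -(1/V) * sum_v log q_v
    return ((1.0 - m) * nll + m * uniform_term).sum()

def confidence_penalized_cross_entropy(logits, targets, m_prime=0.1):
    # Eq. 2: cross entropy minus m' times the entropy of the model output q
    log_q = F.log_softmax(logits, dim=-1)
    q = log_q.exp()
    nll = -log_q.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    neg_entropy = (q * log_q).sum(dim=-1)                          # equals -H(q)
    return (nll + m_prime * neg_entropy).sum()
```

Both functions reduce to the ordinary cross entropy when m (respectively m') is set to zero.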
One can nonetheless apply tricks to normalize the term inside the parentheses so that it becomes a probability distribution, e.g.:", "cite_spans": [ { "start": 52, "end": 74, "text": "(Vaswani et al., 2017;", "ref_id": "BIBREF45" }, { "start": 75, "end": 96, "text": "Pereyra et al., 2017)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Solving the Training Problem", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L CFD normalized 1 = \u2212 N n=1 V v=1 log q v \u2022 (p v \u2212 m q v ) \u2212 min(p v \u2212 m q v ) V v =1 (p v \u2212 m q v ) \u2212 min(p v \u2212 m q v )", "eq_num": "(3)" } ], "section": "Solving the Training Problem", "sec_num": "3" }, { "text": "or", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Solving the Training Problem", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L CFD normalized 2 = \u2212 N n=1 V v=1 log q v \u2022 exp(p v \u2212 m q v ) V v =1 exp(p v \u2212 m q v )", "eq_num": "(4)" } ], "section": "Solving the Training Problem", "sec_num": "3" }, { "text": "and implement it as an additional layer of activation during training, where v is an alternative running index in the vocabulary. In any case, the integration of Equation 2 into the form of Equation 1 cannot be done without significantly modifying the original confidence penalty, and we leave it for future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Solving the Training Problem", "sec_num": "3" }, { "text": "In an effort to obtain a unified view, we propose a simple generalized formula and make two major changes. First, we separate the outer summation over the tokens and divide it into two summations, namely \"not to smooth\" and \"to smooth\". Second, we modify the prior distribution to allow it to depend on the position, current token and model output. In this case, r could be the posterior from some helper model (e.g. an LM), and during training, obtaining it on-the-fly is not expensive, as previously shown (Bi et al., 2019; . The generalized label smoothing (GNR) loss can be expressed as:", "cite_spans": [ { "start": 508, "end": 525, "text": "(Bi et al., 2019;", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Generalized Formula", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L GNR = \u2212 n\u2208A V v=1 p v log q v \u2212 n\u2208B V v=1 ((1 \u2212 m)p v + mr v,qv ) log q v", "eq_num": "(5)" } ], "section": "Generalized Formula", "sec_num": "3.1" }, { "text": "where L GNR denotes the generalized cross entropy, A is the set of tokens not to smooth, B is the set of tokens to smooth, r v,qv is an arbitrary prior distribution for smoothing and again we drop the dependencies of p v , q v and r v,qv on n for simplicity. A natural question when explicitly writing out A and B, s.t. A \u2229 B = \u2205 and |A \u222a B| = N , is which tokens to include in B. Here, we consider two simple ideas: uniform random sampling (RND) and an entropy-based uncertainty heuristic (ENT). The former chooses a certain percentage of tokens to smooth by sampling tokens uniformly at random. The latter prioritizes those tokens whose prior distributions have higher entropy. 
The logic behind the ENT formulation is that when the prior distribution is flattened out, yielding a higher entropy, the helper model is uncertain about the current position, and the model output should thus be smoothed. Formally, the two heuristics can be expressed as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generalized Formula", "sec_num": "3.1" }, { "text": "B RND = {n; \u03c1 n \u223c U (0, 1), \u03c1 n \u2264 \u03c0} (6) B ENT = {b 1 , b 2 , ..., b \u03c0N } (7)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generalized Formula", "sec_num": "3.1" }, { "text": "where \u03c1 n is a sample from the uniform distribution U in [0, 1], \u03c0 is a hyperparameter controlling the percentage of tokens to smooth and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generalized Formula", "sec_num": "3.1" }, { "text": "{b 1 , b 2 , ..., b N } is a permutation of data indices {1, 2, ...N } in de- scending order of the entropy of prior r, i.e. \u22001 \u2264 i \u2264 j \u2264 N , \u2212 V r b i log r b i \u2265 \u2212 V r b j log r b j .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generalized Formula", "sec_num": "3.1" }, { "text": "The hyperparameter m in Equation 5 deserves some further notice. This is essentially the parameter that controls the strength of the label smoothing procedure. When it is zero, no smoothing is done. When it is one and |B| = N , the model is optimized to output the prior distribution r. One can obviously further generalize it so that m depends also on n, v and q v . However in this work, we focus on the outer summation in N and alternative priors r, and leave the exploration of adaptive smoothing strength m n,r,qv for future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generalized Formula", "sec_num": "3.1" }, { "text": "When it comes to the analysis of label smoothing, previous works focus primarily on intuitive understandings. Pereyra et al. (2017) observe that both label smoothing and confidence penalty lead to smaller gradient norms during training. argue that label smoothing helps beamsearch by improving model calibration. They further visualize the learned features and show a clustering effect of features from the same class. In this work, we concentrate on finding a theoretical solution to the training problem, and show exactly what label smoothing and confidence penalty are optimizing for.", "cite_spans": [ { "start": 110, "end": 131, "text": "Pereyra et al. (2017)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Theoretical Solution", "sec_num": "3.2" }, { "text": "Consider the optimization problem when training with Equation 1:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theoretical Solution", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "min q 1 ,q 2 ,...,q V L STN n , s.t. 
V v=1 q v = 1", "eq_num": "(8)" } ], "section": "Theoretical Solution", "sec_num": "3.2" }, { "text": "While in practice we use gradient optimizers to obtain a good set of parameters of the NN, the optimization problem actually has well-defined analytical solutions locally:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theoretical Solution", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "q STN v = (1 \u2212 m)p v + m 1 V", "eq_num": "(9)" } ], "section": "Theoretical Solution", "sec_num": "3.2" }, { "text": "which is simply a linear interpolation between the one-hot target distribution p v and the smoothing prior 1 V , with m \u2208 [0, 1] being the interpolation weight. One can use either the divergence inequality or the Lagrange multiplier method to obtain this result (see Appendix A).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theoretical Solution", "sec_num": "3.2" }, { "text": "Consider the optimization problem when training with Equation 2:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theoretical Solution", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "min q 1 ,q 2 ,...,q V L CFD n , s.t. V v=1 q v = 1", "eq_num": "(10)" } ], "section": "Theoretical Solution", "sec_num": "3.2" }, { "text": "The problem becomes harder because now the regularization term also depends on q v . Introducing the Lagrange multiplier \u03bb and solving for optima will result in a transcendental equation. Making use of the Lambert W function (Corless et al., 1996) , the solution can be expressed as (see Appendix A for detailed derivation):", "cite_spans": [ { "start": 225, "end": 247, "text": "(Corless et al., 1996)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Theoretical Solution", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "q CFD v = p v m W 0 pv m e 1+ \u03bb m", "eq_num": "(11)" } ], "section": "Theoretical Solution", "sec_num": "3.2" }, { "text": "where W 0 is the principal branch of the Lambert W function and \u03bb is the Lagrange multiplier, which is numerically solvable 1 when non-negative m and probability distribution p v are given. Equation 11 essentially gives a non-linear relationship betwee\u00f1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theoretical Solution", "sec_num": "3.2" }, { "text": "q CFD v", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theoretical Solution", "sec_num": "3.2" }, { "text": "and p v , controlled by the hyperparameter m . Now that theoretical solutions are presented in Equation 9 and 11, it is possible to plot the graphs of optimalq v , with respect to m and m . Shown in Figure 2 , as expected for both STN and CFD, the overall effect is to decrease q v when p v = 1 and increase q v when p v = 0. When m or m gets large ", "cite_spans": [], "ref_spans": [ { "start": 199, "end": 207, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Theoretical Solution", "sec_num": "3.2" }, { "text": "q v \u00d710 \u22125 1 \u1e7c q CFD v vs. m \u2032 q STD v vs. 
m (b) pv = 0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theoretical Solution", "sec_num": "3.2" }, { "text": "v andq CFD v", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theoretical Solution", "sec_num": "3.2" }, { "text": ", we set V = 32000, which is a common vocabulary size when operating on sub-word levels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theoretical Solution", "sec_num": "3.2" }, { "text": "enough, the total probability mass is discounted and 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theoretical Solution", "sec_num": "3.2" }, { "text": "V is redistributed to each token in the vocabulary. The graph of GRN 2 is similar to STD, only changing the limit from 1 V to r v as m approaches one, and not included here for brevity. One last thing to notice is that the outer summation over the tokens is ignored. If it is taken into consideration,q is dragged towards the empirical distribution given by the corpus 3 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theoretical Solution", "sec_num": "3.2" }, { "text": "In this section, we describe our results and insights towards a good recipe to successfully apply label smoothing. We experiment with six IWSLT2014 datasets: German (de), Spanish (es), Italian (it), Dutch (nl), Romanian (ro), Russian (ru) to English (en), and one WMT2014 dataset: English to German. The statistics of these datasets are summarized in Table 1 . To prepare the subword tokens, we adopt joint byte pair encoding (Sennrich et al., 2016) , and use 10K and 32K merge operations on IWSLT and WMT, respectively. When preprocessing IWSLT, we remove sentences longer than 175 words, lowercase both source and target sides, randomly subsample roughly 4.35% of the training sentence pairs as development data and concatenate all previously available development and test sets as test data, similar to Gehring et al. (2017a) . As for the preprocessing of WMT, we follow the setup in . Using the Transformer architec-ture (Vaswani et al., 2017) , we apply the base setup for IWSLT and the big setup for WMT. For all language pairs, we share all three embedding matrices. All helper models are also Transformer-based. We conduct all experiments using fairseq (Ott et al., 2019) , monitor development set perplexity during training, and report BLEU (Papineni et al., 2002) scores on test sets after beam search.", "cite_spans": [ { "start": 426, "end": 449, "text": "(Sennrich et al., 2016)", "ref_id": "BIBREF35" }, { "start": 806, "end": 828, "text": "Gehring et al. (2017a)", "ref_id": "BIBREF16" }, { "start": 925, "end": 947, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF45" }, { "start": 1161, "end": 1179, "text": "(Ott et al., 2019)", "ref_id": "BIBREF27" }, { "start": 1250, "end": 1273, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF29" } ], "ref_spans": [ { "start": 351, "end": 358, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Finding a Good Recipe", "sec_num": "4" }, { "text": "The first thing to determine is how to select tokens for smoothing and how many tokens to smooth. For this purpose, we begin by considering models smoothed with an LM helper. The helper LM is trained on target sentences from the corresponding parallel data till convergence. Figure 3 shows a comparison between RND and ENT, varying the percentage of smoothed tokens \u03c0 and using the absolute performance improvements in BLEU as the vertical axis. 
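The two selection heuristics compared here can be sketched as follows. This is an illustrative NumPy version written under the assumption that the helper model's distributions for all N training tokens are available as one array; variable and function names are ours:

```python
import numpy as np

def select_tokens_to_smooth(prior, pi, method="RND", seed=0):
    """Return the indices of tokens to smooth (the set B of Equations 6 and 7).

    prior: (N, V) array holding the helper model's distribution r for each token.
    pi:    fraction of the N training tokens to smooth.
    """
    N = prior.shape[0]
    if method == "RND":                     # Eq. 6: keep token n if rho_n <= pi
        rng = np.random.default_rng(seed)
        return np.flatnonzero(rng.uniform(size=N) <= pi)
    if method == "ENT":                     # Eq. 7: highest-entropy priors first
        entropy = -(prior * np.log(prior + 1e-12)).sum(axis=1)
        return np.argsort(-entropy)[: int(pi * N)]
    raise ValueError(f"unknown method: {method}")
```

With π = 1, both branches return every token index, which is exactly the sanity check discussed next.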
Since the two methods only affect the order in which tokens are selected, they should yield the exact same results when all tokens are selected. This can be clearly seen from the figure and serves as a sanity check for the correctness of dataset IWSLT WMT language pair de-en es-en it-en nl-en ro-en ru-en en- the implementation. The RND and ENT curves follow a similar trend, increasing with the number of smoothed tokens. From the curves, neither selection method is consistently better than the other, indicating that the entropy-based selection heuristics is probably an oversimplification considering the stochasticity introduced when altering the number of smoothed tokens. We continue to examine the uphill trend seen in Figure 3 in other cases. Figure 4 reveals the relationship between absolute BLEU improvements and \u03c0, when smoothing with uniform or unigram (RND) distributions. While for each language pair the actual changes in BLEU differ, it is clear to conclude that, the more tokens smoothed, the better the performance. This conclusion is rather universal and holds true for the majority of our experiment settings (varying m and r). From here on, we smooth all tokens, i.e. |B| = N , by default.", "cite_spans": [], "ref_spans": [ { "start": 275, "end": 283, "text": "Figure 3", "ref_id": "FIGREF2" }, { "start": 1174, "end": 1182, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Token Selection", "sec_num": "4.1" }, { "text": "Our next goal is to find good values of m. The discounted probability mass m is a tunable hyperparameter that is set to 0.1 in the original Transformer (Vaswani et al., 2017) paper. We vary this parameter in the case of uniform smoothing and unigram smoothing, and plot the results in Figure 5 . As shown in Figure 5a , the BLEU score immediately improves at m = 0.1, then plateaus when m \u2208 [0.3, 0.6], slowly decreases when m \u2208 [0.7, 0.9] and quickly drops to zero when m approaches one. When m = 1, the model is optimized towards a uniform distribution and completely ignores the training data. Because perplexity can be thought of as the effective vocabulary size of a model, we examine the perplexities when m = 1 for both language pairs. As expected, the development perplexities are around 10K, which is in the same order of magnitude as the corresponding vocabulary sizes. Another interesting observation is that the BLEU scores only drop when m gets close to one and the model produces acceptable translations elsewhere. This indicates that NN models trained with gradient optimizers are very good at picking out the effective training signals even when they are buried in much stronger noise signals (the uniform smoothing priors in the case of Figure 5a ). This could be further related to multi-task learning (Ruder, 2017) , where the system performances are also related to the regularization weights of the auxiliary losses. For unigram, we vary m in {0.1, 0.2, 0.3}. 
As seen in Figure 5b , while smoothing with m = 0.1 gives a large improvement over no smoothing, setting m = 0.3 further boosts the performance, consistently for all six IWSLT language pairs.", "cite_spans": [ { "start": 152, "end": 174, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF45" }, { "start": 1320, "end": 1333, "text": "(Ruder, 2017)", "ref_id": "BIBREF31" } ], "ref_spans": [ { "start": 285, "end": 293, "text": "Figure 5", "ref_id": "FIGREF5" }, { "start": 308, "end": 317, "text": "Figure 5a", "ref_id": "FIGREF5" }, { "start": 1254, "end": 1263, "text": "Figure 5a", "ref_id": "FIGREF5" }, { "start": 1492, "end": 1502, "text": "Figure 5b", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Probability Mass", "sec_num": "4.2" }, { "text": "Furthermore, we explore the use of LM and MT posteriors as prior distributions for smoothing. We train systems using Transformer LMs and MT models of different qualities for label smoothing, as in Figure 6 . To obtain very good LMs, we train them with test data and mark the cheating LMs in Figure 6a . We additionally plot the BLEU scores of models with no smoothing, smoothed with uniform and unigram, as horizontal lines to compare the absolute performances. Intuitively, the curve should follow a downhill trend, meaning that the worse the helper model performs, the worse the model smoothed with it performs. This is loosely the case for LM, with cheating LMs giving better performances than uniform and unigram, and normal LMs lacking behind. As for MT, improvement over the no smoothing case is seen in Figure 6b . However, neither the downhill trend nor the competence over other priors in terms of BLEU, is seen. This suggests that the model is probably not utilizing the information in the soft distribution effectively. Related to knowledge distillation (Hinton et al., 2015; Kim and Rush, 2016) , a trainable teacher (the helper model in our case) might be further beneficial (Bi et al., 2019; Wang et al., 2018) .", "cite_spans": [ { "start": 1066, "end": 1087, "text": "(Hinton et al., 2015;", "ref_id": "BIBREF18" }, { "start": 1088, "end": 1107, "text": "Kim and Rush, 2016)", "ref_id": "BIBREF21" }, { "start": 1189, "end": 1206, "text": "(Bi et al., 2019;", "ref_id": "BIBREF4" }, { "start": 1207, "end": 1225, "text": "Wang et al., 2018)", "ref_id": "BIBREF46" } ], "ref_spans": [ { "start": 197, "end": 205, "text": "Figure 6", "ref_id": "FIGREF6" }, { "start": 291, "end": 300, "text": "Figure 6a", "ref_id": "FIGREF6" }, { "start": 810, "end": 820, "text": "Figure 6b", "ref_id": "FIGREF6" } ], "eq_spans": [], "section": "Prior Distribution", "sec_num": "4.3" }, { "text": "One important thing to mention is that, while neither LM nor MT outperforms uniform or unigram in terms of test BLEU score in our experiments, we see significant drops in development set perplexities when smoothing with LM or MT. This signals a mismatch between training and testing, and suggests that smoothing with LM or MT indeed works well for the optimization criterion, but not as much for the final metric, the calculation of which involves beam search and scoring of the discrete tokens.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prior Distribution", "sec_num": "4.3" }, { "text": "Finally, we report BLEU scores of our best systems across all language pairs in Table 2 . While applying uniform label smoothing significantly improves over the baselines, by using a good recipe, an additional improvement of around +0.5 BLEU is obtained across all language pairs. 
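The unigram prior used in the best-performing configurations can be estimated directly from the training targets and plugged into the generalized target distribution of Equation 5. The following is a rough sketch under simplifying assumptions (all tokens smoothed, a single flat tensor of training target ids; names are ours, not part of fairseq):

```python
import torch
import torch.nn.functional as F

def unigram_prior(train_target_ids, vocab_size):
    # r_v: relative frequency of each target token in the training data
    counts = torch.bincount(train_target_ids.flatten(), minlength=vocab_size).float()
    return counts / counts.sum()

def smoothed_target_distribution(targets, prior, m=0.3):
    # Eq. 5 with B = all tokens: (1 - m) * one-hot + m * r_v
    one_hot = F.one_hot(targets, num_classes=prior.numel()).float()
    return (1.0 - m) * one_hot + m * prior.unsqueeze(0)

# per-batch training loss for logits of shape (N, V) and targets of shape (N,):
# loss = -(smoothed_target_distribution(targets, prior) * F.log_softmax(logits, -1)).sum()
```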
For the hyperparameters, we find that smoothing all tokens by m = 0.3 with a unigram prior is a good recipe, consistently giving one of the best BLEU scores.", "cite_spans": [], "ref_spans": [ { "start": 80, "end": 87, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Final Results", "sec_num": "4.4" }, { "text": "As discussed in Section 4.3, models smoothed with LMs or MT model posteriors yield very good development set perplexities but no big improvements in terms of test BLEU scores. Here, we further investigate this phenomenon in terms of search and scoring.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analyzing the Mismatch", "sec_num": "5" }, { "text": "We first plot the test BLEU scores with respect to the beam size used during search. In Figure 7 , we see that the dashed curves for \"no smoothing\", \"uniform\" and \"unigram\" initially increase and then plateau, which is an expected shape (see Figure 8 dataset IWSLT WMT language pair de-en es-en it-en nl-en ro-en ru-en en- Table 2 : BLEU scores can be significantly improved with good label smoothing recipes. The first row of numbers corresponds to using only the cross entropy criterion for training. The second row of numbers corresponds to the Transformer baselines. The last row contains scores obtained with our best hyperparameters. in ). However, the solid curve for LM drops quickly as beam size increases (see Stahlberg and Byrne (2019) for more insight). A possible explanation is that models smoothed with LMs generate search spaces that are richer in probability variations and more diversified, compared to e.g. uniform label smoothing. As search becomes stronger, hypotheses that have higher probabilities, but not necessarily closer to the true targets, are found. This suggests that the mismatch in development set perplexity and test BLEU is a complex phenomenon and calls for more analysis.", "cite_spans": [ { "start": 720, "end": 746, "text": "Stahlberg and Byrne (2019)", "ref_id": "BIBREF38" } ], "ref_spans": [ { "start": 88, "end": 96, "text": "Figure 7", "ref_id": "FIGREF8" }, { "start": 242, "end": 250, "text": "Figure 8", "ref_id": "FIGREF9" }, { "start": 323, "end": 330, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Search", "sec_num": "5.1" }, { "text": "We further examine test BLEU with respect to development (dev) BLEU and dev perplexity. As shown in Figure 8a , test BLEU is nicely correlated with dev BLEU, indicating that there is no mismatch between dev and test in the dataset itself. However, as in Figure 8b , although test BLEU increases with a decreasing dev perplexity, in regions of low dev perplexities, there exist many systems with very different test performances ranging from 39.3 BLEU to 41.5 BLEU. Despite perplexity being directly related to the cross entropy training criterion, this is an example where it fails to be a good proxy for the final BLEU metric. Against this mismatch between training and testing, either a more BLEU-related dev score or a more perplexityrelated test metric needs to be considered. ", "cite_spans": [], "ref_spans": [ { "start": 100, "end": 109, "text": "Figure 8a", "ref_id": "FIGREF9" }, { "start": 254, "end": 263, "text": "Figure 8b", "ref_id": "FIGREF9" } ], "eq_spans": [], "section": "Scoring", "sec_num": "5.2" }, { "text": "In this work, we investigate label smoothing in neural machine translation. 
Considering important aspects in label smoothing: token selection, probability mass and prior distribution, we introduce a generalized formula and derive theoretical solutions to the training problem. Examining the effect of various hyperparameter choices, practically we show that with a good label smoothing recipe, one can obtain consistent improvements over strong baselines. Delving into search and scoring, we finally emphasize the mismatch between training and testing, and motivate future research. Reassuring that label smoothing brings concrete improvements and considering that it only operates at the output side of the model, our next step is to explore similar smoothing ideas at the input side.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Assuming r only depends on n, v and not qv. In the latter case, one needs to solve the optimization problem ignoring the outer summation and reusing the Lagrange multiplier.3 For an intuitive understanding, consider the case when two sentence pairs have the exact same context up to a certain target position but the next tokens are different (e.g. \"Danke .\" in German being translated to \"Thank you .\" and \"Thank you very much .\" in English, the period in the first translation and \"very\" in the second translation have the same context.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work has received funding from the European Research Council (ERC) (under the European Union's Horizon 2020 research and innovation programme, grant agreement No 694537, project \"SE-QCLAS\") and the Deutsche Forschungsgemeinschaft (DFG; grant agreement NE 572/8-1, project \"CoreTec\"). The GPU computing cluster was supported by DFG (Deutsche Forschungsgemeinschaft) under grant INST 222/1168-1 FUGG.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null }, { "text": "Ignoring the outer summation in tokens and dropping the dependencies on n for simplicity, the optimization problem in Equation 8 can be solved analytically and the optimization problem in Equation 10 can be solved numerically.takes the form of x p log q, where both p and q are probability distributions in x. The divergence inequality can be directly applied:Alternatively, one can use the Lagrange multiplier and calculate first order derivatives:Afterwards, set them to zero and solve for \u03bb:Plugging \u03bb back in yield q v , which should be further checked to see if it is a maxima or minima.In both methods, the minimum is obtained when:Applying the Lagrange multiplier, the first order derivatives can be derived:Note that setting \u2202L CFD n \u2202qv to zero results in a transcendental equation in the form of:Consider that the Lambert W function is the inverse function of:we can rewrite the transcendental equation until we reach a similar form:reversing the variable replacements:Finally, plugging in A, B and C, we arrive at Equation 11:When p is a one hot distribution and m is given, one can use the constraint of q v being a probability distribution to numerically solve for \u03bb. 
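As an illustration, such a numerical solve can be written with SciPy's Lambert W routine and a bracketing root finder. This is our own sketch, not part of the paper's code; the bracketing interval for λ is a heuristic assumption that covers the values of m' considered here:

```python
import numpy as np
from scipy.special import lambertw
from scipy.optimize import brentq

def solve_cfd_optimum(m_prime, V=32000):
    """For a one-hot p, find lambda such that the q_v of Eq. 11 sum to one."""
    def q_true(lam):      # Eq. 11 with p_v = 1
        return 1.0 / (m_prime * lambertw(np.exp(1.0 + lam / m_prime) / m_prime).real)

    def q_rest(lam):      # limit of Eq. 11 as p_v -> 0: exp(-1 - lambda/m')
        return np.exp(-1.0 - lam / m_prime)

    def excess_mass(lam):
        return q_true(lam) + (V - 1) * q_rest(lam) - 1.0

    # heuristic bracket: excess_mass is positive at 0 and negative at the upper end
    lam = brentq(excess_mass, 0.0, 2.0 * (1.0 + m_prime * np.log(V)))
    return q_true(lam), q_rest(lam)
```

This makes it possible to re-create curves of the kind shown in Figure 2.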
Once \u03bb is obtained, actual values ofq CFD v can be calculated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Derivation of Optimal Solutions", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Sparse label smoothing regularization for person reidentification", "authors": [ { "first": "J", "middle": [], "last": "Ainam", "suffix": "" }, { "first": "K", "middle": [], "last": "Qin", "suffix": "" }, { "first": "G", "middle": [], "last": "Liu", "suffix": "" }, { "first": "G", "middle": [], "last": "Luo", "suffix": "" } ], "year": 2019, "venue": "IEEE Access", "volume": "7", "issue": "", "pages": "27899--27910", "other_ids": { "DOI": [ "10.1109/ACCESS.2019.2901599" ] }, "num": null, "urls": [], "raw_text": "J. Ainam, K. Qin, G. Liu, and G. Luo. 2019. Sparse label smoothing regularization for person re- identification. IEEE Access, 7:27899-27910.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2015, "venue": "3rd International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Findings of the 2019 conference on machine translation (WMT19)", "authors": [ { "first": "Lo\u00efc", "middle": [], "last": "Barrault", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Marta", "middle": [ "R" ], "last": "Costa-Juss\u00e0", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Federmann", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Fishel", "suffix": "" }, { "first": "Yvette", "middle": [], "last": "Graham", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Huck", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Shervin", "middle": [], "last": "Malmasi", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" }, { "first": "Mathias", "middle": [], "last": "M\u00fcller", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Fourth Conference on Machine Translation", "volume": "2", "issue": "", "pages": "1--61", "other_ids": { "DOI": [ "10.18653/v1/W19-5301" ] }, "num": null, "urls": [], "raw_text": "Lo\u00efc Barrault, Ond\u0159ej Bojar, Marta R. Costa-juss\u00e0, Christian Federmann, Mark Fishel, Yvette Gra- ham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, Christof Monz, Mathias M\u00fcller, Santanu Pal, Matt Post, and Marcos Zampieri. 2019. Findings of the 2019 conference on machine transla- tion (WMT19). In Proceedings of the Fourth Con- ference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 1-61, Florence, Italy. 
As- sociation for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A neural probabilistic language model", "authors": [ { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "R\u00e9jean", "middle": [], "last": "Ducharme", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Vincent", "suffix": "" } ], "year": 2001, "venue": "Advances in Neural Information Processing Systems 13", "volume": "", "issue": "", "pages": "932--938", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshua Bengio, R\u00e9jean Ducharme, and Pascal Vincent. 2001. A neural probabilistic language model. In T. K. Leen, T. G. Dietterich, and V. Tresp, editors, Advances in Neural Information Processing Systems 13, pages 932-938. MIT Press.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Multi-agent learning for neural machine translation", "authors": [ { "first": "", "middle": [], "last": "Bi", "suffix": "" }, { "first": "Zhongjun", "middle": [], "last": "Hao Xiong", "suffix": "" }, { "first": "Hua", "middle": [], "last": "He", "suffix": "" }, { "first": "Haifeng", "middle": [], "last": "Wu", "suffix": "" }, { "first": "", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "856--865", "other_ids": { "DOI": [ "10.18653/v1/D19-1079" ] }, "num": null, "urls": [], "raw_text": "tianchi Bi, hao xiong, Zhongjun He, Hua Wu, and Haifeng Wang. 2019. Multi-agent learning for neu- ral machine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Process- ing (EMNLP-IJCNLP), pages 856-865, Hong Kong, China. 
Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Findings of the 2017 conference on machine translation (WMT17)", "authors": [ { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Rajen", "middle": [], "last": "Chatterjee", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Federmann", "suffix": "" }, { "first": "Yvette", "middle": [], "last": "Graham", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Shujian", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Huck", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Varvara", "middle": [], "last": "Logacheva", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" }, { "first": "Matteo", "middle": [], "last": "Negri", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Post", "suffix": "" }, { "first": "Raphael", "middle": [], "last": "Rubino", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Turchi", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Second Conference on Machine Translation", "volume": "", "issue": "", "pages": "169--214", "other_ids": { "DOI": [ "10.18653/v1/W17-4717" ] }, "num": null, "urls": [], "raw_text": "Ond\u0159ej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Shujian Huang, Matthias Huck, Philipp Koehn, Qun Liu, Varvara Lo- gacheva, Christof Monz, Matteo Negri, Matt Post, Raphael Rubino, Lucia Specia, and Marco Turchi. 2017. Findings of the 2017 conference on machine translation (WMT17). In Proceedings of the Sec- ond Conference on Machine Translation, pages 169- 214, Copenhagen, Denmark. 
Association for Com- putational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Findings of the 2016 conference on machine translation", "authors": [ { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Rajen", "middle": [], "last": "Chatterjee", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Federmann", "suffix": "" }, { "first": "Yvette", "middle": [], "last": "Graham", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Huck", "suffix": "" }, { "first": "Antonio", "middle": [ "Jimeno" ], "last": "Yepes", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Varvara", "middle": [], "last": "Logacheva", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" }, { "first": "Matteo", "middle": [], "last": "Negri", "suffix": "" }, { "first": "Aur\u00e9lie", "middle": [], "last": "N\u00e9v\u00e9ol", "suffix": "" }, { "first": "Mariana", "middle": [], "last": "Neves", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Popel", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Post", "suffix": "" }, { "first": "Raphael", "middle": [], "last": "Rubino", "suffix": "" }, { "first": "Carolina", "middle": [], "last": "Scarton", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Turchi", "suffix": "" }, { "first": "Karin", "middle": [], "last": "Verspoor", "suffix": "" }, { "first": "Marcos", "middle": [], "last": "Zampieri", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the First Conference on Machine Translation", "volume": "2", "issue": "", "pages": "131--198", "other_ids": { "DOI": [ "10.18653/v1/W16-2301" ] }, "num": null, "urls": [], "raw_text": "Ond\u0159ej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, An- tonio Jimeno Yepes, Philipp Koehn, Varvara Lo- gacheva, Christof Monz, Matteo Negri, Aur\u00e9lie N\u00e9v\u00e9ol, Mariana Neves, Martin Popel, Matt Post, Raphael Rubino, Carolina Scarton, Lucia Spe- cia, Marco Turchi, Karin Verspoor, and Marcos Zampieri. 2016. Findings of the 2016 conference on machine translation. In Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers, pages 131-198, Berlin, Ger- many. Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Findings of the 2018 conference on machine translation (WMT18)", "authors": [ { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Federmann", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Fishel", "suffix": "" }, { "first": "Yvette", "middle": [], "last": "Graham", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Third Conference on Machine Translation: Shared Task Papers", "volume": "", "issue": "", "pages": "272--303", "other_ids": { "DOI": [ "10.18653/v1/W18-6401" ] }, "num": null, "urls": [], "raw_text": "Ond\u0159ej Bojar, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Philipp Koehn, and Christof Monz. 2018. Findings of the 2018 con- ference on machine translation (WMT18). 
In Pro- ceedings of the Third Conference on Machine Trans- lation: Shared Task Papers, pages 272-303, Bel- gium, Brussels. Association for Computational Lin- guistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "An empirical study of smoothing techniques for language modeling", "authors": [ { "first": "F", "middle": [], "last": "Stanley", "suffix": "" }, { "first": "Joshua", "middle": [], "last": "Chen", "suffix": "" }, { "first": "", "middle": [], "last": "Goodman", "suffix": "" } ], "year": 1996, "venue": "34th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "310--318", "other_ids": { "DOI": [ "10.3115/981863.981904" ] }, "num": null, "urls": [], "raw_text": "Stanley F. Chen and Joshua Goodman. 1996. An em- pirical study of smoothing techniques for language modeling. In 34th Annual Meeting of the Associa- tion for Computational Linguistics, pages 310-318, Santa Cruz, California, USA. Association for Com- putational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "State-of-the-art speech recognition with sequence-to-sequence models", "authors": [ { "first": "C", "middle": [], "last": "Chiu", "suffix": "" }, { "first": "T", "middle": [ "N" ], "last": "Sainath", "suffix": "" }, { "first": "Y", "middle": [], "last": "Wu", "suffix": "" }, { "first": "R", "middle": [], "last": "Prabhavalkar", "suffix": "" }, { "first": "P", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Z", "middle": [], "last": "Chen", "suffix": "" }, { "first": "A", "middle": [], "last": "Kannan", "suffix": "" }, { "first": "R", "middle": [ "J" ], "last": "Weiss", "suffix": "" }, { "first": "K", "middle": [], "last": "Rao", "suffix": "" }, { "first": "E", "middle": [], "last": "Gonina", "suffix": "" }, { "first": "N", "middle": [], "last": "Jaitly", "suffix": "" }, { "first": "B", "middle": [], "last": "Li", "suffix": "" }, { "first": "J", "middle": [], "last": "Chorowski", "suffix": "" }, { "first": "M", "middle": [], "last": "Bacchiani", "suffix": "" } ], "year": 2018, "venue": "2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", "volume": "", "issue": "", "pages": "4774--4778", "other_ids": { "DOI": [ "10.1109/ICASSP.2018.8462105" ] }, "num": null, "urls": [], "raw_text": "C. Chiu, T. N. Sainath, Y. Wu, R. Prabhavalkar, P. Nguyen, Z. Chen, A. Kannan, R. J. Weiss, K. Rao, E. Gonina, N. Jaitly, B. Li, J. Chorowski, and M. Bacchiani. 2018. State-of-the-art speech recognition with sequence-to-sequence models. 
In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4774-4778.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation", "authors": [ { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Bart", "middle": [], "last": "Van Merri\u00ebnboer", "suffix": "" }, { "first": "Caglar", "middle": [], "last": "Gulcehre", "suffix": "" }, { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Fethi", "middle": [], "last": "Bougares", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1724--1734", "other_ids": { "DOI": [ "10.3115/v1/D14-1179" ] }, "num": null, "urls": [], "raw_text": "Kyunghyun Cho, Bart van Merri\u00ebnboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724-1734, Doha, Qatar. Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A comparison of the enhanced Good-Turing and deleted estimation methods for estimating probabilities of English bigrams", "authors": [ { "first": "W", "middle": [], "last": "Kenneth", "suffix": "" }, { "first": "William", "middle": [ "A" ], "last": "Church", "suffix": "" }, { "first": "", "middle": [], "last": "Gale", "suffix": "" } ], "year": 1991, "venue": "Computer Speech and Language", "volume": "5", "issue": "", "pages": "19--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenneth W. Church and William A. Gale. 1991. A comparison of the enhanced Good-Turing and deleted estimation methods for estimating probabilities of English bigrams. Computer Speech and Language, 5:19-54.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "On the Lambert W function", "authors": [ { "first": "M", "middle": [], "last": "Robert", "suffix": "" }, { "first": "", "middle": [], "last": "Corless", "suffix": "" }, { "first": "H", "middle": [], "last": "Gaston", "suffix": "" }, { "first": "", "middle": [], "last": "Gonnet", "suffix": "" }, { "first": "E", "middle": [ "G" ], "last": "David", "suffix": "" }, { "first": "", "middle": [], "last": "Hare", "suffix": "" }, { "first": "J", "middle": [], "last": "David", "suffix": "" }, { "first": "Donald", "middle": [ "E" ], "last": "Jeffrey", "suffix": "" }, { "first": "", "middle": [], "last": "Knuth", "suffix": "" } ], "year": 1996, "venue": "Advances in Computational mathematics", "volume": "5", "issue": "1", "pages": "329--359", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert M Corless, Gaston H Gonnet, David EG Hare, David J Jeffrey, and Donald E Knuth. 1996. On the Lambert W function.
Advances in Computational mathematics, 5(1):329-359.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Understanding back-translation at scale", "authors": [ { "first": "Sergey", "middle": [], "last": "Edunov", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" }, { "first": "David", "middle": [], "last": "Grangier", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "489--500", "other_ids": { "DOI": [ "10.18653/v1/D18-1045" ] }, "num": null, "urls": [], "raw_text": "Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 489-500, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Soft contextual data augmentation for neural machine translation", "authors": [ { "first": "Fei", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Jinhua", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Lijun", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Yingce", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Xueqi", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Wengang", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Tie-Yan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5539--5544", "other_ids": { "DOI": [ "10.18653/v1/P19-1555" ] }, "num": null, "urls": [], "raw_text": "Fei Gao, Jinhua Zhu, Lijun Wu, Yingce Xia, Tao Qin, Xueqi Cheng, Wengang Zhou, and Tie-Yan Liu. 2019. Soft contextual data augmentation for neural machine translation. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics, pages 5539-5544, Florence, Italy. 
Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A convolutional encoder model for neural machine translation", "authors": [ { "first": "Jonas", "middle": [], "last": "Gehring", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" }, { "first": "David", "middle": [], "last": "Grangier", "suffix": "" }, { "first": "Yann", "middle": [], "last": "Dauphin", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "123--135", "other_ids": { "DOI": [ "10.18653/v1/P17-1012" ] }, "num": null, "urls": [], "raw_text": "Jonas Gehring, Michael Auli, David Grangier, and Yann Dauphin. 2017a. A convolutional encoder model for neural machine translation. In Proceed- ings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 123-135, Vancouver, Canada. Associa- tion for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Convolutional sequence to sequence learning", "authors": [ { "first": "Jonas", "middle": [], "last": "Gehring", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" }, { "first": "David", "middle": [], "last": "Grangier", "suffix": "" }, { "first": "Denis", "middle": [], "last": "Yarats", "suffix": "" }, { "first": "Yann", "middle": [ "N" ], "last": "Dauphin", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 34th International Conference on Machine Learning", "volume": "70", "issue": "", "pages": "1243--1252", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017b. Convolutional sequence to sequence learning. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1243-1252, International Convention Centre, Sydney, Australia. PMLR.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Distilling the knowledge in a neural network", "authors": [ { "first": "Geoffrey", "middle": [ "E" ], "last": "Hinton", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. CoRR, abs/1503.02531.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Interpolated estimation of Markov source parameters from sparse data", "authors": [ { "first": "Fred", "middle": [], "last": "Jelinek", "suffix": "" }, { "first": "Robert", "middle": [ "L" ], "last": "Mercer", "suffix": "" } ], "year": 1980, "venue": "Proceedings, Workshop on Pattern Recognition in Practice", "volume": "", "issue": "", "pages": "381--397", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fred Jelinek and Robert L. Mercer. 1980. Interpolated estimation of Markov source parameters from sparse data. In Edzard S. Gelsema and Laveen N. Kanal, editors, Proceedings, Workshop on Pattern Recog- nition in Practice, pages 381-397. 
North Holland, Amsterdam.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Estimation of probabilities from sparse data for the language model component of a speech recognizer", "authors": [ { "first": "M", "middle": [], "last": "Slava", "suffix": "" }, { "first": "", "middle": [], "last": "Katz", "suffix": "" } ], "year": 1987, "venue": "IEEE Trans. Acoustics, Speech, and Signal Processing", "volume": "35", "issue": "", "pages": "400--401", "other_ids": {}, "num": null, "urls": [], "raw_text": "Slava M. Katz. 1987. Estimation of probabilities from sparse data for the language model component of a speech recognizer. IEEE Trans. Acoustics, Speech, and Signal Processing, 35:400-401.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Sequencelevel knowledge distillation", "authors": [ { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Rush", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1317--1327", "other_ids": { "DOI": [ "10.18653/v1/D16-1139" ] }, "num": null, "urls": [], "raw_text": "Yoon Kim and Alexander M. Rush. 2016. Sequence- level knowledge distillation. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 1317-1327, Austin, Texas. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Improved backing-off for m-gram language modeling", "authors": [ { "first": "Reinhard", "middle": [], "last": "Kneser", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 1995, "venue": "IEEE International Conference on Acoustics, Speech, and Signal Processing", "volume": "", "issue": "", "pages": "181--184", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reinhard Kneser and Hermann Ney. 1995. Im- proved backing-off for m-gram language model- ing. In IEEE International Conference on Acoustics, Speech, and Signal Processing, pages 181-184, De- troit, Michigan, USA.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Effective approaches to attention-based neural machine translation", "authors": [ { "first": "Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Pham", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1412--1421", "other_ids": { "DOI": [ "10.18653/v1/D15-1166" ] }, "num": null, "urls": [], "raw_text": "Thang Luong, Hieu Pham, and Christopher D. Man- ning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natu- ral Language Processing, pages 1412-1421, Lis- bon, Portugal. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Gregory", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "CoRR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 26th International Conference on Neural Information Processing Systems", "volume": "2", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Cor- rado, and Jeffrey Dean. 2013b. Distributed repre- sentations of words and phrases and their composi- tionality. In Proceedings of the 26th International Conference on Neural Information Processing Sys- tems -Volume 2, NIPS'13, pages 3111-3119, USA. Curran Associates Inc.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "When does label smoothing help? CoRR", "authors": [ { "first": "Rafael", "middle": [], "last": "M\u00fcller", "suffix": "" }, { "first": "Simon", "middle": [], "last": "Kornblith", "suffix": "" }, { "first": "Geoffrey", "middle": [ "E" ], "last": "", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rafael M\u00fcller, Simon Kornblith, and Geoffrey E. Hin- ton. 2019. When does label smoothing help? CoRR, abs/1906.02629.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "fairseq: A fast, extensible toolkit for sequence modeling", "authors": [ { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Edunov", "suffix": "" }, { "first": "Alexei", "middle": [], "last": "Baevski", "suffix": "" }, { "first": "Angela", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Gross", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Ng", "suffix": "" }, { "first": "David", "middle": [], "last": "Grangier", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" } ], "year": 2019, "venue": "Proceedings of NAACL-HLT 2019: Demonstrations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. 
In Proceedings of NAACL-HLT 2019: Demonstrations.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Scaling neural machine translation", "authors": [ { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Edunov", "suffix": "" }, { "first": "David", "middle": [], "last": "Grangier", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Third Conference on Machine Translation: Research Papers", "volume": "", "issue": "", "pages": "1--9", "other_ids": { "DOI": [ "10.18653/v1/W18-6301" ] }, "num": null, "urls": [], "raw_text": "Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine trans- lation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 1-9, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": { "DOI": [ "10.3115/1073083.1073135" ] }, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Regularizing neural networks by penalizing confident output distributions", "authors": [ { "first": "Gabriel", "middle": [], "last": "Pereyra", "suffix": "" }, { "first": "George", "middle": [], "last": "Tucker", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Chorowski", "suffix": "" }, { "first": "Lukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Geoffrey", "middle": [ "E" ], "last": "Hinton", "suffix": "" } ], "year": 2017, "venue": "5th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gabriel Pereyra, George Tucker, Jan Chorowski, Lukasz Kaiser, and Geoffrey E. Hinton. 2017. Regu- larizing neural networks by penalizing confident out- put distributions. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Workshop Track Proceed- ings.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "An overview of multitask learning in deep neural networks", "authors": [ { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" } ], "year": 2017, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Ruder. 2017. An overview of multi- task learning in deep neural networks. 
ArXiv, abs/1706.05098.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Improved techniques for training gans", "authors": [ { "first": "Tim", "middle": [], "last": "Salimans", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Goodfellow", "suffix": "" }, { "first": "Wojciech", "middle": [], "last": "Zaremba", "suffix": "" }, { "first": "Vicki", "middle": [], "last": "Cheung", "suffix": "" }, { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Xi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Xi", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2016, "venue": "Advances in Neural Information Processing Systems", "volume": "29", "issue": "", "pages": "2234--2242", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, Xi Chen, and Xi Chen. 2016. Improved techniques for training gans. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Informa- tion Processing Systems 29, pages 2234-2242. Cur- ran Associates, Inc.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Continuous space language models", "authors": [ { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" } ], "year": 2007, "venue": "Comput. Speech Lang", "volume": "21", "issue": "3", "pages": "492--518", "other_ids": { "DOI": [ "10.1016/j.csl.2006.09.003" ] }, "num": null, "urls": [], "raw_text": "Holger Schwenk. 2007. Continuous space language models. Comput. Speech Lang., 21(3):492-518.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Continuous space language models for statistical machine translation", "authors": [ { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Dechelotte", "suffix": "" }, { "first": "Jean-Luc", "middle": [], "last": "Gauvain", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions", "volume": "", "issue": "", "pages": "723--730", "other_ids": {}, "num": null, "urls": [], "raw_text": "Holger Schwenk, Daniel Dechelotte, and Jean-Luc Gauvain. 2006. Continuous space language models for statistical machine translation. In Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions, pages 723-730, Sydney, Australia. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Neural machine translation of rare words with subword units", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1715--1725", "other_ids": { "DOI": [ "10.18653/v1/P16-1162" ] }, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715- 1725, Berlin, Germany. 
Association for Computa- tional Linguistics.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "A mathematical theory of communication", "authors": [ { "first": "Claude", "middle": [ "E" ], "last": "Shannon", "suffix": "" } ], "year": 1948, "venue": "The Bell System Technical Journal", "volume": "27", "issue": "", "pages": "623--656", "other_ids": {}, "num": null, "urls": [], "raw_text": "Claude E. Shannon. 1948. A mathematical theory of communication. The Bell System Technical Journal, 27:379-423, 623-656.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Dropout: A simple way to prevent neural networks from overfitting", "authors": [ { "first": "Nitish", "middle": [], "last": "Srivastava", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Krizhevsky", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" } ], "year": 2014, "venue": "Journal of Machine Learning Research", "volume": "15", "issue": "", "pages": "1929--1958", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Re- search, 15:1929-1958.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "On NMT search errors and model errors: Cat got your tongue?", "authors": [ { "first": "Felix", "middle": [], "last": "Stahlberg", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Byrne", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "3356--3362", "other_ids": { "DOI": [ "10.18653/v1/D19-1331" ] }, "num": null, "urls": [], "raw_text": "Felix Stahlberg and Bill Byrne. 2019. On NMT search errors and model errors: Cat got your tongue? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 3356- 3362, Hong Kong, China. Association for Computa- tional Linguistics.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Lstm neural networks for language modeling", "authors": [ { "first": "Martin", "middle": [], "last": "Sundermeyer", "suffix": "" }, { "first": "Ralf", "middle": [], "last": "Schl\u00fcter", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2012, "venue": "Interspeech", "volume": "", "issue": "", "pages": "194--197", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin Sundermeyer, Ralf Schl\u00fcter, and Hermann Ney. 2012. Lstm neural networks for language modeling. In Interspeech, pages 194-197, Portland, OR, USA.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Sequence to sequence learning with neural networks", "authors": [ { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Quoc V", "middle": [], "last": "Le", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014a. 
Sequence to sequence learning with neural networks. In Z. Ghahramani, M. Welling, C. Cortes, N. D.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Advances in Neural Information Processing Systems", "authors": [ { "first": "K", "middle": [ "Q" ], "last": "Lawrence", "suffix": "" }, { "first": "", "middle": [], "last": "Weinberger", "suffix": "" } ], "year": null, "venue": "", "volume": "27", "issue": "", "pages": "3104--3112", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 3104-3112. Curran Associates, Inc.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Sequence to sequence learning with neural networks", "authors": [ { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "V", "middle": [], "last": "Quoc", "suffix": "" }, { "first": "", "middle": [], "last": "Le", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 27th International Conference on Neural Information Processing Systems", "volume": "", "issue": "", "pages": "3104--3112", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014b. Sequence to sequence learning with neural networks. In Proceedings of the 27th International Conference on Neural Information Processing Systems -Vol- ume 2, NIPS'14, pages 3104-3112, Cambridge, MA, USA. MIT Press.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Rethinking the inception architecture for computer vision", "authors": [ { "first": "C", "middle": [], "last": "Szegedy", "suffix": "" }, { "first": "V", "middle": [], "last": "Vanhoucke", "suffix": "" }, { "first": "S", "middle": [], "last": "Ioffe", "suffix": "" }, { "first": "J", "middle": [], "last": "Shlens", "suffix": "" }, { "first": "Z", "middle": [], "last": "Wojna", "suffix": "" } ], "year": 2016, "venue": "2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "volume": "", "issue": "", "pages": "2818--2826", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. 2016. Rethinking the inception architec- ture for computer vision. In 2016 IEEE Confer- ence on Computer Vision and Pattern Recognition (CVPR), pages 2818-2826.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Going deeper with convolutions", "authors": [ { "first": "C", "middle": [], "last": "Szegedy", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yangqing", "middle": [], "last": "Jia", "suffix": "" }, { "first": "P", "middle": [], "last": "Sermanet", "suffix": "" }, { "first": "S", "middle": [], "last": "Reed", "suffix": "" }, { "first": "D", "middle": [], "last": "Anguelov", "suffix": "" }, { "first": "D", "middle": [], "last": "Erhan", "suffix": "" }, { "first": "V", "middle": [], "last": "Vanhoucke", "suffix": "" }, { "first": "A", "middle": [], "last": "Rabinovich", "suffix": "" } ], "year": 2015, "venue": "2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "volume": "", "issue": "", "pages": "1--9", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Szegedy, Wei Liu, Yangqing Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. 2015. Going deeper with convolu- tions. 
In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1-9.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "30", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 30, pages 5998-6008. Curran Asso- ciates, Inc.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Beyond knowledge distillation: Collaborative learning for bidirectional model assistance", "authors": [ { "first": "Jinzhuo", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Wenmin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Wen", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2018, "venue": "IEEE Access", "volume": "6", "issue": "", "pages": "39490--39500", "other_ids": { "DOI": [ "10.1109/ACCESS.2018.2854918" ] }, "num": null, "urls": [], "raw_text": "Jinzhuo Wang, Wenmin Wang, and Wen Gao. 2018. Beyond knowledge distillation: Collaborative learn- ing for bidirectional model assistance. IEEE Access, 6:39490-39500.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Improving back-translation with uncertainty-based confidence estimation", "authors": [ { "first": "Shuo", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Chao", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Huanbo", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "791--802", "other_ids": { "DOI": [ "10.18653/v1/D19-1073" ] }, "num": null, "urls": [], "raw_text": "Shuo Wang, Yang Liu, Chao Wang, Huanbo Luan, and Maosong Sun. 2019. Improving back-translation with uncertainty-based confidence estimation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 791- 802, Hong Kong, China. 
Association for Computational Linguistics.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "DisturbLabel: Regularizing CNN on the loss layer", "authors": [ { "first": "Lingxi", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Jingdong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Zhen", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Meng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Qi", "middle": [], "last": "Tian", "suffix": "" } ], "year": 2016, "venue": "CVPR", "volume": "", "issue": "", "pages": "4753--4762", "other_ids": { "DOI": [ "10.1109/CVPR.2016.514" ] }, "num": null, "urls": [], "raw_text": "Lingxi Xie, Jingdong Wang, Zhen Wei, Meng Wang, and Qi Tian. 2016. DisturbLabel: Regularizing CNN on the loss layer. In CVPR, pages 4753-4762.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Synchronous bidirectional neural machine translation", "authors": [ { "first": "Long", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Jiajun", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Chengqing", "middle": [], "last": "Zong", "suffix": "" } ], "year": 2019, "venue": "Transactions of the Association for Computational Linguistics", "volume": "7", "issue": "", "pages": "91--105", "other_ids": { "DOI": [ "10.1162/tacl_a_00256" ] }, "num": null, "urls": [], "raw_text": "Long Zhou, Jiajun Zhang, and Chengqing Zong. 2019. Synchronous bidirectional neural machine translation. Transactions of the Association for Computational Linguistics, 7:91-105.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Activation maximization generative adversarial nets", "authors": [ { "first": "Zhiming", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Han", "middle": [], "last": "Cai", "suffix": "" }, { "first": "Shu", "middle": [], "last": "Rong", "suffix": "" }, { "first": "Yuxuan", "middle": [], "last": "Song", "suffix": "" }, { "first": "Weinan", "middle": [], "last": "Kan Ren", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2017, "venue": "ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhiming Zhou, Han Cai, Shu Rong, Yuxuan Song, Kan Ren, Weinan Zhang, Jun Wang, and Yong Yu. 2017. Activation maximization generative adversarial nets. In ICLR.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "type_str": "figure", "text": "One can use lim_{m \u2192 0} q\u0302_v^CFD to avoid division by zero." }, "FIGREF1": { "num": null, "uris": null, "type_str": "figure", "text": "Graphs of optimal q\u0302_v w.r.t. m or m'. Note the logarithmic scale on the horizontal axes, with m \u2208 [0, 1] and m' \u2265 0. In order to obtain numerical solutions for q\u0302^STN" }, "FIGREF2": { "num": null, "uris": null, "type_str": "figure", "text": "Smoothing with RND versus ENT on de-en. m is set to 0.1. The development and test perplexities of the helper LM are 53.8 and 46.5." }, "FIGREF3": { "num": null, "uris": null, "type_str": "figure", "text": "Smoothing different percentages of tokens." }, "FIGREF5": { "num": null, "uris": null, "type_str": "figure", "text": "Discounting different probability masses." }, "FIGREF6": { "num": null, "uris": null, "type_str": "figure", "text": "Smoothing with LM and MT posteriors. Panel (b): de-en, MT posterior as rv, m = 0.1."
}, "FIGREF8": { "num": null, "uris": null, "type_str": "figure", "text": "BLEU versus beam size on de-en." }, "FIGREF9": { "num": null, "uris": null, "type_str": "figure", "text": "Relationships between test BLEU and dev metrics. 79 converged es-en models with different label smoothing hyperparameters are scattered." } } } }