{ "paper_id": "Y18-1034", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:35:56.601864Z" }, "title": "Reducing Odd Generation from Neural Headline Generation", "authors": [ { "first": "Shun", "middle": [], "last": "Kiyono", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tohoku University", "location": {} }, "email": "kiyono@ecei.tohoku.ac.jp" }, { "first": "Sho", "middle": [], "last": "Takase", "suffix": "", "affiliation": { "laboratory": "NTT Communication Science Laboratories", "institution": "", "location": {} }, "email": "takase.sho@lab.ntt.co.jp" }, { "first": "Jun", "middle": [], "last": "Suzuki", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tohoku University", "location": {} }, "email": "jun.suzuki@ecei.tohoku.ac.jp" }, { "first": "Naoaki", "middle": [], "last": "Okazaki", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tokyo Institute of Technology", "location": {} }, "email": "okazaki@c.titech.ac.jp" }, { "first": "Kentaro", "middle": [], "last": "Inui", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tohoku University", "location": {} }, "email": "inui@ecei.tohoku.ac.jp" }, { "first": "Masaaki", "middle": [], "last": "Nagata", "suffix": "", "affiliation": { "laboratory": "NTT Communication Science Laboratories", "institution": "", "location": {} }, "email": "nagata.masaaki@lab.ntt.co.jp" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The Encoder-Decoder model is widely used in natural language generation tasks. However, the model sometimes suffers from repeated redundant generation, misses important phrases, and includes irrelevant entities. Toward solving these problems we propose a novel sourceside token prediction module. Our method jointly estimates the probability distributions over source and target vocabularies to capture the correspondence between source and target tokens. Experiments show that the proposed model outperforms the current state-of-the-art method in the headline generation task. We also show that our method can learn a reasonable token-wise correspondence without knowing any true alignment 1. * This work is a product of collaborative research program of Tohoku University and NTT Communication Science Laboratories. 1 Our code for reproducing the experiments is available at https://github.com/butsugiri/UAM Unfortunately, as often discussed in the community, EncDec sometimes generates sentences with repeating phrases or completely irrelevant phrases and the reason for their generation cannot be interpreted intuitively. Moreover, EncDec also sometimes generates sentences that lack important phrases. We refer to these observations as the odd generation problem (odd-gen) in EncDec. The following table shows typical examples of odd-gen actually generated by a typical EncDec. (1) Repeating Phrases Gold: duran duran group fashionable again EncDec: duran duran duran duran (2) Lack of Important Phrases Gold: graf says goodbye to tennis due to injuries EncDec: graf retires (3) Irrelevant Phrases Gold: u.s. troops take first position in serb-held bosnia EncDec: precede sarajevo This paper tackles for reducing the odd-gen in the task of abstractive summarization. In machine translation literature, coverage (Tu et al., 2016; Mi et al., 2016) and reconstruction (Tu et al., 2017) are promising extensions of EncDec to address the odd-gen. 
These models take advantage of the fact that machine translation is the loss-less generation (lossless-gen) task, where the semantic information of source-and target-side sequence is equivalent. However, as discussed in previous studies, abstractive summarization is a lossy-compression generation (lossy-gen) task. Here, the task is to delete certain semantic information from the source to generate target-side sequence.", "pdf_parse": { "paper_id": "Y18-1034", "_pdf_hash": "", "abstract": [ { "text": "The Encoder-Decoder model is widely used in natural language generation tasks. However, the model sometimes suffers from repeated redundant generation, misses important phrases, and includes irrelevant entities. Toward solving these problems we propose a novel sourceside token prediction module. Our method jointly estimates the probability distributions over source and target vocabularies to capture the correspondence between source and target tokens. Experiments show that the proposed model outperforms the current state-of-the-art method in the headline generation task. We also show that our method can learn a reasonable token-wise correspondence without knowing any true alignment 1. * This work is a product of collaborative research program of Tohoku University and NTT Communication Science Laboratories. 1 Our code for reproducing the experiments is available at https://github.com/butsugiri/UAM Unfortunately, as often discussed in the community, EncDec sometimes generates sentences with repeating phrases or completely irrelevant phrases and the reason for their generation cannot be interpreted intuitively. Moreover, EncDec also sometimes generates sentences that lack important phrases. We refer to these observations as the odd generation problem (odd-gen) in EncDec. The following table shows typical examples of odd-gen actually generated by a typical EncDec. (1) Repeating Phrases Gold: duran duran group fashionable again EncDec: duran duran duran duran (2) Lack of Important Phrases Gold: graf says goodbye to tennis due to injuries EncDec: graf retires (3) Irrelevant Phrases Gold: u.s. troops take first position in serb-held bosnia EncDec: precede sarajevo This paper tackles for reducing the odd-gen in the task of abstractive summarization. In machine translation literature, coverage (Tu et al., 2016; Mi et al., 2016) and reconstruction (Tu et al., 2017) are promising extensions of EncDec to address the odd-gen. These models take advantage of the fact that machine translation is the loss-less generation (lossless-gen) task, where the semantic information of source-and target-side sequence is equivalent. However, as discussed in previous studies, abstractive summarization is a lossy-compression generation (lossy-gen) task. Here, the task is to delete certain semantic information from the source to generate target-side sequence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The Encoder-Decoder model with the attention mechanism (EncDec) (Sutskever et al., 2014; Cho et al., 2014; Luong et al., 2015) has been an epoch-making development that has led to great progress being made in many natural language generation tasks, such as machine translation , dialog generation (Shang et al., 2015) , and headline generation (Rush et al., 2015) . 
Today, EncDec and its variants are widely used as the predominant baseline method in these tasks.", "cite_spans": [ { "start": 64, "end": 88, "text": "(Sutskever et al., 2014;", "ref_id": null }, { "start": 89, "end": 106, "text": "Cho et al., 2014;", "ref_id": "BIBREF1" }, { "start": 107, "end": 126, "text": "Luong et al., 2015)", "ref_id": "BIBREF2" }, { "start": 297, "end": 317, "text": "(Shang et al., 2015)", "ref_id": null }, { "start": 344, "end": 363, "text": "(Rush et al., 2015)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Therefore the models such as the coverage and the reconstruction cannot work appropriately on abstractive summarization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recently, Zhou et al. (2017) proposed incorporating an additional gate for selecting an appropriate set of words from given source sentence. Moreover, Suzuki and Nagata (2017) introduced a module for estimating the upper-bound frequency of the target vocabulary given a source sentence. These methods essentially address individual parts of the odd-gen in lossy-gen tasks.", "cite_spans": [ { "start": 10, "end": 28, "text": "Zhou et al. (2017)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In contrast to the previous studies, we propose a novel approach that addresses the entire odd-gen in lossy-gen tasks. The basic idea underlying our method is to add an auxiliary module to EncDec for modeling the token-wise correspondence of the source and target, which includes drops of sourceside tokens. We refer to our additional module as the Source-side Prediction Module (SPM). We add the SPM to the decoder output layer to directly estimate the correspondence during the training of EncDec.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We conduct experiments on a widely-used headline generation dataset (Rush et al., 2015) and evaluate the effectiveness of the proposed method. We show that the proposed method outperforms the current state-of-the-art method on this dataset. We also show that our method is able to learn a reasonable token-wise correspondence without knowing any true alignment, which may help reduce the odd-gen.", "cite_spans": [ { "start": 68, "end": 87, "text": "(Rush et al., 2015)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We address the headline generation task introduced in Rush et al. (2015) , which is a typical lossy-gen task. The source (input) is the first sentence of a news article, and the target (output) is the headline of the article. We say I and J represent the numbers of tokens in the source and target, respectively. An important assumption of the headline generation (lossy-gen) task is that the relation I > J always holds, namely, the target must be shorter than the source. This implies that we need to optimally select salient concepts included in given source sentence. 
This selection indeed increases a difficulty of the headline generation for EncDec.", "cite_spans": [ { "start": 66, "end": 72, "text": "(2015)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Lossy-compression Generation", "sec_num": "2" }, { "text": "Note that it is an essentially difficult problem for EncDec to learn an appropriate paraphrasing of each concept in the source, which can be a main reason for irrelevant generation. In addition, EncDec also needs to manage the selection of concepts in the source; e.g., discarding an excessive number of concepts from the source would yield a headline that was too short, and utilizing the same concept multiple times may lead a redundant headline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lossy-compression Generation", "sec_num": "2" }, { "text": "3 Encoder-Decoder Model with Attention Mechanism (EncDec)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lossy-compression Generation", "sec_num": "2" }, { "text": "This section briefly describes EncDec as the baseline model of our method 2 . To explain EncDec concisely, let us consider that the input of EncDec is a sequence of one-hot vectors X obtained from the given source-side sentence. Let x i \u2208 {0, 1} Vs represent the one-hot vector of the i-th token in X, where V s represents the number of instances (tokens) in the source-side vocabulary V s . We introduce x 1:I to represent (x 1 , . . . , x I ) by a short notation, namely, X = x 1:I . Similarly, let y j \u2208 {0, 1} Vt represent the one-hot vector of the j-th token in target-side sequence Y , where V t is the number of instances (tokens) in the target-side vocabulary V t . Here, we define Y as always containing two additional onehot vectors of special tokens bos for y 0 and eos for y J+1 . Thus, Y = y 0:J+1 ; its length is always J + 2. Then, EncDec models the following conditional probability:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lossy-compression Generation", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(Y |X) = J+1 j=1 p(y j |y 0:j\u22121 , X).", "eq_num": "(1)" } ], "section": "Lossy-compression Generation", "sec_num": "2" }, { "text": "EncDec encodes source one-hot vector sequence x 1:I , and generates the hidden state sequence h 1:I , where h i \u2208 R H for all i, and H is the size of the hidden state. Then, the decoder with the attention mechanism computes the vector z j \u2208 R H at every decoding time step j as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lossy-compression Generation", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "z j = AttnDec(y j\u22121 , h 1:I ).", "eq_num": "(2)" } ], "section": "Lossy-compression Generation", "sec_num": "2" }, { "text": "We apply RNN cells to both the encoder and decoder. 
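As a concrete reference for Equations 1 and 2, the following minimal PyTorch-style sketch encodes a toy source sequence and performs a single AttnDec step. The LSTM cells, toy sizes, and variable names are illustrative assumptions rather than the authors' released implementation; the attention inside the step follows the bilinear ("general") form detailed in Appendix C, and input feeding (Appendix B) is omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

H, V_s, V_t = 8, 100, 120                  # toy hidden size and vocabulary sizes
emb_s, emb_t = nn.Embedding(V_s, H), nn.Embedding(V_t, H)
encoder = nn.LSTM(H, H, batch_first=True)  # stands in for the BiRNN encoder of Appendix A
decoder_cell = nn.LSTMCell(H, H)
W_alpha = nn.Linear(H, H, bias=False)      # "general" (bilinear) attention scoring
W_s = nn.Linear(2 * H, H, bias=False)

x = torch.tensor([[1, 3, 5, 2]])           # source token ids x_1:I, shape (1, I)
h, _ = encoder(emb_s(x))                   # hidden states h_1:I, shape (1, I, H)

def attn_dec_step(y_prev, state):
    """One decoding step: returns the attentional hidden state z_j of Eq. (2)."""
    s, c = decoder_cell(emb_t(y_prev), state)               # plain RNN step
    scores = torch.einsum('bih,bh->bi', W_alpha(h), s)      # attention scores over source positions
    alpha = F.softmax(scores, dim=-1)                       # attention weights
    context = torch.einsum('bi,bih->bh', alpha, h)          # context vector
    z = torch.tanh(W_s(torch.cat([context, s], dim=-1)))    # attentional state z_j
    return z, (s, c)

z_1, state = attn_dec_step(torch.tensor([0]), None)         # y_0 = <bos> (id 0 in this toy setup)
```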
Then, EncDec generates a target-side token based on the probability distribution o j \u2208 R Vt as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lossy-compression Generation", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "o j = softmax(W o z j + b o ),", "eq_num": "(3)" } ], "section": "Lossy-compression Generation", "sec_num": "2" }, { "text": "where W o \u2208 R Vt\u00d7H is a parameter matrix and b o \u2208 R Vt is a bias term 3 . 32nd Pacific Asia Conference on Language, Information and Computation Hong Kong, 1-3 December 2018", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lossy-compression Generation", "sec_num": "2" }, { "text": "Copyright 2018 by the authors The SPM predicts the probability distribution over the source vocabulary q j at each time step j. After predicting all the time steps, the SPM compares the sum of the predictionsq with the sum of the sourceside tokensx as an objective function src .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lossy-compression Generation", "sec_num": "2" }, { "text": "To train EncDec, let D be training data for headline generation that consists of source-headline sentence pairs. Let \u03b8 represent all parameters in EncDec. Our goal is to find the optimal parameter set\u03b8 that minimizes the following objective function G 1 (\u03b8) for the given training data D:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lossy-compression Generation", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "G 1 (\u03b8) = 1 |D| (X,Y )\u2208D trg (Y , X, \u03b8), trg (Y , X, \u03b8) = \u2212 log p(Y |X, \u03b8) .", "eq_num": "(4)" } ], "section": "Lossy-compression Generation", "sec_num": "2" }, { "text": "Since o j for each j is a vector representation of the probabilities of p(\u0177|y 0:j\u22121 , X, \u03b8) over the target vocabularies\u0177 \u2208 V t , we can calculate trg as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lossy-compression Generation", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "trg (Y , X, \u03b8) = \u2212 J+1 j=1 y j \u2022 log o j .", "eq_num": "(5)" } ], "section": "Lossy-compression Generation", "sec_num": "2" }, { "text": "In the inference step, we use the trained parameters to search for the best target sequence. We use beam search to find the target sequence that maximizes the product of the conditional probabilities as described in Equation 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lossy-compression Generation", "sec_num": "2" }, { "text": "In Section 2, we assumed that the selection of concepts in the source is an essential part of the odd-gen. Thus, our basic idea is to extend EncDec that can manage the status of concept utilization during headline generation. More precisely, instead of directly managing concepts since they are not well-defined, We consider to model token-wise correspondence of the source and target, including the information of source-side tokens that cannot be aligned to any target-side tokens. Figure 1 overviews the proposed method, SPM. 
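As a rough illustration of the output layer sketched in Figure 1 (a hypothetical fragment, not the released code), the same attentional state z_j of Equation 2 feeds two softmax heads: the target-side distribution o_j of Equation 3 and the source-side distribution q_j that Equation 6 introduces below.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

H, V_s, V_t = 8, 100, 120
W_o = nn.Linear(H, V_t)                # target-side head: o_j = softmax(W_o z_j + b_o)
W_q = nn.Linear(H, V_s)                # source-side head (SPM): q_j = softmax(W_q z_j + b_q)

z_j = torch.randn(1, H)                # attentional hidden state from AttnDec
o_j = F.softmax(W_o(z_j), dim=-1)      # distribution over the target vocabulary V_t
q_j = F.softmax(W_q(z_j), dim=-1)      # distribution over the source vocabulary V_s
```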
During the training process of EncDec, the decoder estimates the probability distribution over source-side vocabulary, which is q j \u2208 R Vs , in addition to that of the target-side vocabulary, o j \u2208 R Vt , for each time step j. Note that the decoder continues to estimate the distributions up to source sequence length I regardless of target sequence length J. Here, we introduce a special token pad in the target-side vocabulary, and assume that pad is repeatedly generated after finishing the generation of all target-side tokens as correct target tokens. This means that we always assume that the numbers of tokens in the source and target is the same, and thus, our method allows to put one-to-one correspondence into practice in the lossygen task. In this way, EncDec can directly model token-wise correspondence of source-and target-side tokens on the decoder output layer, which includes the information of unaligned source-side tokens by alignment to pad .", "cite_spans": [], "ref_spans": [ { "start": 484, "end": 492, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Proposed Method: Source Prediction Module (SPM)", "sec_num": "4" }, { "text": "Unfortunately, standard headline generation datasets have no information of true one-to-one alignments between source-and target-side tokens. Thus, we develop a novel method for training token-wise correspondence model that takes unsupervised learning approach. Specifically, we minimize sentencelevel loss instead of token-wise alignment loss. We describe the details in the following sections.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Method: Source Prediction Module (SPM)", "sec_num": "4" }, { "text": "In Figure 1 , the module inside the dashed line represents the SPM. First, the SPM calculates a probability distribution over the source vocabulary q j \u2208 R Vs at each time step j in the decoding process by using the following equation:", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 11, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Model Definition", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "q j = softmax(W q z j + b q ),", "eq_num": "(6)" } ], "section": "Model Definition", "sec_num": "4.1" }, { "text": "where W q \u2208 R Vs\u00d7H is a parameter matrix like W o in Equation 3, and b q \u2208 R Vs is a bias term. As described in Section 3, EncDec calculates a probability distribution over the target vocabulary o j from z j . Therefore, EncDec with the SPM jointly estimates the probability distributions over the source and target vocabularies from the same vector z j . Next, we define Y as a concatenation of the onehot vectors of the target-side sequence Y and those of the special token pad of length I \u2212(J +1). Here, y J+1 is a one-hot vector of eos , and y j for each j \u2208 {J + 2, . . . , I} is a one-hot vector of pad . We define Y = Y if and only if J + 1 = I. Note that the length of Y is always no shorter than that of Y , that is, |Y | \u2265 |Y | since headline generation always assumes I > J as described in Section 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Definition", "sec_num": "4.1" }, { "text": "Letx andq be the sum of all one-hot vectors in source sequence x 1:I and all prediction of the SPM q 1:I , respectively; that is,x = I i=1 x i and q = I j=1 q j . 
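A minimal sketch of these quantities, assuming illustrative token ids and a random stand-in for the SPM outputs; the real pipeline additionally keeps the bos and eos conventions of Section 3.

```python
import torch
import torch.nn.functional as F

V_s, PAD = 100, 0
src_ids = torch.tensor([4, 7, 7, 2, 9])          # source tokens x_1:I, here I = 5
tgt_ids = torch.tensor([3, 5, 6])                # target tokens up to <eos>

I = src_ids.size(0)
tgt_padded = F.pad(tgt_ids, (0, I - tgt_ids.size(0)), value=PAD)  # append <pad> up to length I

x_bar = torch.bincount(src_ids, minlength=V_s).float()  # sum of the source one-hot vectors
q = torch.rand(I, V_s)
q = q / q.sum(dim=-1, keepdim=True)                     # stand-in for the SPM outputs q_1:I
q_bar = q.sum(dim=0)                                    # summed predictions over all I steps
```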
Note thatx is a vector representation of the occurrence (or bag-of-words representation) of each source-side vocabulary appearing in the given source sequence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Definition", "sec_num": "4.1" }, { "text": "EncDec with the SPM models the following conditional probability:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Definition", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(Y ,x|X) = p(x|Y , X)p(Y |X).", "eq_num": "(7)" } ], "section": "Model Definition", "sec_num": "4.1" }, { "text": "We define p(Y |X) as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Definition", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(Y |X) = I j=1 p(y j |y 0:j\u22121 , X),", "eq_num": "(8)" } ], "section": "Model Definition", "sec_num": "4.1" }, { "text": "which is identical to p(Y |X) in Equation 1 except for substituting I for J to model the probabilities of pad that appear from j = I \u2212 (J + 1) to j = I. Next, we define p(x|Y , X) as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Definition", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(x|Y , X) = 1 Z exp \u2212 q \u2212x 2 2 C ,", "eq_num": "(9)" } ], "section": "Model Definition", "sec_num": "4.1" }, { "text": "where Z is a normalization term, and C is a hyperparameter that controls the sensitivity of the distribution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Definition", "sec_num": "4.1" }, { "text": "Training Let \u03b3 represent the parameter set of SPM. Then, we define the loss function for SPM as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training and Inference of SPM", "sec_num": "4.2" }, { "text": "src (x, X, Y , \u03b3, \u03b8) = \u2212 log p(x|Y , X, \u03b3, \u03b8) .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training and Inference of SPM", "sec_num": "4.2" }, { "text": "From Equation 9, we can derive src as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training and Inference of SPM", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "src (x, X, Y , \u03b3, \u03b8) = 1 C q \u2212x 2 2 + log(Z).", "eq_num": "(10)" } ], "section": "Training and Inference of SPM", "sec_num": "4.2" }, { "text": "We can discard the second term on the RHS, that is log(Z), since this is independent of \u03b3 and \u03b8.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training and Inference of SPM", "sec_num": "4.2" }, { "text": "Here, we regard the sum of trg and src as an objective loss function of multi-task training. 
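In code, the source-side term of Equation 10 (with the constant log Z dropped) is a scaled squared distance between the summed prediction vector and the bag-of-words source vector, added to the cross-entropy of Equation 5. The following hedged sketch uses illustrative values for C and random stand-ins for the model outputs.

```python
import torch

C = 10.0                                          # sensitivity hyper-parameter (illustrative value)
o = torch.rand(4, 120)
o = o / o.sum(dim=-1, keepdim=True)               # o_1:J+1, decoder output distributions
y = torch.tensor([3, 5, 6, 1])                    # gold target ids, ending with <eos>
l_trg = -torch.log(o[torch.arange(4), y]).sum()   # Eq. (5)

q_bar = torch.rand(100)                           # summed SPM predictions
x_bar = torch.rand(100)                           # bag-of-words source vector
l_src = ((q_bar - x_bar) ** 2).sum() / C          # Eq. (10) without the log Z term

loss = l_trg + l_src                              # the multi-task sum minimized in Eq. (11)
```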
Formally, we train the SPM with EncDec by minimizing the following objective function G 2 :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training and Inference of SPM", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "G 2 (\u03b8, \u03b3) = 1 |D| (X,Y )\u2208D trg (Y , X, \u03b8) + src (x, X, Y , \u03b3, \u03b8)", "eq_num": "(11)" } ], "section": "Training and Inference of SPM", "sec_num": "4.2" }, { "text": "Inference In the inference time, the goal is only to search for the best target sequence. Thus, we do not need to compute SPM during the inference. Similarly, it is also unnecessary to produce pad after generating eos . Thus, the actual computation cost of our method for the standard evaluation is exactly the same as that of the base EncDec.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training and Inference of SPM", "sec_num": "4.2" }, { "text": "Dataset The origin of the headline generation dataset used in our experiments is identical to that used in Rush et al. 2015, namely, the dataset consists of pairs of the first sentence of each article and its headline from the annotated English Gigaword corpus (Napoles et al., 2012). We slightly changed the data preparation procedure to achieve a more realistic and reasonable evaluation since the widely-used provided evaluation dataset already contains unk , which is a replacement of all low frequency words. This is because the data preprocessing script provided by Rush et al. (2015) 4 automatically replaces low frequency words with unk 5 . To penalize unk in system outputs during the evaluation, we removed unk replacement procedure from the preprocessing script. We believe this is more realistic evaluation setting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Settings", "sec_num": "5.1" }, { "text": "Rush et al. (2015) defined the training, validation and test split, which contain approximately 3.8M, 200K and 400K source-headline pairs, respectively. We used the entire training split for training as in 4 https://github.com/facebookarchive/ NAMAS.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Settings", "sec_num": "5.1" }, { "text": "5 In a personal communication with the first author of Zhou et al. 2017, we found that their model decodes unk in the same form as it appears in the test set, and unk had a positive effect on the final performance of the model. the previous studies. We randomly sampled test data and validation data from the validation split since we found that the test split contains many noisy instances. Finally, our validation and test data consist of 8K and 10K source-headline pairs, respectively. Note that they are relatively large compared with the previously used datasets, and they do not contain unk .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Settings", "sec_num": "5.1" }, { "text": "We also evaluated our experiments on the test data used in the previous studies. To the best of our knowledge, two test sets from the Gigaword are publicly available by Rush et al. (2015) 6and Zhou et al. (2017) 7. Here, both test sets contain unk 8 . Evaluation Metric We evaluated the performance by ROUGE-1 (RG-1), ROUGE-2 (RG-2) and ROUGE-L (RG-L) 9 . We report the F1 value as given in a previous study 10 . 
We computed the scores with the official ROUGE script (version 1.5.5).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Settings", "sec_num": "5.1" }, { "text": "Comparative Methods To investigate the effectiveness of the SPM, we evaluate the performance of the EncDec with the SPM. In addition, we investigate whether the SPM improves the performance of the state-of-the-art method: EncDec+sGate. Thus, we compare the following methods on the same training setting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Settings", "sec_num": "5.1" }, { "text": "EncDec This is the implementation of the base model explained in Section 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Settings", "sec_num": "5.1" }, { "text": "EncDec+sGate To reproduce the state-of-the-art method proposed by Zhou et al. (2017), we combined our re-implemented selective gate (sGate) with the encoder of EncDec.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Settings", "sec_num": "5.1" }, { "text": "EncDec+SPM We combined the SPM with the EncDec as explained in Section 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Settings", "sec_num": "5.1" }, { "text": "EncDec+sGate+SPM This is the combination of the SPM with the EncDec+sGate. Implementation Details Table 1 summarizes hyper-parameters and model configurations. We selected the settings commonly-used in the previous studies, e.g., (Rush et al., 2015; Nallapati et al., 2016; Suzuki and Nagata, 2017) .", "cite_spans": [ { "start": 230, "end": 249, "text": "(Rush et al., 2015;", "ref_id": "BIBREF2" }, { "start": 250, "end": 273, "text": "Nallapati et al., 2016;", "ref_id": "BIBREF2" }, { "start": 274, "end": 298, "text": "Suzuki and Nagata, 2017)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 98, "end": 116, "text": "Table 1 summarizes", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Settings", "sec_num": "5.1" }, { "text": "We constructed the vocabulary set using Byte-Pair- et al., 2016) to handle low frequency words, as it is now a common practice in neural machine translation. The BPE merge operations are jointly learned from the source and the target. We set the number of the BPE merge operations at 5, 000. We used the same vocabulary set for both the source V s and the target V t . Table 2 shows the results of previous methods. They often obtained higher ROUGE scores than our models in Gigaword Test (Rush) and Gigaword Test (Zhou). However, this does not immediately imply that our method is inferior to the previous methods. This result is basically derived from the inconsistency of the vocabulary. In detail, our training data does not contain unk because we adopted the BPE to construct the vocabulary. Thus, our experimental setting is severer than that of previous studies with the presence of unk in the datasets of Gigaword Test (Rush) and Gigaword Test (Zhou). This is also demonstrated by the fact that EncDec+sGate obtained a lower score than those reported in the paper of SEASS, which has the same model architecture as EncDec+sGate.", "cite_spans": [ { "start": 51, "end": 64, "text": "et al., 2016)", "ref_id": null } ], "ref_spans": [ { "start": 369, "end": 376, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Settings", "sec_num": "5.1" }, { "text": "The motivation of the SPM is to suppress odd-gen by enabling a one-to-one correspondence between the source and the target. 
Thus, in this section, we investigate whether the SPM reduces odd-gen in comparison to EncDec.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "For a useful quantitative analysis, we should compute the statistics of generated sentences containing oddgen. However, it is hard to detect odd-gen correctly. Instead, we determine a pseudo count of each type of odd-gen as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Does SPM Reduce odd-gen?", "sec_num": "6.1" }, { "text": "We assume that a model causes repeating phrases if the model outputs the same token more than once. Therefore, we compute the frequency of tokens that occur more than once in the generated headlines. However, some phrases might occur more than once in the gold data. To address this case, we subtract the frequency of tokens in the reference headline from the above calculation result. The result of this subtraction is taken to be the number of repeating phrases in each generated headline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Repeating phrases", "sec_num": null }, { "text": "We assume generated headline which is shorter than the gold omits one or more important phrase. Thus, we compute the difference in gold headline length and the generated headline length. Irrelevant phrases We consider that improvements in ROUGE scores indicate a reduction in irrelevant phrases because we believe that ROUGE penalizes irrelevant phrases. Figure 2 shows the number of repeating phrases and lack of important phrases in Gigaword Test (Ours). This figure indicates that EncDec+SPM reduces the odd-gen in comparison to EncDec. Thus, we consider the SPM reduced odd-gen. Figure 3 shows sampled headlines actually generated by EncDec and EncDec+SPM. It is clear that the outputs of EncDec contain odd-gen while those of the EncDec+SPM do not. These examples also demonstrate that SPM successfully reduces odd-gen.", "cite_spans": [], "ref_spans": [ { "start": 355, "end": 363, "text": "Figure 2", "ref_id": "FIGREF2" }, { "start": 583, "end": 592, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Lack of important phrases", "sec_num": null }, { "text": "We visualize the prediction of the SPM and the attention distribution to see the acquired token-wise correspondence between the source and the target.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Visualizing SPM and Attention", "sec_num": "6.2" }, { "text": "Specifically, we feed the source-target pair (X, Y ) to EncDec and EncDec+SPM, and then collect the source-side prediction (q 1:I ) of EncDec+SPM and the attention distribution (\u03b1 1:J ) of EncDec 12 . For source-side prediction, we extracted the probability of each token x i \u2208 X from q j , j \u2208 {1, . . . , I}. Figure 4 shows an example of the heat map 13 . We used Gigaword Test (Ours) as the input. The brackets in the y-axis represent the source-side tokens that are aligned with target-side tokens. We selected the aligned tokens in the following manner: For the attention (Figure 4a ), we select the token with the largest attention value. For the SPM (Figure 4b ), we select the token with the largest probability over the whole vocabulary V s . Figure 4a indicates that most of the attention distribution is concentrated at the end of the sentence. As a result, attention provides poor token-wise correspondence between the source and the target. 
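The aligned tokens shown in the brackets of Figure 4 can be extracted as simple argmaxes over the attention weights and over the SPM distributions, respectively; a small sketch with assumed variable names and toy shapes follows.

```python
import torch

src_tokens = ["tokyo", "stocks", "closed", "higher", "."]   # toy source sequence
alpha = torch.rand(4, len(src_tokens))     # attention weights, one row per target step
alpha = alpha / alpha.sum(dim=-1, keepdim=True)
q = torch.rand(4, 100)                     # SPM distributions q_j over the source vocabulary V_s

# Attention: align each target step with the source *position* of largest weight.
attn_align = [src_tokens[int(i)] for i in alpha.argmax(dim=-1)]
# SPM: align each target step with the source-vocabulary *token* of largest probability,
# then map the ids back through the vocabulary to obtain surface tokens.
spm_align_ids = q.argmax(dim=-1)
```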
For example, target-side tokens \"tokyo\" and \"end\" are both aligned with the source-side sentence period. In contrast, Figure 4b shows that the SPM captures the correspondence between the source and the target. The source sequence \"tokyo stocks closed higher\" is successfully aligned with the target \"tokyo stocks end higher\". Moreover, the SPM aligned unimportant to-32nd Pacific Asia Conference on Language, Information and Computation Hong Kong, 1-3 December 2018", "cite_spans": [], "ref_spans": [ { "start": 311, "end": 319, "text": "Figure 4", "ref_id": null }, { "start": 577, "end": 587, "text": "(Figure 4a", "ref_id": null }, { "start": 657, "end": 667, "text": "(Figure 4b", "ref_id": null }, { "start": 752, "end": 761, "text": "Figure 4a", "ref_id": null }, { "start": 1072, "end": 1081, "text": "Figure 4b", "ref_id": null } ], "eq_spans": [], "section": "Visualizing SPM and Attention", "sec_num": "6.2" }, { "text": "Copyright 2018 by the authors The x-axis and y-axis of the figure correspond to the source and the target sequence respectively. Tokens in the brackets represent source-side tokens that are aligned with target-side tokens at that time step. kens for the headline such as \"straight\" and \"tuesday\" with pad tokens. Thus, this example suggests that the SPM achieves better token-wise correspondence than attention. It is noteworthy that the SPM captured a one-to-one correspondence even though we trained it without correct alignment information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Visualizing SPM and Attention", "sec_num": "6.2" }, { "text": "In the field of neural machine translation, several methods have been proposed to solve the odd-gen. The coverage model (Mi et al., 2016; Tu et al., 2016) forces the decoder to attend to every part of the source sequence to translate all semantic information in the source. The reconstructor (Tu et al., 2017 ) trains the translation model from the target to the source. Moreover, Weng et al. (2017) proposed a method to predict the untranslated words from the decoder at each time step. These methods aim to convert all contents in the source into the target, since machine translation is a lossless-gen task. In contrast, our proposal, SPM, models both paraphrasing and discarding to reduce the odd-gen in the lossy-gen task. We focused on headline generation which is a wellknown lossy-gen task. Recent studies have actively applied the EncDec to this task (Rush et al., 2015; Chopra et al., 2016; Nallapati et al., 2016) . For the headline generation task, Zhou et al. 2017and Suzuki and Nagata (2017) tackled a part of the oddgen. Zhou et al. (2017) incorporated an additional gate (sGate) into the encoder to select appropriate words from the source. Suzuki and Nagata 2017proposed a frequency estimation module to reduce the repeating phrases. Our motivation is similar to theirs, but our goal is to solve all odd-gen components. In addition, we can combine these approaches with the proposed method. In fact, we showed in Section 5.2 that the SPM can improve the performance of sGate with EncDec.", "cite_spans": [ { "start": 120, "end": 137, "text": "(Mi et al., 2016;", "ref_id": "BIBREF2" }, { "start": 138, "end": 154, "text": "Tu et al., 2016)", "ref_id": null }, { "start": 292, "end": 308, "text": "(Tu et al., 2017", "ref_id": "BIBREF3" }, { "start": 381, "end": 399, "text": "Weng et al. 
(2017)", "ref_id": "BIBREF4" }, { "start": 860, "end": 879, "text": "(Rush et al., 2015;", "ref_id": "BIBREF2" }, { "start": 880, "end": 900, "text": "Chopra et al., 2016;", "ref_id": "BIBREF2" }, { "start": 901, "end": 924, "text": "Nallapati et al., 2016)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "Apart from tackling odd-gen, some studies proposed methods to improve the performance of the headline generation task. Takase et al. 2016incorporated AMR (Banarescu et al., 2013) into the encoder to use the syntactic and semantic information of the source. Nallapati et al. 2016also encoded additional information of the source such as TF-IDF and named entities. modeled the typical structure of a headline, such as \"Who Action What\" with a variational auto-encoder. These approaches improve the performance of headline generation, but it is unclear that they can reduce odd-gen.", "cite_spans": [ { "start": 154, "end": 178, "text": "(Banarescu et al., 2013)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "In this paper, we introduced an approach for reducing the odd-gen in the lossy-gen task. The proposal, SPM, learns to predict the one-to-one correspondence of tokens in the source and the target. Experiments on the headline generation task showed that the SPM improved the performance of typical EncDec, and outperformed the current state-of-the-art model. Furthermore, we demonstrated that the SPM reduced the odd-gen. In addition, SPM obtained token-wise correspondence between the source and the target without any alignment data. First, the model computes the attention distribution \u03b1 j \u2208 R I from the decoder hidden state z j and encoder hidden states (h 1 , . . . , h I ). From among three attention scoring functions proposed in Luong et al. (2015), we employ general function. This function calculates the attention score in bilinear form. Specifically, the attention score between the i-th source hidden state and the j-th decoder hidden state is computed by the following equation:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03b1 j [i] = exp(h i W \u03b1 z j ) I i=1 exp(h i W \u03b1 z j )", "eq_num": "(17)" } ], "section": "Conclusion", "sec_num": "8" }, { "text": "where W \u03b1 \u2208 R H\u00d7H is a parameter matrix, and \u03b1 j [i] denotes i-th element of \u03b1 j . \u03b1 j is then used for collecting the source-side information that is relevant for predicting the target token. This is done by taking the weighted sum on the encoder hidden states:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "c j = I i=1 \u03b1 j [i]h i", "eq_num": "(18)" } ], "section": "Conclusion", "sec_num": "8" }, { "text": "Finally, the source-side information is mixed with the decoder hidden state to derive final hidden state z j . Concretely, the context vector c j is concatenated with z j to form vector u j \u2208 R 2H . 
u j is then fed into a single fully-connected layer with tanh nonlinearity:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "z j = tanh(W s u j )", "eq_num": "(19)" } ], "section": "Conclusion", "sec_num": "8" }, { "text": "where W s \u2208 R H\u00d72H is a parameter matrix. Table 3 summarizes the characteristics of each dataset used in our experiments.", "cite_spans": [], "ref_spans": [ { "start": 42, "end": 49, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "Figures 5, 6 and 7 are additional visualizations of SPM and attention. We created each figure using the procedure described in Section 6.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "D Dataset Summary", "sec_num": null }, { "text": "We analyzed source-side prediction to investigate the alignment acquired by SPM. We randomly sampled 500 source-target pairs from Gigaword Test (Ours), and fed them to EncDec+SPM. For each decoding time step j, we created the alignment pair by comparing the target-side token y j with the token with the highest probability over the source-side probability distribution q j . Figure 5 : Although \"london\" is not at the beginning of the source sentence, the SPM aligns \"london\" in the source and the target. On the other hand, EncDec concentrates most of the attention at the end of the sentence. As a result, most of the target-side tokens are aligned with the sentence period of the source sentence.", "cite_spans": [], "ref_spans": [ { "start": 376, "end": 384, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "F Obtained Alignments", "sec_num": null }, { "text": "Aligned Pairs: (Target-side Token, SPM Prediction)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Type", "sec_num": null }, { "text": "Verb Inflection (calls, called), (release, released), (win, won), (condemns, condemned), (rejects, rejected), (warns, warned)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Type", "sec_num": null }, { "text": "Paraphrasing to Shorter Form (rules, agreement), (ends, closed), (keep, continued), (sell, issue), (quake, earthquake), (eu, european) Others (tourists, people), (dead, killed), (dead, died), (administration, bush), (aircraft, planes), (militants, group) Figure 7 : The SPM aligns \"welcomes\" with \"welcomed.\" On the other hand, EncDec aligns \"welcomes\" with the sentence period.", "cite_spans": [], "ref_spans": [ { "start": 255, "end": 263, "text": "Figure 7", "ref_id": null } ], "eq_spans": [], "section": "Type", "sec_num": null }, { "text": "Our model configuration follows EncDec described inLuong et al. 
(2015).3 For more detailed definitions of the encoder, decoder, and attention mechanism, see Appendices A, B and C.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "32nd Pacific Asia Conference on Language, Information and Computation Hong Kong, 1-3 December 2018Copyright 2018 by the authors", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/harvardnlp/ sent-summary 7 https://res.qyzhou.me8 We summarize the details of the dataset in Appendix D9 We restored sub-words to the standard token split for the evaluation.10 ROUGE script option is: \"-n2 -m -w 1.2\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For details of the attention mechanism, see Appendix C.13 For more visualizations, see Appendix E", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "32nd Pacific Asia Conference on Language, Information and Computation Hong Kong, 1-3 December 2018Copyright 2018 by the authors", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We are grateful to anonymous reviewers for their insightful comments. We thank Sosuke Kobayashi for providing helpful comments. We also thank Qingyu Zhou for providing a dataset and information for a fair comparison.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgment", "sec_num": null }, { "text": "We employ bidirectional RNN (BiRNN) as the encoder of the baseline model. BiRNN is composed of two separate RNNs for forward ( \u2212 \u2212\u2212 \u2192 RNN src ) and backward ( \u2190 \u2212\u2212 \u2212 RNN src ) directions. The forward RNN reads the source sequence X from left to right order and constructs hidden states ( h 1 , . . . , h I ). Similarly, the backward RNN reads the input in the reverse order to obtain another sequence of hidden states ( h 1 , . . . , h I ). Finally, we take a summation of the hidden states of each direction to construct the final representation of the source sequence (h 1 , . . . , h I ).Concretely, for given time step i, the representation h i is constructed as follows:where E s \u2208 R D\u00d7Vs denotes the word embedding matrix of the source-side, and D denotes the size of word embedding.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Baseline Model Encoder", "sec_num": null }, { "text": "The baseline model AttnDec is composed of the decoder and the attention mechanism. Here, the decoder is the unidirectional RNN with the inputfeeding approach (Luong et al., 2015). Concretely, decoder RNN takes the output of the previous time step y j\u22121 , decoder hidden state z j\u22121 and final hidden state z j\u22121 and derives the hidden state of current time step z j :where E t \u2208 R D\u00d7Vt denotes the word embedding matrix of the decoder. Here, z 0 is defined as a zero vector.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B Baseline Model Decoder", "sec_num": null }, { "text": "The attention architecture of the baseline model is the same as the Global Attention model proposed by Luong et al. (2015) . Attention is responsible for constructing the final hidden state z j from the decoder hidden state z j and encoder hidden states (h 1 , . . . , h I ). ", "cite_spans": [ { "start": 103, "end": 122, "text": "Luong et al. 
(2015)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "C Baseline Model Attention Mechanism", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Neural Machine Translation by Jointly Learning to Align and Translate", "authors": [ { "first": "[", "middle": [], "last": "References", "suffix": "" }, { "first": "", "middle": [], "last": "Bahdanau", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 3rd International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "References [Bahdanau et al.2015] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural Machine Trans- lation by Jointly Learning to Align and Translate. In Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015).", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation", "authors": [ { "first": "[", "middle": [], "last": "Banarescu", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2016)", "volume": "9", "issue": "", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Banarescu et al.2013] Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Her- mjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract Meaning Rep- resentation for Sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 178-186. [Cho et al.2014] Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. In Proceedings of the 2014 Conference on Empirical Methods in Natu- ral Language Processing (EMNLP 2014), pages 1724- 1734. [Chopra et al.2016] Sumit Chopra, Michael Auli, and Alexander M. Rush. 2016. Abstractive Sentence Sum- marization with Attentive Recurrent Neural Networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2016), pages 93-98. [Hochreiter and Schmidhuber1997] Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long Short-Term Memory. Neural Computation, 9(8):1735-1780. [Kingma and Ba2015] Diederik Kingma and Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Cutting-off Redundant Repeating Generations for Neural Abstractive Summarization", "authors": [], "year": 2012, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (ACL & IJCNLP 2015)", "volume": "", "issue": "", "pages": "76--85", "other_ids": {}, "num": null, "urls": [], "raw_text": "et al.2017] Piji Li, Wai Lam, Lidong Bing, and Zihao Wang. 2017. Deep Recurrent Generative Decoder for Abstractive Text Summarization. In Proceedings of the 2017 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP 2017), pages 2081-2090. [Luong et al.2015] Thang Luong, Hieu Pham, and Christo- pher D. Manning. 2015. 
Effective Approaches to Attention-based Neural Machine Translation. In Pro- ceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP 2015), pages 1412-1421. [Mi et al.2016] Haitao Mi, Baskaran Sankaran, Zhiguo Wang, and Abe Ittycheriah. 2016. Coverage Embed- ding Models for Neural Machine Translation. In Pro- ceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP 2016), pages 955-960. [Nallapati et al.2016] Ramesh Nallapati, Bowen Zhou, Ci- cero dos Santos, Caglar Gulcehre, and Bing Xi- ang. 2016. Abstractive Text Summarization Using Sequence-to-Sequence RNNs and Beyond. In Proceed- ings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 280-290. [Napoles et al.2012] Courtney Napoles, Matthew Gorm- ley, and Benjamin Van Durme. 2012. Annotated Gi- gaword. In Proceedings of the Joint Workshop on Au- tomatic Knowledge Base Construction and Web-scale Knowledge Extraction, AKBC-WEKEX '12, pages 95- 100. [Rush et al.2015] Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A Neural Attention Model for Ab- stractive Sentence Summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP 2015), pages 379-389. [Sennrich et al.2016] Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural Machine Translation of Rare Words with Subword Units. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (ACL 2016), pages 1715-1725. [Shang et al.2015] Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural Responding Machine for Short- Text Conversation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguis- tics and the 7th International Joint Conference on Natu- ral Language Processing (ACL & IJCNLP 2015), pages 1577-1586, July. [Sutskever et al.2014] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to Sequence Learning with Neural Networks. In Advances in Neural Informa- tion Processing Systems 27 (NIPS 2014), pages 3104- 3112. [Suzuki and Nagata2017] Jun Suzuki and Masaaki Nagata. 2017. Cutting-off Redundant Repeating Generations for Neural Abstractive Summarization. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2017), pages 291-297. [Takase et al.2016] Sho Takase, Jun Suzuki, Naoaki Okazaki, Tsutomu Hirao, and Masaaki Nagata. 2016. Neural Headline Generation on Abstract Meaning Rep- resentation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP 2016), pages 1054-1059. [Tu et al.2016] Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling Coverage for Neural Machine Translation. In Proceedings of the 54th Annual Meeting of the Association for Computa- tional Linguistics (ACL 2016), pages 76-85.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Neural Machine Translation with Reconstruction", "authors": [], "year": 2017, "venue": "Thirty-First AAAI Conference on Artificial Intelligence (AAAI 2017)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "et al.2017] Zhaopeng Tu, Yang Liu, Lifeng Shang, Xi- aohua Liu, and Hang Li. 2017. Neural Machine Trans- lation with Reconstruction. 
In Thirty-First AAAI Con- ference on Artificial Intelligence (AAAI 2017), pages", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Selective Encoding for Abstractive Sentence Summarization", "authors": [ { "first": "Rongxiang", "middle": [], "last": "Weng", "suffix": "" }, { "first": "Shujian", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Zaixiang", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Xinyu", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Jiajun", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1095--1104", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pacific Asia Conference on Language, Information and Computation Hong Kong, 1-3 December 2018 Copyright 2018 by the authors [Weng et al.2017] Rongxiang Weng, Shujian Huang, Zaix- iang Zheng, Xinyu Dai, and Jiajun Chen. 2017. Neural Machine Translation with Word Predictions. In Pro- ceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017), pages 136-145. [Zhou et al.2017] Qingyu Zhou, Nan Yang, Furu Wei, and Ming Zhou. 2017. Selective Encoding for Abstractive Sentence Summarization. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017), pages 1095-1104.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "32nd Pacific Asia Conference on Language, Information and Computation Hong Kong", "authors": [], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "32nd Pacific Asia Conference on Language, Information and Computation Hong Kong, 1-3 December 2018 Copyright 2018 by the authors", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Overview of EncDec+SPM. The module inside the dashed rectangular box represents the SPM.", "type_str": "figure", "uris": null, "num": null }, "FIGREF2": { "text": "Comparison between EncDec and EncDec+SPM on the number of sentences that potentially contain the odd-gen. The smaller examples mean reduction of the odd-gen.", "type_str": "figure", "uris": null, "num": null }, "FIGREF3": { "text": "Examples of generated summaries. \"Gold\" indicates the reference headline. The proposed EncDec+SPM model successfully reduced odd-gen.", "type_str": "figure", "uris": null, "num": null }, "FIGREF4": { "text": "Source-side prediction of EncDec+SPM Figure 4: Visualization of EncDec and EncDec+SPM.", "type_str": "figure", "uris": null, "num": null }, "FIGREF6": { "text": "SPM aligns \"election\" with \"vote\", whereas EncDec aligns \"vote\" with sentence period.", "type_str": "figure", "uris": null, "num": null }, "TABREF1": { "text": "", "num": null, "content": "", "type_str": "table", "html": null }, "TABREF2": { "text": "summarizes results for all test data. The table consists of three parts split by horizontal lines. The top and middle rows show the results on our training procedure, and the bottom row shows the results reported in previous studies. Note that the top and middle rows are not directly comparable to the bottom row due to the differences in preprocessing and vocabulary settings. The top row of Table 2 shows that EncDec+SPM outperformed both EncDec and EncDec+sGate. This result indicates that the SPM can improve the performance of EncDec. 
Moreover, it is noteworthy that EncDec+sGate+SPM achieved the best performance in all metrics even though EncDec+sGate consists of essentially the same architecture as the current state-of-the-art model.
EncDec: 45.74 23.80 42.95 | 34.52 16.77 32.19 | 45.62 24.26 42.87 (RG-1 RG-2 RG-L on each of the three test sets)
EncDec+sGate (our impl. of SEASS): 45.98 24.17 43.16 | 35.00 17.24 32.72 | 45.96 24.63 43.18
EncDec+SPM", "num": null, "content": "
11 https://github.com/rsennrich/subword-nmt
", "type_str": "table", "html": null }, "TABREF3": { "text": "Full length ROUGE F1 evaluation results. The top and middle rows show the results on our evaluation setting. \u2020 is the proposed model. The bottom row shows published scores reported in previous studies. Note that (1) SEASS consists of essentially the same architecture as our implemented EncDec+sGate, and (2) the top and middle rows are not directly comparable to the bottom row due to differences in preprocessing and vocabulary settings. (see discussions in Section 5.2).", "num": null, "content": "", "type_str": "table", "html": null }, "TABREF5": { "text": "Characteristics of each dataset used in our experiments", "num": null, "content": "
", "type_str": "table", "html": null }, "TABREF6": { "text": "summarizes some examples of the obtained alignments. The table shows that the SPM aligns various types of word pairs, such as verb inflection and paraphrasing to the shorter form.", "num": null, "content": "
", "type_str": "table", "html": null }, "TABREF7": { "text": "Examples of the alignment that the SPM acquired", "num": null, "content": "
", "type_str": "table", "html": null } } } }