{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:34:47.212500Z" }, "title": "RewardsOfSum: Exploring Reinforcement Learning Rewards for Summarisation", "authors": [ { "first": "Jacob", "middle": [], "last": "Parnell", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Technology Sydney", "location": { "region": "NSW", "country": "Australia" } }, "email": "" }, { "first": "Inigo", "middle": [ "Jauregi" ], "last": "Unanue", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Technology Sydney", "location": { "region": "NSW", "country": "Australia" } }, "email": "" }, { "first": "Massimo", "middle": [], "last": "Piccardi", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Technology Sydney", "location": { "region": "NSW", "country": "Australia" } }, "email": "massimo.piccardi@uts.edu.au" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "To date, most abstractive summarisation models have relied on variants of the negative loglikelihood (NLL) as their training objective. In some cases, reinforcement learning has been added to train the models with an objective that is closer to their evaluation measures (e.g. ROUGE). However, the reward function to be used within the reinforcement learning approach can play a key role for performance and is still partially unexplored. For this reason, in this paper, we propose two reward functions for the task of abstractive summarisation: the first function, referred to as RwB-Hinge, dynamically selects the samples for the gradient update. The second function, nicknamed RISK, leverages a small pool of strong candidates to inform the reward. In the experiments, we probe the proposed approach by fine-tuning an NLL pre-trained model over nine summarisation datasets of diverse size and nature. The experimental results show a consistent improvement over the negative loglikelihood baselines.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "To date, most abstractive summarisation models have relied on variants of the negative loglikelihood (NLL) as their training objective. In some cases, reinforcement learning has been added to train the models with an objective that is closer to their evaluation measures (e.g. ROUGE). However, the reward function to be used within the reinforcement learning approach can play a key role for performance and is still partially unexplored. For this reason, in this paper, we propose two reward functions for the task of abstractive summarisation: the first function, referred to as RwB-Hinge, dynamically selects the samples for the gradient update. The second function, nicknamed RISK, leverages a small pool of strong candidates to inform the reward. In the experiments, we probe the proposed approach by fine-tuning an NLL pre-trained model over nine summarisation datasets of diverse size and nature. The experimental results show a consistent improvement over the negative loglikelihood baselines.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The current state-of-the-art neural text summarisation models have been refined to excel at either the extractive or abstractive styles, or even both (Zhang et al., 2020a; Lewis et al., 2020; Raffel et al., 2020) . 
Along with contemporary summarisation datasets (Narayan et al., 2018a; Grusky et al., 2018; Fabbri et al., 2019) , the advent of large pre-trained language models, and their subsequent derivations (Liu and Lapata, 2019; Park, 2020) , has allowed summarisation to become a more practical and reasonable task to implement, without compromising, and often improving, the accuracy. However, these models usually employ the standard negative loglikelihood (NLL) as their training objective, which aims to maximise the likelihood of each token in a given ground-truth reference. Despite its efficacy, the NLL fails to account for synonymous tokens and other potentially valid variations, and strongly biases the model towards the ground-truth reference (Ranzato et al., 2016) . Furthermore, the NLL operates as a token-level objective during training, which promotes an inconsistent comparison with sequence-level evaluation metrics, such as ROUGE (Lin, 2004) .", "cite_spans": [ { "start": 150, "end": 171, "text": "(Zhang et al., 2020a;", "ref_id": "BIBREF29" }, { "start": 172, "end": 191, "text": "Lewis et al., 2020;", "ref_id": "BIBREF14" }, { "start": 192, "end": 212, "text": "Raffel et al., 2020)", "ref_id": "BIBREF25" }, { "start": 262, "end": 285, "text": "(Narayan et al., 2018a;", "ref_id": "BIBREF20" }, { "start": 286, "end": 306, "text": "Grusky et al., 2018;", "ref_id": "BIBREF11" }, { "start": 307, "end": 327, "text": "Fabbri et al., 2019)", "ref_id": "BIBREF9" }, { "start": 412, "end": 434, "text": "(Liu and Lapata, 2019;", "ref_id": "BIBREF19" }, { "start": 435, "end": 446, "text": "Park, 2020)", "ref_id": "BIBREF22" }, { "start": 962, "end": 984, "text": "(Ranzato et al., 2016)", "ref_id": "BIBREF26" }, { "start": 1157, "end": 1168, "text": "(Lin, 2004)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In order to address the inconsistency between token-level training and sequence-level evaluation, reinforcement learning (RL) has been adopted in summarisation and other language generation tasks to afford the optimization of sequence-level metrics during training (Paulus et al., 2018; Pasunuru and Bansal, 2018) . Reinforcement learning has proved successful at improving the accuracy of language generation tasks, such as summarisation (Paulus et al., 2018; Arumae and Liu, 2018; Pasunuru and Bansal, 2018) and machine translation (Ranzato et al., 2016; Edunov et al., 2018) . However, balancing exploration and exploitation remains imperative to the successful choice of an effective reward. When standard RL techniques, such as REINFORCE (Williams, 1992) , are implemented in natural language generation tasks, the required expectation becomes intractable due to large vocabulary sizes. Therefore, the application of REINFORCE is typically reduced to calculating the approximate expectation with respect to only a single predicted sequence. To teach the model to understand the importance of sample variation among synonymous tokens, we instead choose to implement an objective function which includes multiple predicted sequences, allowing for a scenario in which several valid candidate summaries can be considered. Another consideration is that the success of techniques such as REINFORCE strongly depends on the use of an effective and appropriate reward. 
Designing such a reward, one which enables the model to manipulate multiple sequences and yet provides a positive and informative outcome in the process, is therefore necessary for producing better results. This allows us to modify the reinforcement learning framework in such a way that enforces only a higher weighting to those predicted sequences which obtain a higher reward. As such, we apply two techniques to summarisation; RwB-Hinge, which applies a hinge-loss modification to the classical REINFORCE with baseline (Rennie et al., 2017) to selectively apply the model gradients, and Expected Risk Minimization (RISK) (Edunov et al., 2018) , which leverages a small pool of strong sampled candidates to smartly inform the reward function. We aptly refer to our framework as RewardsOfSum, to hint at the exploration of suitable reward functions for summarisation. Empirically, we show that the two proposed variants perform better than standard negative log-likelihood baselines over a range of datasets of diverse size and nature.", "cite_spans": [ { "start": 265, "end": 286, "text": "(Paulus et al., 2018;", "ref_id": "BIBREF24" }, { "start": 287, "end": 313, "text": "Pasunuru and Bansal, 2018)", "ref_id": "BIBREF23" }, { "start": 439, "end": 460, "text": "(Paulus et al., 2018;", "ref_id": "BIBREF24" }, { "start": 461, "end": 482, "text": "Arumae and Liu, 2018;", "ref_id": null }, { "start": 483, "end": 509, "text": "Pasunuru and Bansal, 2018)", "ref_id": "BIBREF23" }, { "start": 534, "end": 556, "text": "(Ranzato et al., 2016;", "ref_id": "BIBREF26" }, { "start": 557, "end": 577, "text": "Edunov et al., 2018)", "ref_id": "BIBREF8" }, { "start": 743, "end": 759, "text": "(Williams, 1992)", "ref_id": "BIBREF28" }, { "start": 1988, "end": 2009, "text": "(Rennie et al., 2017)", "ref_id": "BIBREF27" }, { "start": 2090, "end": 2111, "text": "(Edunov et al., 2018)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In recent years, there has been some work in summarisation to separate from the traditional negative log-likelihood (NLL) objective function, and mollify its dependency on ground-truth references. Several implementations of reinforcement learning in summarisation involved optimizing discrete metrics, such as the standard ROUGE (Paulus et al., 2018; Narayan et al., 2018b) . Others have introduced novel rewards into the reinforcement learning framework, such as question-focused rewards (Arumae and Liu, 2018), saliency and entailment rewards (Pasunuru and Bansal, 2018) , and even distributional semantic rewards . Gao et al. (2020) also present a novel unsupervised metric for summarisation which correlates highly with discrete evaluation metrics if adopted in a reinforcement learning approach.", "cite_spans": [ { "start": 329, "end": 350, "text": "(Paulus et al., 2018;", "ref_id": "BIBREF24" }, { "start": 351, "end": 373, "text": "Narayan et al., 2018b)", "ref_id": "BIBREF21" }, { "start": 545, "end": 572, "text": "(Pasunuru and Bansal, 2018)", "ref_id": "BIBREF23" }, { "start": 618, "end": 635, "text": "Gao et al. (2020)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "On the other hand, there has been much work in leveraging large, pre-trained language models (LM) (Devlin et al., 2019; Lewis et al., 2020; Raffel et al., 2020) to improve the quality and performance of summarisation models. 
Utilizing pretrained language models requires significantly less engineering effort to continually improve over stateof-the-art baselines. Typically, these approaches include using novel pre-training objectives (Zhang et al., 2020a; Raffel et al., 2020; Zhu et al., 2020) or implementing successful reinforcement learning techniques (Bae et al., 2019) . found that optimizing semantic rewards in reinforcement learning, using BERTScore (Zhang et al., 2020b) , does not necessarily correlate with the ROUGE score at test time. As such, the choice of reward in a reinforcement learning approach should attempt to carefully align with the evaluation metric.", "cite_spans": [ { "start": 98, "end": 119, "text": "(Devlin et al., 2019;", "ref_id": "BIBREF6" }, { "start": 120, "end": 139, "text": "Lewis et al., 2020;", "ref_id": "BIBREF14" }, { "start": 140, "end": 160, "text": "Raffel et al., 2020)", "ref_id": "BIBREF25" }, { "start": 436, "end": 457, "text": "(Zhang et al., 2020a;", "ref_id": "BIBREF29" }, { "start": 458, "end": 478, "text": "Raffel et al., 2020;", "ref_id": "BIBREF25" }, { "start": 479, "end": 496, "text": "Zhu et al., 2020)", "ref_id": "BIBREF31" }, { "start": 558, "end": 576, "text": "(Bae et al., 2019)", "ref_id": "BIBREF5" }, { "start": 661, "end": 682, "text": "(Zhang et al., 2020b)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "How best to inform the reward via the reward function, is critical to the performance of models in an RL framework. In our work, we aim to stray from the typical sole NLL objective, and by leveraging a pre-trained language model in a reinforcement learning framework, explore different RL-based reward functions for summarisation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In order to improve over the negative log-likelihood baseline models, we aim to implement a reinforcement learning framework that adopts the standard evaluation metric, ROUGE, as a reward during training. We aim to keep consistent with previous implementations of reinforcement learning in summarisation, and assume ROUGE-L F1 to be the reward metric in the following work. In Sections 3.1 and 3.2, we consider the following standard notations: x is defined as an input source document, y * ,\u0177, and y s are referred to as the ground-truth reference, argmax prediction, and sampled sequence, respectively, and r(y) refers to the reward of sequence y, computed with respect to the ground-truth reference, y * . By exploiting a combination of sampling and predictions, we aim to enhance training diversity in the vein of the work of ; ; Holtzman et al. (2020).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Reinforcement Learning Training", "sec_num": "3" }, { "text": "We adopt the standard self-critical policy gradient objective (Rennie et al., 2017) , notably applied to summarisation by Paulus et al. (2018) :", "cite_spans": [ { "start": 62, "end": 83, "text": "(Rennie et al., 2017)", "ref_id": "BIBREF27" }, { "start": 122, "end": 142, "text": "Paulus et al. 
(2018)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "RwB-Hinge", "sec_num": "3.1" }, { "text": "\\alpha = -[r(y^s) - r(\\hat{y})]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RwB-Hinge", "sec_num": "3.1" }, { "text": "(1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RwB-Hinge", "sec_num": "3.1" }, { "text": "L_{RwB} = \\alpha \\sum_{t=1}^{n} \\log p(y^s_t | y_1, \\ldots, y_{t-1}, x) \\quad (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RwB-Hinge", "sec_num": "3.1" }, { "text": "In (1), y^s and \u0177 denote a sampled sequence and the argmax prediction of the current model, respectively. The reward of the argmax, r(\u0177), is used as a \"baseline\" for the reward of the sample, r(y^s). It is easy to see that if r(y^s) \u2212 r(\u0177) > 0, the sign of this loss is negative, treating y^s as a \"good\" prediction and leading to an increase of its probability. Conversely, if the sign is positive, y^s is deemed a \"bad\" prediction and its probability is decreased.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RwB-Hinge", "sec_num": "3.1" }, { "text": "However, in abstractive summarisation it is not trivial to discriminate between a good and a bad summary when the reward score is in an intermediate range. To avoid inappropriately penalising acceptable predictions, we propose incorporating a hinge loss in (1):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RwB-Hinge", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\alpha = -\\max\\left[0, r(y^s) - r(\\hat{y})\\right]", "eq_num": "(3)" } ], "section": "RwB-Hinge", "sec_num": "3.1" }, { "text": "The hinge loss allows the model to limit the gradient updates to only the predictions that are considered good. In this way, we avoid the risk of unstable training updates and hope to afford a clearer trajectory towards a well-trained model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RwB-Hinge", "sec_num": "3.1" }, { "text": "We also utilise a classical structured loss function that has been shown to perform well in sequence-to-sequence learning tasks (Edunov et al., 2018):", "cite_spans": [ { "start": 127, "end": 148, "text": "(Edunov et al., 2018)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Expected RISK Minimization", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L_{RISK} = \\sum_{y \\in U(x)} -r(y) \\cdot p(y|x, \\theta)", "eq_num": "(4)" } ], "section": "Expected RISK Minimization", "sec_num": "3.2" }, { "text": "In (4), y represents one of multiple candidate summaries, sampled or predicted with the methods defined in Section 4.2 (e.g. argmax, Gumbel-Softmax (Jang et al., 2017)), that form the total candidate summary set U(x). The conditional probability of the predicted summary is denoted as p(y|x, \u03b8).", "cite_spans": [ { "start": 146, "end": 165, "text": "(Jang et al., 2017)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Expected RISK Minimization", "sec_num": "3.2" },
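{ "text": "To make the proposed objectives concrete, the following minimal PyTorch-style sketch shows how the hinge-gated self-critical loss of Section 3.1 and the RISK loss of (4) can be computed for a single source document. It is an illustrative sketch rather than our exact implementation: the helper names are hypothetical, the per-token log-probabilities and ROUGE-L F1 rewards are assumed to be computed elsewhere, and the candidate probabilities anticipate the length normalisation defined in (5)-(7) below.

import torch

def rwb_hinge_loss(sample_token_logprobs, reward_sample, reward_argmax):
    # Eqs. (2)-(3): self-critical advantage, gated by a hinge so that only
    # samples that beat the argmax baseline contribute a gradient update.
    advantage = max(0.0, reward_sample - reward_argmax)
    return -advantage * sample_token_logprobs.sum()

def risk_loss(candidate_token_logprobs, candidate_rewards):
    # candidate_token_logprobs: one 1-D tensor of per-token log-probabilities
    # for each candidate summary in U(x); candidate_rewards: ROUGE-L F1 scores.
    # Eqs. (6)-(7): length-normalised score f(y, x, theta) = exp(eta / m).
    scores = torch.stack([lp.mean().exp() for lp in candidate_token_logprobs])
    # Eq. (5): normalise over the candidate pool to obtain p(y|x, theta).
    probs = scores / scores.sum()
    rewards = torch.tensor(candidate_rewards)
    # Eq. (4): negative expected reward over the candidate pool.
    return -(rewards * probs).sum()

Either loss can then be interpolated with the NLL term, as in the mixed objective of Section 3.3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Expected RISK Minimization", "sec_num": "3.2" },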
{ "text": "This conditional probability is defined in (5), where m is the number of tokens in the summary. The sum of logarithms in (6) is divided by the total number of tokens in the sequence, and is scaled back using an exponential function, allowing each candidate summary to be compared fairly in the objective function and avoiding underflow.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Expected RISK Minimization", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(y|x, \\theta) = \\frac{f(y, x, \\theta)}{\\sum_{y' \\in U(x)} f(y', x, \\theta)} \\quad (5) \\qquad \\eta = \\sum_{j=1}^{m} \\log p(u_j | u_1, \\ldots, u_{j-1}, x, \\theta) \\quad (6) \\qquad f(y, x, \\theta) = \\exp\\left[ \\frac{\\eta}{m} \\right]", "eq_num": "(7)" } ], "section": "Expected RISK Minimization", "sec_num": "3.2" }, { "text": "By using this objective function, the model is taught to assign higher probability to the candidate summaries that obtain higher rewards. This objective does not require a baseline or hinge loss to select the predictions, since using multiple candidates already exposes the model to different, potentially valid predictions. Edunov et al. (2018) demonstrate the effectiveness of this approach at sentence level for both neural machine translation and summarisation. For the summarisation task, Edunov et al. (2018) compute the reward at sentence level since their dataset has single-sentence references. However, as the reward function is agnostic to single or multi-sentence predictions, we can easily translate the RISK objective function to be used at summary level.", "cite_spans": [ { "start": 325, "end": 345, "text": "Edunov et al. (2018)", "ref_id": "BIBREF8" }, { "start": 495, "end": 515, "text": "Edunov et al. (2018)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Expected RISK Minimization", "sec_num": "3.2" }, { "text": "Similar to previous reinforcement learning implementations (Paulus et al., 2018; , we, too, utilise a mixed learning objective function, as shown in (8). This mixed approach helps the model to not deviate too much from the reference summaries, given a \u03b3 balancing coefficient chosen with a strict validation criterion (Appendix A). The L_{RL} term refers to either the RwB-Hinge or the RISK training objective function.", "cite_spans": [ { "start": 59, "end": 80, "text": "(Paulus et al., 2018;", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Overall Training Objective", "sec_num": "3.3" }, { "text": "L_{mixed} = \\gamma L_{XENT} + (1 - \\gamma) L_{RL} \\quad (8)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overall Training Objective", "sec_num": "3.3" }, { "text": "4 Experimental Setup", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overall Training Objective", "sec_num": "3.3" }, { "text": "Inspired by the recent work of Zhang et al. (2020a), we utilise nine of the summarisation datasets reported in their paper. The nine datasets have been chosen based on the different lengths of their reference summaries, to provide enough variation to demonstrate the applicability of the presented methods. We split the datasets into three classes: \"short\", \"medium\", and \"long\". 
Short datasets have reference summaries \u2264 64 tokens, medium datasets > 64 and \u2264 128 tokens, and long datasets > 128 tokens.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "4.1" }, { "text": "In order to promote exploration across the vocabulary distribution, we employ three simple methodologies to provide candidate sequences for our training objectives. Argmax: As is standard in the majority of sequence generation tasks, a predicted sentence can be easily provided by allowing the model to make hard decisions (e.g. argmax) over the probability distribution generated by the decoder. This allows us to use it as a baseline for the following experiments. In its simplest form the argmax is defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sampling Methods", "sec_num": "4.2" }, { "text": "\\hat{y}_j = \\mathrm{argmax}_{y} \\, p(y | x, y^*_{j-1}, \\theta), \\quad j = 1, \\ldots, n \\quad (9)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sampling Methods", "sec_num": "4.2" }, { "text": "where we use \"teacher forcing\" for the predictions. 2nd-Best: Similar to the argmax, we employ a k-best approach to sample the second-best argmax from the same probability distribution generated by the decoder. This allows us to choose different, yet similarly weighted words from the decoder to introduce variability between produced summaries:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sampling Methods", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "y^s_j = \\mathrm{argmax}^{(k=2)}_{y} \\, p(y | x, y^*_{j-1}, \\theta), \\quad j = 1, \\ldots, n", "eq_num": "(10)" } ], "section": "Sampling Methods", "sec_num": "4.2" }, { "text": "Gumbel-Softmax: We also utilise a recent reparameterization technique known as the Gumbel-Softmax (Jang et al., 2017) that allows sampling soft latent categorical variables by transforming samples from a Gumbel distribution. Compared to the standard \"hard\" predictions, this approach is differentiable and allows controlling the sparsity of the samples by a temperature parameter, \u03c4:", "cite_spans": [ { "start": 100, "end": 118, "text": "(Jang et al., 2017", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Sampling Methods", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\tilde{p}^i_j = \\frac{\\exp((\\log(p^i_j) + g_i)/\\tau)}{\\sum_{v=1}^{V} \\exp((\\log(p^v_j) + g_v)/\\tau)}", "eq_num": "(11)" } ], "section": "Sampling Methods", "sec_num": "4.2" }, { "text": "In (11), g_i is a sample from the zero-mean, unit-scale Gumbel distribution, p^i_j is the probability of a given token i at slot j, and the temperature parameter \u03c4 controls the sparsity of the output soft variable \\tilde{p}^i_j. In our experiments, we have set \u03c4 to 0.1 to enforce sparsity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sampling Methods", "sec_num": "4.2" }, { "text": "The abstractive text summarisation model we use for our experiments is PEGASUS, a large pretrained Transformer encoder-decoder architecture that has recently reported state-of-the-art results over a number of datasets. Please refer to Zhang et al. (2020a) for details. All hyperparameters used in our experiments can be found in Appendix B.", "cite_spans": [ { "start": 235, "end": 255, "text": "Zhang et al. 
(2020a)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline Model and Training Runs", "sec_num": "4.3" }, { "text": "We employ two training approaches to test the solidity of the proposed methods. The first is a fewshot learning approach that adopts limited, fixed numbers of training samples (1000) and training iterations (2000) for fine-tuning the model. The second is a full-data learning approach, that utilises all available training data, and exhausts the objective function until convergence over the validation set. In all experiments, we first fine-tune a pretrained PEGASUS model with the NLL, and then we further fine-tune the NLL model with one of the proposed approaches. We train the model in this way to avoid the slow and inefficient training often associated with policy gradient objectives, and as a result, adhere to the standard warm-start NLL training adopted in previous reinforcement learning-based approaches (Paulus et al., 2018; .", "cite_spans": [ { "start": 817, "end": 838, "text": "(Paulus et al., 2018;", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline Model and Training Runs", "sec_num": "4.3" }, { "text": "In the following experiments, we refer to PEGA-SUS as PEG, and its NLL-tuned models with the suffixes -few_shot and -full_data. The proposed approaches are in turn noted as RwB-Hinge and RISK.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Model and Training Runs", "sec_num": "4.3" }, { "text": "Experiment Arg-max 2nd-Best G-S RwB-Hinge RISK-2 RISK-3 ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Model and Training Runs", "sec_num": "4.3" }, { "text": "Tables 3, 4, and 5 show the results of each method in comparison to the NLL-tuned baseline for the nine reported datasets. Each table reports the Table 4 : Results on medium datasets: CNN/DM, Reddit-TIFU, and Newsroom. Here we compare the limited resource (PEG few_shot ) and full-data (PEG full_data ) approaches with our different implementations. ( \u2020) means that the differences are statistically significant with respect to the baseline with a p-value < 0.05 over a bootstrap hypothesis test. Best ROUGE-1/2/L scores are bolded. statistical test for summarisation compared to a t-test (Dror et al., 2018) . Figure 1 compares the effect that each finetuning method has had over the production of novel n-grams during test time (a property nicknamed as n-gram novelty). For medium sized datasets in particular, the reinforcement learning approaches appear to, on average, facilitate the production of more distinct uni-, bi-, and tri-grams at test time, compared to the NLL baseline. Whilst n-gram novelty is typically used in summarisation to showcase test-time summary abstractiveness, the results in Figure 1 highlight that training with objectives that promote sample variation leads to models capable of producing more novel n-grams (up to 13.8 pp in tri-gram novelty over CNN/DM). This is supported by the qualitative example in Table 6 which shows that the proposed fine-tuning methods can achieve greater diversity of summary predictions, whilst still improving over the baseline NLL ROUGE scores. 
It seems that the proposed fine-tuning methods have allowed the model to effectively weigh the predicted summaries during training, and when combined with the \"stable\" NLL in a mixed-loss approach, this has been able to produce well-rounded predictions, diverse enough to stray from the original baseline and the reference summaries.", "cite_spans": [ { "start": 589, "end": 608, "text": "(Dror et al., 2018)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 146, "end": 153, "text": "Table 4", "ref_id": "TABREF9" }, { "start": 611, "end": 619, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 1105, "end": 1113, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 1337, "end": 1344, "text": "Table 6", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "Model AESLC Gigaword XSum R-1 R-2 R-L R-1 R-2 R-L R-1 R-2 R-L PEG", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "Model CNN/DM Reddit-TIFU Newsroom R-1 R-2 R-L R-1 R-2 R-L R-1 R-2 R-L PEG", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "ArXiv Billsum R-1 R-2 R-L R-1 R-2 R-L R-1 R-2 R-L PEG", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Pubmed", "sec_num": null }, { "text": "In addition, Figure 2 shows a performance comparison with respect to the length of the reference summaries for the full-data approach over a medium size dataset (CNN/DM). We see that our fine-tuning methods have led, on average, to higher Figure 2 : Comparison of each method for the full-data approach over a medium size dataset (CNN/DM). The methods are as follows: NLL (baseline), RwB-Hinge, RISK-2, and RISK-3. We see that the reinforcement learning approaches have led, on average, to higher ROUGE-L scores for the longer summaries compared to the NLL baseline. ROUGE-L scores for the longer summaries (up to 2.3 ROUGE-L points for summaries between 80-100 tokens, and up to 6.2 points for summaries over 100 tokens). Likely, the proposed methods have been able to amend the reported tendency of the NLL models to curtail the prediction of long summaries.", "cite_spans": [], "ref_spans": [ { "start": 13, "end": 21, "text": "Figure 2", "ref_id": null }, { "start": 239, "end": 247, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Model Pubmed", "sec_num": null }, { "text": "Comparing multiple fine-tuning methods is useful for showcasing the improvements that reinforcement learning can play on a generation task Source Document Dougie Freedman is on the verge of agreeing a new two-year deal to remain at Nottingham Forest. Freedman has stabilised Forest since he replaced cult hero Stuart Pearce and the club's owners are pleased with the job he has done at the City Ground. Dougie Freedman is set to sign a new deal at Nottingham Forest. Freedman has impressed at the City Ground since replacing Stuart Pearce in February. They made an audacious attempt on the play-off places when Freedman replaced Pearce but have tailed off in recent weeks. That has not prevented Forest's ownership making moves to secure Freedman on a contract for the next two seasons. Table 6 : Example of the performance of each method from the CNN/DailyMail dataset for the full-data approach, compared to the reference summary and NLL baseline. Words highlighted in blue indicate that they are not present in the baseline NLL summary. 
Here we choose a typical method that aligns the best with the average NLL baseline score, and compare how the methods pit against it. We see that there is a relative increase in ROUGE scores, whilst diversifying the output. Table 8 : Comparisons between REINFORCE with baseline with and without the hinge-loss modification on the validation set for short, medium, and long datasets, to validate the use of the hinge-loss modification in our method. This is run over the full-data baselines, and shows that for the majority of dataset classes, the adopted hinge-loss modification leads to improvements in performance.", "cite_spans": [], "ref_spans": [ { "start": 787, "end": 794, "text": "Table 6", "ref_id": "TABREF9" }, { "start": 1264, "end": 1271, "text": "Table 8", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Model Pubmed", "sec_num": null }, { "text": "like summarisation. However, no single method has outperformed all others over all the datasets and in both the few-shot and full-data approaches. Whilst all methods have achieved interesting im-provements over the baseline figures, we have run a comparison over the validation set to see if their relative rankings could be a reliable indicator of the relative rankings of the test set scores reported in Tables 3, 4, and 5. Table 7 shows the results for one dataset per class size, showing that for the short and medium size datasets (\u2264 128 tokens), either of the RISK methods could be chosen to fine-tune the model. This contrasts to the longer datasets where the hinge-loss modification has achieved the best results. In both cases, the results are in good agreement with those on the test sets. Lastly, in Table 8 , we further validate our use of the hinge-loss adaptation to the classical RE-INFORCE with baseline method -a staple in the reinforcement learning literature of language generation tasks (Paulus et al., 2018) . Over the same three datasets of Table 7 , we see that in the majority of instances the hinge-loss modification has been distinctively better than the standard approach. This confirms our intuition that the adoption of a hinge loss to restrict the gradient updates to \"good\" predictions only is beneficial to the improvement of ROUGE scores.", "cite_spans": [ { "start": 1007, "end": 1028, "text": "(Paulus et al., 2018)", "ref_id": "BIBREF24" } ], "ref_spans": [ { "start": 426, "end": 433, "text": "Table 7", "ref_id": "TABREF8" }, { "start": 811, "end": 818, "text": "Table 8", "ref_id": "TABREF9" }, { "start": 1063, "end": 1070, "text": "Table 7", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Dataset", "sec_num": null }, { "text": "In this paper, we have proposed two variants to the reinforcement learning approaches typically used in sequence-to-sequence learning tasks. The two proposed approaches -nicknamed RwB-Hinge and RISK -have been designed to improve the reinforcement learning rewards by selecting and diversifying the predictions used during the fine-tuning of the model. In a set of automated summarisation experiments over nine, diverse datasets, the approaches have consistently led to improved performance, and also diversified the generated summaries. We note that, despite its commonplace use for summarisation evaluation, utilizing ROUGE as reinforcement learning reward does not easily translate into improved performance. 
For this reason, in the near future we plan to explore other contemporary score functions, such as BERTScore (Zhang et al., 2020b) , in an attempt to build more effective rewards.", "cite_spans": [ { "start": 821, "end": 842, "text": "(Zhang et al., 2020b)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "To determine an appropriate \u03b3 term for our mixed loss implementation, we have run tests with different values over the validation set for each dataset. To determine the best value, we have utilised the standard REINFORCE (Williams, 1992) approach combined linearly with the negative log-likelihood. We have chosen to optimise REINFORCE here since, being a close relative, but not the same as the algorithms we have used during training, it may help to eschew overfitting. In the interest of time, we have utilised the validation scores of a single seed to determine the \u03b3 values. For the few-shot implementation in Table A .1, we have fixed the number of examples to fine-tune on (1,000) and the number of training iterations (2,000) exactly as in the standard baseline approach defined in Section 4. For the full-data approach in Table A .3, we have utilised all the training data, but, again in the interest of time, we have capped the number of training iterations to either: a) the same training time as the exhausted NLL tests reported in Table B .2, or b) 10,000 training iterations if the NLL training time exceeded 15,000 training iterations. Tables A.2 and A.4 show the best \u03b3 values from the validation runs for all datasets. For datasets where there was no clear winner in Tables A.1 and A.3, we have compromised over the best values (highlighted in blue). Table A .2: A summary of the corresponding gamma weights determined from the above few-shot validation tests.", "cite_spans": [ { "start": 221, "end": 237, "text": "(Williams, 1992)", "ref_id": "BIBREF28" } ], "ref_spans": [ { "start": 615, "end": 622, "text": "Table A", "ref_id": "TABREF9" }, { "start": 831, "end": 838, "text": "Table A", "ref_id": "TABREF9" }, { "start": 1044, "end": 1051, "text": "Table B", "ref_id": "TABREF9" }, { "start": 1368, "end": 1375, "text": "Table A", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "A Validation Scores", "sec_num": null }, { "text": "AESLC ArXiv Billsum CNN/DM Gigaword Newsroom Pubmed Reddit-TIFU XSum 0.9 0.7 0.9 0.9 0.9 0.7 0.7 0.9 0.9 Table A .3: Validation scores of the baseline PEGASUS model, fine-tuned on all training examples provided with the dataset for as many training iterations as either; the NLL baseline tests in Section 4, or 10,000 training iterations for longer datasets (ArXiv, Billsum, Pubmed). Best scores are highlighted. Table A .4: A summary of the corresponding gamma weights determined from the above full-data validation tests.", "cite_spans": [], "ref_spans": [ { "start": 105, "end": 112, "text": "Table A", "ref_id": "TABREF9" }, { "start": 413, "end": 420, "text": "Table A", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "A Validation Scores", "sec_num": null }, { "text": "AESLC ArXiv Billsum CNN/DM Gigaword Newsroom Pubmed Reddit-TIFU XSum 0.9 0.9 0.9 0.9 0.7 0.9 0.9 0.9 0.9", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Validation Scores", "sec_num": null }, { "text": "In our experiments, we have utilised the same hyperparameters used in the original PEGASUS paper (Zhang et al., 2020a) . 
The exception to this is our use of a smaller batch size, constrained by computational resources. As batch size we have used 1, which has resulted in a drop in performance compared to that of the original paper. However, our fine-tuning approach is ensured to converge through the use of a convergence criterion. This is defined by a validation run that evaluates the model every 1000 training iterations, and monitors the progression of the validation loss over the entire training run. A model is deemed 'converged' if its validation loss does not decrease over 3000 training iterations. Table B .1: Model hyperparameters used in the few-shot experiments. All values except the fine-tuning steps are also used in the full-data approach.", "cite_spans": [ { "start": 97, "end": 118, "text": "(Zhang et al., 2020a)", "ref_id": "BIBREF29" } ], "ref_spans": [ { "start": 711, "end": 718, "text": "Table B", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "B Model Hyperparameters", "sec_num": null } ], "back_matter": [ { "text": "Learning ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Nottingham Forest are close to extending Dougie Freedman's contract. The Forest boss took over from former manager Stuart Pearce in February", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nottingham Forest are close to extending Dougie Freedman's contract. The Forest boss took over from former manager Stuart Pearce in February. Freedman has since lead the club to ninth in the Championship.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Dougie Freedman set to sign new deal at Nottingham Forest. Freedman has stabilised Forest since he replaced Stuart Pearce", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dougie Freedman set to sign new deal at Nottingham Forest. Freedman has stabilised Forest since he replaced Stuart Pearce.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Dougie Freedman is set to sign a new two-year deal at Nottingham Forest. The City Ground boss has stabilised the club since he replaced Stuart Pearce. Forest's owners are pleased with Freedman's job at the club", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dougie Freedman is set to sign a new two-year deal at Nottingham Forest. The City Ground boss has stabilised the club since he replaced Stuart Pearce. Forest's owners are pleased with Freedman's job at the club.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Dougie Freedman set to sign a new two-year deal at Nottingham Forest. Freedman has stabilised Forest since he replaced Stuart Pearce in February. Forest made an audacious attempt at the play-off places when Freedman replaced Pearce", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dougie Freedman set to sign a new two-year deal at Nottingham Forest. Freedman has stabilised Forest since he replaced Stuart Pearce in February. Forest made an audacious attempt at the play-off places when Freedman replaced Pearce.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Dougie Freedman set to sign new deal at Nottingham Forest. 
Freedman has stabilised the club since he replaced Stuart Pearce in February. The club's owners are pleased with the job Freedman has done at the City Ground. References Kristjan Arumae and Fei Liu", "authors": [], "year": 2018, "venue": "Proceedings of ACL 2018, Student Research Workshop", "volume": "", "issue": "", "pages": "105--111", "other_ids": { "DOI": [ "10.18653/v1/P18-3015" ] }, "num": null, "urls": [], "raw_text": "Dougie Freedman set to sign new deal at Nottingham Forest. Freedman has stabilised the club since he replaced Stuart Pearce in February. The club's owners are pleased with the job Freedman has done at the City Ground. References Kristjan Arumae and Fei Liu. 2018. Reinforced extrac- tive summarization with question-focused rewards. In Proceedings of ACL 2018, Student Research Workshop, pages 105-111, Melbourne, Australia. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Summary level training of sentence rewriting for abstractive summarization", "authors": [ { "first": "Sanghwan", "middle": [], "last": "Bae", "suffix": "" }, { "first": "Taeuk", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Jihoon", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Sanggoo", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2nd Workshop on New Frontiers in Summarization", "volume": "", "issue": "", "pages": "10--20", "other_ids": { "DOI": [ "10.18653/v1/D19-5402" ] }, "num": null, "urls": [], "raw_text": "Sanghwan Bae, Taeuk Kim, Jihoon Kim, and Sang- goo Lee. 2019. Summary level training of sentence rewriting for abstractive summarization. In Proceed- ings of the 2nd Workshop on New Frontiers in Sum- marization, pages 10-20, Hong Kong, China. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The hitchhiker's guide to testing statistical significance in natural language processing", "authors": [ { "first": "Rotem", "middle": [], "last": "Dror", "suffix": "" }, { "first": "Gili", "middle": [], "last": "Baumer", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Re- ichart. 2018. 
The hitchhiker's guide to testing statis- tical significance in natural language processing.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Classical structured prediction losses for sequence to sequence learning", "authors": [ { "first": "Sergey", "middle": [], "last": "Edunov", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" }, { "first": "David", "middle": [], "last": "Grangier", "suffix": "" }, { "first": "Marc'aurelio", "middle": [], "last": "Ranzato", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "355--364", "other_ids": { "DOI": [ "10.18653/v1/N18-1033" ] }, "num": null, "urls": [], "raw_text": "Sergey Edunov, Myle Ott, Michael Auli, David Grang- ier, and Marc'Aurelio Ranzato. 2018. Classical structured prediction losses for sequence to se- quence learning. In Proceedings of the 2018 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 355-364, New Orleans, Louisiana. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Multi-news: A large-scale multi-document summarization dataset and abstractive hierarchical model", "authors": [ { "first": "Alexander", "middle": [], "last": "Fabbri", "suffix": "" }, { "first": "Irene", "middle": [], "last": "Li", "suffix": "" }, { "first": "Tianwei", "middle": [], "last": "She", "suffix": "" }, { "first": "Suyi", "middle": [], "last": "Li", "suffix": "" }, { "first": "Dragomir", "middle": [], "last": "Radev", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1074--1084", "other_ids": { "DOI": [ "10.18653/v1/P19-1102" ] }, "num": null, "urls": [], "raw_text": "Alexander Fabbri, Irene Li, Tianwei She, Suyi Li, and Dragomir Radev. 2019. Multi-news: A large-scale multi-document summarization dataset and abstrac- tive hierarchical model. In Proceedings of the 57th Annual Meeting of the Association for Computa- tional Linguistics, pages 1074-1084, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "SU-PERT: Towards new frontiers in unsupervised evaluation metrics for multi-document summarization", "authors": [ { "first": "Yang", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Steffen", "middle": [], "last": "Eger", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1347--1354", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.124" ] }, "num": null, "urls": [], "raw_text": "Yang Gao, Wei Zhao, and Steffen Eger. 2020. SU- PERT: Towards new frontiers in unsupervised evalu- ation metrics for multi-document summarization. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 1347- 1354, Online. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Newsroom: A dataset of 1.3 million summaries with diverse extractive strategies", "authors": [ { "first": "Max", "middle": [], "last": "Grusky", "suffix": "" }, { "first": "Mor", "middle": [], "last": "Naaman", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Artzi", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "708--719", "other_ids": { "DOI": [ "10.18653/v1/N18-1065" ] }, "num": null, "urls": [], "raw_text": "Max Grusky, Mor Naaman, and Yoav Artzi. 2018. Newsroom: A dataset of 1.3 million summaries with diverse extractive strategies. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, Volume 1 (Long Pa- pers), pages 708-719, New Orleans, Louisiana. As- sociation for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "The curious case of neural text degeneration", "authors": [ { "first": "Ari", "middle": [], "last": "Holtzman", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Buys", "suffix": "" }, { "first": "Li", "middle": [], "last": "Du", "suffix": "" }, { "first": "Maxwell", "middle": [], "last": "Forbes", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2020, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text de- generation. In International Conference on Learn- ing Representations.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Categorical reparameterization with gumbel-softmax", "authors": [ { "first": "Eric", "middle": [], "last": "Jang", "suffix": "" }, { "first": "Shixiang", "middle": [], "last": "Gu", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Poole", "suffix": "" } ], "year": 2017, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric Jang, Shixiang Gu, and Ben Poole. 2017. Cate- gorical reparameterization with gumbel-softmax. 
In International Conference on Learning Representa- tions.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension", "authors": [ { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal ; Abdelrahman Mohamed", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "7871--7880", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.703" ] }, "num": null, "urls": [], "raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Mar- jan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre- training for natural language generation, translation, and comprehension. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Mutual information and diverse decoding improve neural machine translation", "authors": [ { "first": "Jiwei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiwei Li and Dan Jurafsky. 2016. Mutual information and diverse decoding improve neural machine trans- lation. CoRR, abs/1601.00372.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A simple, fast diverse decoding algorithm for neural generation", "authors": [ { "first": "Jiwei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Will", "middle": [], "last": "Monroe", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiwei Li, Will Monroe, and Dan Jurafsky. 2016. A sim- ple, fast diverse decoding algorithm for neural gen- eration. CoRR, abs/1611.08562.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Deep reinforcement learning with distributional semantic rewards for abstractive summarization", "authors": [ { "first": "Siyao", "middle": [], "last": "Li", "suffix": "" }, { "first": "Deren", "middle": [], "last": "Lei", "suffix": "" }, { "first": "Pengda", "middle": [], "last": "Qin", "suffix": "" }, { "first": "William", "middle": [ "Yang" ], "last": "Wang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "6038--6044", "other_ids": { "DOI": [ "10.18653/v1/D19-1623" ] }, "num": null, "urls": [], "raw_text": "Siyao Li, Deren Lei, Pengda Qin, and William Yang Wang. 2019. Deep reinforcement learning with dis- tributional semantic rewards for abstractive summa- rization. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 6038-6044, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "ROUGE: A package for automatic evaluation of summaries", "authors": [ { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2004, "venue": "Text Summarization Branches Out", "volume": "", "issue": "", "pages": "74--81", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for auto- matic evaluation of summaries. In Text Summariza- tion Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Text summarization with pretrained encoders", "authors": [ { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "3730--3740", "other_ids": { "DOI": [ "10.18653/v1/D19-1387" ] }, "num": null, "urls": [], "raw_text": "Yang Liu and Mirella Lapata. 2019. Text summariza- tion with pretrained encoders. In Proceedings of the 2019 Conference on Empirical Methods in Nat- ural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3730-3740, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization", "authors": [ { "first": "Shashi", "middle": [], "last": "Narayan", "suffix": "" }, { "first": "Shay", "middle": [ "B" ], "last": "Cohen", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1797--1807", "other_ids": { "DOI": [ "10.18653/v1/D18-1206" ] }, "num": null, "urls": [], "raw_text": "Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018a. Don't give me the details, just the summary! topic-aware convolutional neural networks for ex- treme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Lan- guage Processing, pages 1797-1807, Brussels, Bel- gium. Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Ranking sentences for extractive summarization with reinforcement learning", "authors": [ { "first": "Shashi", "middle": [], "last": "Narayan", "suffix": "" }, { "first": "Shay", "middle": [ "B" ], "last": "Cohen", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1747--1759", "other_ids": { "DOI": [ "10.18653/v1/N18-1158" ] }, "num": null, "urls": [], "raw_text": "Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018b. 
Ranking sentences for extractive summariza- tion with reinforcement learning. In Proceedings of the 2018 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Pa- pers), pages 1747-1759, New Orleans, Louisiana. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Continual bert: Continual learning for adaptive extractive summarization of covid-19 literature", "authors": [ { "first": "Park", "middle": [], "last": "Jong Won", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 NLP-COVID Workshop at the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jong Won Park. 2020. Continual bert: Continual learn- ing for adaptive extractive summarization of covid- 19 literature. In Proceedings of the 2020 NLP- COVID Workshop at the Conference on Empirical Methods in Natural Language Processing (EMNLP).", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Multireward reinforced summarization with saliency and entailment", "authors": [ { "first": "Ramakanth", "middle": [], "last": "Pasunuru", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "2", "issue": "", "pages": "646--653", "other_ids": { "DOI": [ "10.18653/v1/N18-2102" ] }, "num": null, "urls": [], "raw_text": "Ramakanth Pasunuru and Mohit Bansal. 2018. Multi- reward reinforced summarization with saliency and entailment. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 646- 653, New Orleans, Louisiana. Association for Com- putational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "A deep reinforced model for abstractive summarization", "authors": [ { "first": "Romain", "middle": [], "last": "Paulus", "suffix": "" }, { "first": "Caiming", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" } ], "year": 2018, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive sum- marization. 
In International Conference on Learn- ing Representations.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Exploring the limits of transfer learning with a unified text-totext transformer", "authors": [ { "first": "Colin", "middle": [], "last": "Raffel", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Roberts", "suffix": "" }, { "first": "Katherine", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Sharan", "middle": [], "last": "Narang", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Matena", "suffix": "" }, { "first": "Yanqi", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Peter", "middle": [ "J" ], "last": "Liu", "suffix": "" } ], "year": 2020, "venue": "Journal of Machine Learning Research", "volume": "21", "issue": "140", "pages": "1--67", "other_ids": {}, "num": null, "urls": [], "raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Kather- ine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to- text transformer. Journal of Machine Learning Re- search, 21(140):1-67.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Sequence level training with recurrent neural networks", "authors": [ { "first": "Aurelio", "middle": [], "last": "Marc", "suffix": "" }, { "first": "Sumit", "middle": [], "last": "Ranzato", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Chopra", "suffix": "" }, { "first": "Wojciech", "middle": [], "last": "Auli", "suffix": "" }, { "first": "", "middle": [], "last": "Zaremba", "suffix": "" } ], "year": 2016, "venue": "4th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level train- ing with recurrent neural networks. In 4th Inter- national Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Self-critical sequence training for image captioning", "authors": [ { "first": "S", "middle": [ "J" ], "last": "Rennie", "suffix": "" }, { "first": "E", "middle": [], "last": "Marcheret", "suffix": "" }, { "first": "Y", "middle": [], "last": "Mroueh", "suffix": "" }, { "first": "J", "middle": [], "last": "Ross", "suffix": "" }, { "first": "V", "middle": [], "last": "Goel", "suffix": "" } ], "year": 2017, "venue": "2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "volume": "", "issue": "", "pages": "1179--1195", "other_ids": { "DOI": [ "10.1109/CVPR.2017.131" ] }, "num": null, "urls": [], "raw_text": "S. J. Rennie, E. Marcheret, Y. Mroueh, J. Ross, and V. Goel. 2017. Self-critical sequence training for im- age captioning. In 2017 IEEE Conference on Com- puter Vision and Pattern Recognition (CVPR), pages 1179-1195.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Simple statistical gradientfollowing algorithms for connectionist reinforcement learning", "authors": [ { "first": "Ronald", "middle": [ "J" ], "last": "Williams", "suffix": "" } ], "year": 1992, "venue": "Mach. Learn", "volume": "8", "issue": "3-4", "pages": "229--256", "other_ids": { "DOI": [ "10.1007/BF00992696" ] }, "num": null, "urls": [], "raw_text": "Ronald J. Williams. 1992. 
Simple statistical gradient- following algorithms for connectionist reinforce- ment learning. Mach. Learn., 8(3-4):229-256.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization", "authors": [ { "first": "Jingqing", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yao", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Saleh", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 37th International Conference on Machine Learning", "volume": "119", "issue": "", "pages": "11328--11339", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Pe- ter Liu. 2020a. PEGASUS: Pre-training with ex- tracted gap-sentences for abstractive summarization. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 11328-11339. PMLR.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Bertscore: Evaluating text generation with bert", "authors": [ { "first": "Tianyi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Varsha", "middle": [], "last": "Kishore", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Kilian", "middle": [ "Q" ], "last": "Weinberger", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Artzi", "suffix": "" } ], "year": 2020, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020b. Bertscore: Evaluating text generation with bert. In Interna- tional Conference on Learning Representations.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Make lead bias in your favor: Zero-shot abstractive news summarization", "authors": [ { "first": "Chenguang", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Ziyi", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Gmyr", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Zeng", "suffix": "" }, { "first": "Xuedong", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2020, "venue": "International Conference on Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chenguang Zhu, Ziyi Yang, Robert Gmyr, Michael Zeng, and Xuedong Huang. 2020. Make lead bias in your favor: Zero-shot abstractive news summa- rization. In International Conference on Learning", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "text": "Comparing the uni-, bi-, and tri-gram novelty for the medium sized datasets. These datasets contain generated sequences up to 128 tokens in length. The methods are as follows: NLL (baseline), RwB-Hinge, RISK-2, and RISK-3. The unique average n-gram novelty (n-grams that do not appear in the source text) is shown to increase across the board compared to the standard NLL baseline.", "num": null }, "TABREF1": { "content": "", "text": "Different experiments and the different sampling methods used in each. Here, RISK-2 and RISK-3 denote the number of samples we utilise in the RISK objective function; two and three, respectively.", "type_str": "table", "num": null, "html": null }, "TABREF2": { "content": "
Model | AESLC (R-1/R-2/R-L) | Gigaword (R-1/R-2/R-L) | XSum (R-1/R-2/R-L)
PEG few_shot | 29.96/14.54/29.17 | 31.81/13.19/29.12 | 41.81/18.32/33.50
+ RwB-Hinge | 28.69/13.83/27.82 | 31.83/13.15/29.08 | 42.47†/18.82†/33.94
+ RISK-2 | 29.27/14.14/28.39 | 31.96/13.22/29.35 | 42.57†/18.71†/33.96†
+ RISK-3 | 29.28/14.05/28.31 | 32.10†/13.35†/29.43† | 42.66†/19.01†/34.15†
PEG full_data | 32.63/15.84/32.19 | 33.81/14.26/30.89 | 41.52/18.21/33.31
+ RwB-Hinge | 34.39†/17.58†/33.71† | 34.10†/14.52/31.31† | 42.87†/19.36/34.56†
+ RISK-2 | 33.55†/17.01†/32.91† | 33.97/14.45/31.18† | 42.93†/19.25†/34.67†
+ RISK-3 | 33.75†/17.03†/33.04† | 33.97/14.52/31.14† | 42.74†/19.23†/34.60†
", "text": "", "type_str": "table", "num": null, "html": null }, "TABREF3": { "content": "
Results on short datasets: AESLC, Gigaword, and XSum. Here we compare the limited resource (PEG few_shot) and full-data (PEG full_data) approaches with our different implementations. (†) means that the differences are statistically significant with respect to the baseline with a p-value < 0.05 over a bootstrap hypothesis test. Best ROUGE-1/2/L scores are bolded.
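The bootstrap hypothesis test referred to in this caption can be reproduced with a short paired-resampling routine. The sketch below is a generic paired bootstrap over per-document ROUGE scores, not the authors' own script; the function name, the score arrays, and the 10,000 resamples are illustrative assumptions.

import numpy as np

def paired_bootstrap_pvalue(scores_sys, scores_base, n_resamples=10000, seed=0):
    # Paired bootstrap over per-document metric values (e.g. ROUGE-L F1):
    # estimates how often a resampled corpus shows no improvement over the baseline.
    rng = np.random.default_rng(seed)
    diffs = np.asarray(scores_sys, dtype=float) - np.asarray(scores_base, dtype=float)
    hits = 0
    for _ in range(n_resamples):
        sample = rng.choice(diffs, size=diffs.size, replace=True)
        if sample.mean() <= 0.0:   # resampled corpus does not beat the baseline
            hits += 1
    return hits / n_resamples

# Usage (the per-document score arrays are placeholders, not the paper's data):
# p_value = paired_bootstrap_pvalue(rouge_rl_model, rouge_nll_baseline)
# significant = p_value < 0.05   # the threshold used in the table captions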
", "text": "", "type_str": "table", "num": null, "html": null }, "TABREF5": { "content": "
Model | Pubmed (R-1/R-2/R-L) | ArXiv (R-1/R-2/R-L) | Billsum (R-1/R-2/R-L)
PEG few_shot | 38.28/13.70/23.32 | 38.08/11.61/22.87 | 48.27/27.79/35.70
+ RwB-Hinge | 40.11†/14.45†/23.88† | 38.85†/11.90†/22.88 | 48.61†/29.35†/36.91†
+ RISK-2 | 40.19†/14.61†/23.98† | 38.98†/12.02†/22.90 | 48.21/28.34†/35.97
+ RISK-3 | 40.19†/14.55†/23.95† | 38.68†/11.88†/22.81 | 48.65/28.71†/36.37†
PEG full_data | 40.57/16.05/25.46 | 38.48/13.33/24.12 | 52.98/34.44/41.36
+ RwB-Hinge | 40.80/16.27/25.41 | 38.95†/13.69†/24.19 | 54.30†/36.01†/42.76†
+ RISK-2 | 40.32/15.85/25.31 | 38.76/13.55/24.11 | 53.76†/35.54†/42.37†
+ RISK-3 | 40.36/15.89/25.26 | 38.42/13.37/24.12 | 54.27†/35.80†/42.51†
", "text": "", "type_str": "table", "num": null, "html": null }, "TABREF6": { "content": "
few-shot (top halves) and full-data results (bottom halves), where the scores have been averaged over three independently-initialised training runs. Each fine-tuning method is employed in a mixed loss framework, as mentioned in (8) in Section 3.3; the value for the γ hyperparameter has been determined over the validation set as described in Appendix A. The results show that all the fine-tuning methods have surpassed the NLL baselines for almost all datasets. Several of these improvements have also passed a bootstrap test for statistical significance, which is regarded as a more appropriate
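Equation (8) of Section 3.3 is not reproduced in this excerpt; under the usual reading of such mixed objectives (e.g. Paulus et al., 2018), it is a convex combination of the sequence-level RL loss and the token-level NLL loss weighted by γ. A minimal sketch under that assumption, with illustrative names only:

import torch

def mixed_loss(loss_rl, loss_nll, gamma):
    # Convex combination of a sequence-level RL loss (e.g. RwB-Hinge or RISK)
    # and the token-level NLL loss; gamma is tuned on the validation set.
    return gamma * loss_rl + (1.0 - gamma) * loss_nll

# Typical training step (sketch):
# loss = mixed_loss(rl_term, nll_term, gamma=0.9)   # gamma taken from the {0.1, ..., 0.9} grid
# loss.backward()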
", "text": "Results on long datasets: Pubmed, ArXiv, and Billsum. Here we compare the limited resource (PEG few_shot ) and full-data (PEG full_data ) approaches with our different implementations. ( \u2020) means that the dif-", "type_str": "table", "num": null, "html": null }, "TABREF8": { "content": "
Dataset | RwB: No Hinge-Loss | RwB: with Hinge-Loss
XSum (short) | 42.82/19.32/34.43 | 42.97/19.45/34.73
Newsroom (medium) | 38.97/26.38/35.00 | 38.17/25.37/34.12
Billsum (long) | 53.04/34.87/42.14 | 54.48/36.49/43.43
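The comparison above contrasts REINFORCE-with-baseline (RwB) with and without the hinge. Since Section 3 is not part of this excerpt, the following is only a plausible sketch of RwB-Hinge, assuming a self-critical baseline (Rennie et al., 2017) whose advantage is clipped at zero so that only samples beating the greedy output contribute to the gradient update; all names and the exact clipping rule are assumptions.

import torch

def rwb_hinge_loss(log_prob_sample, reward_sample, reward_greedy):
    # Self-critical (REINFORCE-with-baseline) loss with a hinge on the advantage:
    # samples whose reward does not beat the greedy baseline receive zero weight,
    # so only the "winning" samples drive the parameter update.
    advantage = reward_sample - reward_greedy
    hinged = torch.clamp(advantage, min=0.0).detach()   # rewards are constants w.r.t. the model
    return -(hinged * log_prob_sample).mean()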
", "text": "Scores on the validation set for short, medium, and long datasets to determine the best method for each size class. RISK, on average, appears to work best for short/medium sized datasets (up to 128 tokens), and RwB-Hinge works better for longer datasets (over 128 tokens).", "type_str": "table", "num": null, "html": null }, "TABREF9": { "content": "
Dataset | 0.1 | 0.3 | 0.5 | 0.7 | 0.9
AESLC | 28.96/13.12/28.49 | 30.26/14.55/29.49 | 31.21/15.22/30.26 | 30.46/14.65/29.70 | 31.25/15.64/30.42
ArXiv | 28.06/7.99/20.70 | 33.01/10.58/21.24 | 29.49/9.32/21.12 | 33.46/10.46/22.55 | 33.43/10.55/22.26
Billsum | 41.61/28.08/34.65 | 40.37/28.07/34.17 | 40.16/28.19/34.27 | 39.56/28.11/34.16 | 42.64/29.36/35.73
CNN/DM | 40.30/18.37/28.33 | 39.47/17.41/27.79 | 39.79/18.03/27.91 | 40.44/17.81/28.12 | 40.98/18.06/28.09
Gigaword | 39.24/16.81/35.65 | 38.97/17.42/35.94 | 39.92/17.56/36.45 | 40.27/17.96/36.91 | 40.91/18.48/37.42
Newsroom | 36.61/25.35/33.15 | 36.93/25.39/33.25 | 36.36/24.57/32.68 | 38.07/26.15/34.23 | 35.98/23.53/32.12
Pubmed | 31.74/10.69/19.50 | 33.44/11.37/21.35 | 34.96/12.07/21.62 | 37.35/13.02/22.14 | 36.57/12.99/22.47
Reddit-TIFU | 19.43/4.45/15.74 | 24.87/6.56/20.08 | 25.00/6.19/19.99 | 25.73/6.85/20.55 | 26.50/6.90/20.86
XSum | 41.19/17.59/32.90 | 41.28/17.48/32.27 | 41.79/17.97/32.65 | 42.30/18.80/34.11 | 43.43/19.58/34.76
", "text": "1: Validation scores of the baseline PEGASUS model, fine-tuned on 1000 training examples for 2000 training iterations (few-shot). Best scores are highlighted.", "type_str": "table", "num": null, "html": null } } } }