{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:35:42.009747Z" }, "title": "Dimsum @LaySumm 20: BART-based Approach for Scientific Document Summarization", "authors": [ { "first": "Tiezheng", "middle": [], "last": "Yu", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Hong Kong University of Science and Technology", "location": { "addrLine": "Clear Water Bay", "settlement": "Hong Kong" } }, "email": "" }, { "first": "Dan", "middle": [], "last": "Su", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Hong Kong University of Science and Technology", "location": { "addrLine": "Clear Water Bay", "settlement": "Hong Kong" } }, "email": "" }, { "first": "Wenliang", "middle": [], "last": "Dai", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Hong Kong University of Science and Technology", "location": { "addrLine": "Clear Water Bay", "settlement": "Hong Kong" } }, "email": "" }, { "first": "Pascale", "middle": [], "last": "Fung", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Hong Kong University of Science and Technology", "location": { "addrLine": "Clear Water Bay", "settlement": "Hong Kong" } }, "email": "pascale@ece.ust.hk" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Lay summarization aims to generate lay summaries of scientific papers automatically. It is an essential task that can increase the relevance of science for all of society. In this paper, we build a lay summary generation system based on the BART model. We leverage sentence labels as extra supervision signals to improve the performance of lay summarization. In the CL-LaySumm 2020 shared task, our model achieves 46.00% Rouge1-F1 score.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Lay summarization aims to generate lay summaries of scientific papers automatically. It is an essential task that can increase the relevance of science for all of society. In this paper, we build a lay summary generation system based on the BART model. We leverage sentence labels as extra supervision signals to improve the performance of lay summarization. In the CL-LaySumm 2020 shared task, our model achieves 46.00% Rouge1-F1 score.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Nowadays, researchers have been increasingly tasked by funders and publishers to outline their research for the public by writing a lay summary. Therefore, it is essential to automatically generate lay summaries to reduce the workload for researchers as well as build a bridge between the public and science. Previous studies have investigated scientific article summarization especially for papers Lev et al., 2019; Yasunaga et al., 2019) . However, less work has been done to generate lay summaries.", "cite_spans": [ { "start": 399, "end": 416, "text": "Lev et al., 2019;", "ref_id": "BIBREF10" }, { "start": 417, "end": 439, "text": "Yasunaga et al., 2019)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recently, the First Workshop on Scholarly Document Processing (Chandrasekaran, 2020), Lay Summary Task 1 (LaySumm 2020) first proposed the task of Lay Summary Generation. The task aims to generate summaries that are representative of the content, comprehensible and interesting to a lay audience. 
After checking the dataset that the task provides, we observe that lots of the sentences in lay summaries have corresponding sentences in original papers. Inspiring by this observation, we think that making binary sentence labels for extractive summarization and utilize them as extra supervision signals can help model generate better summaries. Therefore, we conduct BART (Lewis 1 https://ornlcda.github.io/SDProc/index.html et al., 2019) encoder to make sentence representations and train extractive summarization together with abstractive summarization.", "cite_spans": [ { "start": 671, "end": 679, "text": "(Lewis 1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Experimental results show that leveraging sentence labels can improve the Lay summary generation performance. In the Laysumm 2020 competition, our model achieves 46.00% Rouge1-F1 score. The code will be released on Github 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Text Summarization Text summarization aims to produce a condensed representation of input text that captures the core meaning of the original text. Recently, neural network-based approaches have reached remarkable performance for news articles summarization (See et al., 2017; Liu and Lapata, 2019; . Comparing with news articles, scientific papers are typically longer and contain more complex concepts and technical terms.", "cite_spans": [ { "start": 258, "end": 276, "text": "(See et al., 2017;", "ref_id": "BIBREF20" }, { "start": 277, "end": 298, "text": "Liu and Lapata, 2019;", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Existing approaches for scientific paper summarization include extractive models that perform sentence selection Goharian, 2017, 2018) and hybrid models that select the salient text first and then summarize it (Subramanian et al., 2019) . Besides, built the first model for abstractive summarization of single, longer-form documents (e.g., research papers).", "cite_spans": [ { "start": 113, "end": 134, "text": "Goharian, 2017, 2018)", "ref_id": null }, { "start": 210, "end": 236, "text": "(Subramanian et al., 2019)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Scientific Paper Summarization", "sec_num": null }, { "text": "In order to train neural models for this task, several datasets have been introduced. The arXiv and PubMed datasets were created using open access articles from the corresponding popular repositories. Yasunaga et al. (2019) developed and released the first large-scale manuallyannotated corpus for scientific papers (on computational linguistics).", "cite_spans": [ { "start": 201, "end": 223, "text": "Yasunaga et al. (2019)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Scientific Paper Summarization", "sec_num": null }, { "text": "Large Pre-trained Language Model Large pretrained language models, such as BERT (Devlin et al., 2018) , UniLM (Dong et al., 2019) and BART have shown great performance on a variety of downstream tasks including summarization. 
For example, BART achieved state-ofthe-art performance on CNN/DM (Hermann et al., 2015) news summarization dataset.", "cite_spans": [ { "start": 80, "end": 101, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF6" }, { "start": 110, "end": 129, "text": "(Dong et al., 2019)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Scientific Paper Summarization", "sec_num": null }, { "text": "We use two datasets for this work, which are the dataset of CL-LaySumm 2020 and ScisummNet (Yasunaga et al., 2019) . In this section, we introduce the details of them and the pre-processing method we used.", "cite_spans": [ { "start": 91, "end": 114, "text": "(Yasunaga et al., 2019)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3" }, { "text": "The CL-LaySumm 2020 Dataset is released by the CL-LaySumm Shared Task that aims to produce lay summaries of scientific texts. A lay summary refers to a textual summary intended for a non-technical audience. There are 572 samples in the dataset for training and each sample contains a full-text paper with a lay summary. To test the summarization model, we need to generate lay summaries for 37 papers within 150 words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CL-LaySumm 2020 Dataset", "sec_num": "3.1" }, { "text": "Since the original papers are very long and the task requires us to generate relatively short summaries, it is crucial to extract important parts of papers first before feeding them to large pre-trained models. Given our own experience of how papers are written, we start with the assumption that the Abstract, Introduction and Conclusion are most likely to convey the topic and the contributions of the paper. So, we make different combinations of these three sections as input to our model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CL-LaySumm 2020 Dataset", "sec_num": "3.1" }, { "text": "The ScisummNet is the first large-scale, humanannotated Scisumm dataset. The dataset provides 1009 papers with their citation networks as well as their manual summaries. The gold summaries are written by annotators based on the abstract and selected citation sentences that also convey the contributions of papers. We take the abstract and annotators selected citation sentences as our models' input.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ScisummNet Dataset", "sec_num": "3.2" }, { "text": "As mentioned above, we first represent the document using the sentences in its Abstract, Introduc-tion and Conclusion. Then we use two approaches to pre-process the text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Pre-processing", "sec_num": "3.3" }, { "text": "The first pre-processing approach is removing tags and outliers. The original text of the Laysumm dataset has lots of tags such as TITLE, SECTION and PARAGRAPH. We remove all different kinds of tags. Besides, some samples of the Laysumm dataset do not contain an Abstract or Introduction. We regard these samples as outliers and delete them while training the model. The total number of outliers is 23. 
Then, we truncate all input text to a max length of 1024 tokens due to the carrying capacity of the BART model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Pre-processing", "sec_num": "3.3" }, { "text": "We use BART, a denoising autoencoder for pretraining sequence-to-sequence models as our baseline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology 4.1 Baseline", "sec_num": "4" }, { "text": "BART is based on the standard Transformer model (Vaswani et al., 2017) , which can be regarded as generalizing BERT (due to the bidirectional encoder), GPT (with the left-to-right decoder). It is pre-trained on the same corpus as RoBERTa with two tasks: text infilling and sentence permutation. For text infilling, 30% of tokens in each document are masked and the model is trained to recover them at the output. For the sentence permutation, all sentences are permuted as input and the model is supposed to generate the output sentences with the correct order.", "cite_spans": [ { "start": 48, "end": 70, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Methodology 4.1 Baseline", "sec_num": "4" }, { "text": "BART obtains great performance on the summarization task. We use the BART fine-tuned on CNN/DailyMail dataset (Hermann et al., 2015) to initialize our model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology 4.1 Baseline", "sec_num": "4" }, { "text": "There are two canonical strategies for summarization: extractive summarization, which concatenates sentences into the summary and abstractive summarization, which generate novel sentences for the summary. Inspired by the observation that lots of the sentences in human written lay summaries have corresponding sentences in original papers, we use an unsupervised approach to convert the abstractive summaries to extractive labels and train abstractive summarization together with extractive summarization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-Label Summarization Model", "sec_num": "4.2" }, { "text": "To make the ground truth sentence-level binary labels for extractive summarization, which we call ORACLE, we use a greedy algorithm introduced by (Nallapati et al., 2017) . The approach is based on the idea that the selected sentences from the input should be the ones that maximize the Rouge score (Lin and Hovy, 2003) with the respect gold summary.", "cite_spans": [ { "start": 146, "end": 170, "text": "(Nallapati et al., 2017)", "ref_id": "BIBREF15" }, { "start": 299, "end": 319, "text": "(Lin and Hovy, 2003)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Multi-Label Summarization Model", "sec_num": "4.2" }, { "text": "The architecture of our model is shown in Figure 1 , which follows the BART model's structure. The input document is fed into the bidirectional encoder, then the contextual embeddings of the i th [CLS] symbol are used as the sentence representations. After a feedforward neural network, these sentence representations produce a binary distribution about whether they belong to the extractive summary. As for the abstractive summary, it is generated by the autoregressive decoder. The overall loss L is calculated by L = w e L e + L a . 
Here L e and L a refer to the Cross-Entropy loss of extractive and abstractive summary respectively.", "cite_spans": [], "ref_spans": [ { "start": 42, "end": 51, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Multi-Label Summarization Model", "sec_num": "4.2" }, { "text": "Data augmentation has been an effective technique to create new training instances when the training data is not enough, as demonstrated in computer vision as well as for many NLP tasks (Chen et al., 2017; Yang et al., 2019; Yuan et al., 2017) .", "cite_spans": [ { "start": 186, "end": 205, "text": "(Chen et al., 2017;", "ref_id": "BIBREF2" }, { "start": 206, "end": 224, "text": "Yang et al., 2019;", "ref_id": "BIBREF13" }, { "start": 225, "end": 243, "text": "Yuan et al., 2017)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Data Augmentation", "sec_num": "4.3" }, { "text": "Existing data augmentation approaches in NLP tasks can be categorized into retrieval-based methods (Chen et al., 2017; Yang et al., 2019) and generation-based methods (Yuan et al., 2017; Buck et al., 2017) . However, none of these suits our situation, since external sources or auxiliary training data are still required. So we adopted a similar method from (Nema et al., 2017) . A pre-defined vocabulary of 24,822 words was used where each word had been associated with a synonym. Then for each training instance, certain ratios (in our case, 1/9) in each document were randomly selected (except stop words and numerical values) and then replaced with their synonyms found in the vocabulary. If a selected word was not found in the vocabulary, it was added there with the most similar word found based on cosine similarity in the GloVe (Pennington et al., 2014) vocabulary. For each training instance, this process is repeated 9 times to create 9 new documents. But the same summary of the original instance was used in the newly generated instances.", "cite_spans": [ { "start": 99, "end": 118, "text": "(Chen et al., 2017;", "ref_id": "BIBREF2" }, { "start": 119, "end": 137, "text": "Yang et al., 2019)", "ref_id": "BIBREF13" }, { "start": 167, "end": 186, "text": "(Yuan et al., 2017;", "ref_id": "BIBREF25" }, { "start": 187, "end": 205, "text": "Buck et al., 2017)", "ref_id": "BIBREF0" }, { "start": 358, "end": 377, "text": "(Nema et al., 2017)", "ref_id": "BIBREF16" }, { "start": 837, "end": 862, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Data Augmentation", "sec_num": "4.3" }, { "text": "To make use of the ScisummNet dataset, we conduct a two-stage fine-tuning method. In the first stage, we fine-tune the pre-trained BART model on the ScisummNet dataset. We use the Abstract and annotators selected citation sentences as the input and the gold summary as the output. The model is fine-tuned with 20000 iterations before saved. As for the second stage, we use the same settings as we directly fine-tune on the CL-LaySumm 2020 dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Two-Stage Fine-tuning", "sec_num": "4.4" }, { "text": "up seven different experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Two-Stage Fine-tuning", "sec_num": "4.4" }, { "text": "BART (Abs): We only use the Abstract as the input to the BART model. 
BART (Abs+Intro): We use the Abstract and the first paragraph of the Introduction as the input to the BART model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Two-Stage Fine-tuning", "sec_num": "4.4" }, { "text": "BART (Abs+Intro all ): We use the Abstract and the whole Introduction as the input to the BART model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Two-Stage Fine-tuning", "sec_num": "4.4" }, { "text": "BART (Abs+Intro+Con): We use the Abstract, the first paragraph of the Introduction, and the Conclusion (if the paper has) as the input to the BART model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Two-Stage Fine-tuning", "sec_num": "4.4" }, { "text": "BART (Data augmentation): We use the same data as BART (Abs+Intro+Con). For each training sample, we create 9 new input documents by synonym data augmentation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Two-Stage Fine-tuning", "sec_num": "4.4" }, { "text": "BART + Two-stage: We use the same data as BART (Abs+Intro+Con) to the BART model. The two-stage fine-tuning method is introduced in Section 4.4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Two-Stage Fine-tuning", "sec_num": "4.4" }, { "text": "BART + Multi-label: We use the same data as BART (Abs+Intro+Con). In addition, for each sentence in the input, we add [CLS] token at the beginning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Two-Stage Fine-tuning", "sec_num": "4.4" }, { "text": "As for the hyperparameters, we use a dynamic learning rate, warm up 1000 iterations, and decay afterward. We set the batch size to 1 because of the limitation of GPU memory. The gradient will accumulate every ten iterations and we train all models for 6000 iterations on 1 GPU (GTX 1080 Ti). We save the best model that has the highest Rouge1-F1 score based on the validation set. For the BART model, we use the implementation from the huggingface 3 . We use the BART large model pre-trained on CNN/DailyMail dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Two-Stage Fine-tuning", "sec_num": "4.4" }, { "text": "The results are shown in Table 1 and we analyze them from three aspects. Besides, we also generate a Lay Summary of our paper, which is presented in the appendix A.", "cite_spans": [], "ref_spans": [ { "start": 25, "end": 32, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Result Analysis", "sec_num": "6" }, { "text": "Different inputs to the model. The experiment results of BART (Abs), BART (Abs+Intro), and BART (Abs+Intro+Con) show by adding the Introduction and Conclusion to the input, the models' performance improves consistently. However, comparing with the results from BART (Abs+Intro) and BART (Abs+Intro all ), using the whole Introduction 3 https://github.com/huggingface/transformers rather than the first paragraph of the Introduction decreases the performance on Rouge1 score. We think it is because the CL-LaySumm 2020 task requires to make a relatively short summary, less than 150 words. If the input is too long, it makes the model harder to summarize because longer input contains more noisy data. Since the CL-LaySumm 2020 dataset is also small, the model doesn't have enough samples to learn the task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Result Analysis", "sec_num": "6" }, { "text": "Two-stage fine-tuning and Data Augmentation. 
The experimental results show that two-stage finetuning doesn't help to improve the model's performance. After checking the details of ScisummNet, we find the corpus comes from ACL Anthology Network (AAN) , which means all data relates to computational linguistics. In contrast, the CL-LaySumm 2020 dataset use papers from a variety of domains including biology and medicine. The Statistical differences between these two datasets make the model hard to learn prior knowledge that can be utilized in CL-LaySumm 2020 task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Result Analysis", "sec_num": "6" }, { "text": "As for the Data Augmentation, the model performance also doesn't increase as we expected, which contradicts the results from the original paper (Nema et al., 2017) . However, the same method also fails in (Laskar et al., 2020) , which also adopted a large pre-trained model as a startpoint for fine-tuning. So we think the possible reason might be that large pre-trained models are less robust to noisy input. Our synonyms replacement method is too simple as well as unsupervised. On one hand, it can increase the vocabulary diversity of the training data without changing the semantic meaning a lot, but on the other hand, the quality especially the grammar of the generated instances can not be guaranteed to be correct. Thus, some noise might be introduced and decreases the model performance when we augment the data.", "cite_spans": [ { "start": 144, "end": 163, "text": "(Nema et al., 2017)", "ref_id": "BIBREF16" }, { "start": 205, "end": 226, "text": "(Laskar et al., 2020)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Result Analysis", "sec_num": "6" }, { "text": "Multi-label summarization. Comparing with BART (Abs+Intro+Con) and BART + Multi-label models, we find that with multi labels, the Rouge1-F1 score is better but the Recall score is lower, which means that the precision increase a lot. We think that with the extra supervision of sentence labels, the model can learn a better sentence understanding. As a result, the model is able to extract important content from the input which helps upper the F1 and Precision scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Result Analysis", "sec_num": "6" }, { "text": "In this paper, we showcased how different inputs, data augmentation, training strategy, and sentence labels influence the lay summarization task. We introduce a new method to utilize sentence labels as another supervision signal while training BART based model. Experimental results show our models can generate better summaries evaluated by the Rouge1-F1 score.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "https://github.com/TysonYu/Laysumm", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "ExperimentsDuring the training phase, we randomly select 90% of the CL-LaySumm 2020 Dataset for training and 10% for validation. If a data sample doesn't contain an Abstract or Introduction, we don't include it in training or validation. To find the optimal architecture for this task within the models we have, we set", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "In the CL-LaySumm 2020 shared task, our model achieves 46.00% Rouge1-F1 score. In this paper, we build a lay summary generation system based on the BART model. 
We leverage sentence labels as extra supervision signals to improve the performance of lay summarization. Experimental results show that leveraging sentence labels can improve the Lay summary generation performance. The code will be released on Github.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.1 The Lay Summary of this Paper", "sec_num": null }, { "text": "The summary above is generated by our own system with Abstract, Introduction and Conclusion from this paper. Although many sentences are copied from the original text, they are well organized and coherent. Besides, the content of the summary also conveys the topic and the contribution of this paper. In conclusion, our system can produce accurate and readable summaries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.2 Observation", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Ask the right questions: Active question reformulation with reinforcement learning", "authors": [ { "first": "Christian", "middle": [], "last": "Buck", "suffix": "" }, { "first": "Jannis", "middle": [], "last": "Bulian", "suffix": "" }, { "first": "Massimiliano", "middle": [], "last": "Ciaramita", "suffix": "" }, { "first": "Wojciech", "middle": [], "last": "Gajewski", "suffix": "" }, { "first": "Andrea", "middle": [], "last": "Gesmundo", "suffix": "" }, { "first": "Neil", "middle": [], "last": "Houlsby", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1705.07830" ] }, "num": null, "urls": [], "raw_text": "Christian Buck, Jannis Bulian, Massimiliano Cia- ramita, Wojciech Gajewski, Andrea Gesmundo, Neil Houlsby, and Wei Wang. 2017. Ask the right ques- tions: Active question reformulation with reinforce- ment learning. arXiv preprint arXiv:1705.07830.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Overview and insights from scientific document summarization shared tasks 2020: Clscisumm, laysumm and longsumm", "authors": [ { "first": "G", "middle": [], "last": "Feigenblat", "suffix": "" }, { "first": "", "middle": [ "E" ], "last": "Hovy", "suffix": "" }, { "first": "A", "middle": [], "last": "Ravichander", "suffix": "" }, { "first": "M", "middle": [], "last": "Shmueli-Scheuer", "suffix": "" }, { "first": "A", "middle": [], "last": "De Waard", "suffix": "" }, { "first": "M", "middle": [ "K" ], "last": "Chandrasekaran", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the First Workshop on Scholarly Document Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Feigenblat G. Hovy. E. Ravichander A. Shmueli- Scheuer M. De Waard A. Chandrasekaran, M. K. 2020. Overview and insights from scientific document summarization shared tasks 2020: Cl- scisumm, laysumm and longsumm. 
In In Proceed- ings of the First Workshop on Scholarly Document Processing (SDP 2020).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Reading wikipedia to answer open-domain questions", "authors": [ { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Fisch", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1704.00051" ] }, "num": null, "urls": [], "raw_text": "Danqi Chen, Adam Fisch, Jason Weston, and An- toine Bordes. 2017. Reading wikipedia to an- swer open-domain questions. arXiv preprint arXiv:1704.00051.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A discourse-aware attention model for abstractive summarization of long documents", "authors": [ { "first": "Arman", "middle": [], "last": "Cohan", "suffix": "" }, { "first": "Franck", "middle": [], "last": "Dernoncourt", "suffix": "" }, { "first": "Soon", "middle": [], "last": "Doo", "suffix": "" }, { "first": "Trung", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Seokhwan", "middle": [], "last": "Bui", "suffix": "" }, { "first": "Walter", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Nazli", "middle": [], "last": "Chang", "suffix": "" }, { "first": "", "middle": [], "last": "Goharian", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1804.05685" ] }, "num": null, "urls": [], "raw_text": "Arman Cohan, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Seokhwan Kim, Walter Chang, and Na- zli Goharian. 2018. A discourse-aware attention model for abstractive summarization of long docu- ments. arXiv preprint arXiv:1804.05685.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Scientific article summarization using citation-context and article's discourse structure", "authors": [ { "first": "Arman", "middle": [], "last": "Cohan", "suffix": "" }, { "first": "Nazli", "middle": [], "last": "Goharian", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1704.06619" ] }, "num": null, "urls": [], "raw_text": "Arman Cohan and Nazli Goharian. 2017. Scien- tific article summarization using citation-context and article's discourse structure. arXiv preprint arXiv:1704.06619.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Scientific document summarization via citation contextualization and scientific discourse", "authors": [ { "first": "Arman", "middle": [], "last": "Cohan", "suffix": "" }, { "first": "Nazli", "middle": [], "last": "Goharian", "suffix": "" } ], "year": 2018, "venue": "International Journal on Digital Libraries", "volume": "19", "issue": "2-3", "pages": "287--303", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arman Cohan and Nazli Goharian. 2018. Scientific document summarization via citation contextualiza- tion and scientific discourse. 
International Journal on Digital Libraries, 19(2-3):287-303.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Unified language model pre-training for natural language understanding and generation", "authors": [ { "first": "Li", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Wenhui", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Furu", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Hsiao-Wuen", "middle": [], "last": "Hon", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "13063--13075", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xi- aodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understand- ing and generation. In Advances in Neural Informa- tion Processing Systems, pages 13063-13075.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Teaching machines to read and comprehend", "authors": [ { "first": "Karl", "middle": [], "last": "Moritz Hermann", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Kocisky", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Grefenstette", "suffix": "" }, { "first": "Lasse", "middle": [], "last": "Espeholt", "suffix": "" }, { "first": "Will", "middle": [], "last": "Kay", "suffix": "" }, { "first": "Mustafa", "middle": [], "last": "Suleyman", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" } ], "year": 2015, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "1693--1701", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karl Moritz Hermann, Tomas Kocisky, Edward Grefen- stette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. 
In Advances in neural information processing systems, pages 1693-1701.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Query focused abstractive summarization via incorporating query relevance and transfer learning with transformer models", "authors": [ { "first": "Enamul", "middle": [], "last": "Md Tahmid Rahman Laskar", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Hoque", "suffix": "" }, { "first": "", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2020, "venue": "Canadian Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "342--348", "other_ids": {}, "num": null, "urls": [], "raw_text": "Md Tahmid Rahman Laskar, Enamul Hoque, and Jimmy Huang. 2020. Query focused abstrac- tive summarization via incorporating query rele- vance and transfer learning with transformer models. In Canadian Conference on Artificial Intelligence, pages 342-348. Springer.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Talksumm: A dataset and scalable annotation method for scientific paper summarization based on conference talks", "authors": [ { "first": "Guy", "middle": [], "last": "Lev", "suffix": "" }, { "first": "Michal", "middle": [], "last": "Shmueli-Scheuer", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Herzig", "suffix": "" }, { "first": "Achiya", "middle": [], "last": "Jerbi", "suffix": "" }, { "first": "David", "middle": [], "last": "Konopnicki", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1906.01351" ] }, "num": null, "urls": [], "raw_text": "Guy Lev, Michal Shmueli-Scheuer, Jonathan Herzig, Achiya Jerbi, and David Konopnicki. 2019. Talk- summ: A dataset and scalable annotation method for scientific paper summarization based on conference talks. arXiv preprint arXiv:1906.01351.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "authors": [ { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal ; Abdelrahman Mohamed", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Ves", "middle": [], "last": "Stoyanov", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.13461" ] }, "num": null, "urls": [], "raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Mar- jan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Automatic evaluation of summaries using n-gram cooccurrence statistics", "authors": [ { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "150--157", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chin-Yew Lin and Eduard Hovy. 2003. 
Auto- matic evaluation of summaries using n-gram co- occurrence statistics. In Proceedings of the 2003 Hu- man Language Technology Conference of the North American Chapter of the Association for Computa- tional Linguistics, pages 150-157.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Text summarization with pretrained encoders", "authors": [ { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1908.08345" ] }, "num": null, "urls": [], "raw_text": "Yang Liu and Mirella Lapata. 2019. Text summa- rization with pretrained encoders. arXiv preprint arXiv:1908.08345.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Roberta: A robustly optimized bert pretraining approach", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.11692" ] }, "num": null, "urls": [], "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Summarunner: A recurrent neural network based sequence model for extractive summarization of documents", "authors": [ { "first": "Ramesh", "middle": [], "last": "Nallapati", "suffix": "" }, { "first": "Feifei", "middle": [], "last": "Zhai", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2017, "venue": "Thirty-First AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. Summarunner: A recurrent neural network based se- quence model for extractive summarization of docu- ments. In Thirty-First AAAI Conference on Artificial Intelligence.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Diversity driven attention model for query-based abstractive summarization", "authors": [ { "first": "Preksha", "middle": [], "last": "Nema", "suffix": "" }, { "first": "Mitesh", "middle": [], "last": "Khapra", "suffix": "" }, { "first": "Anirban", "middle": [], "last": "Laha", "suffix": "" }, { "first": "Balaraman", "middle": [], "last": "Ravindran", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1704.08300" ] }, "num": null, "urls": [], "raw_text": "Preksha Nema, Mitesh Khapra, Anirban Laha, and Balaraman Ravindran. 2017. Diversity driven atten- tion model for query-based abstractive summariza- tion. 
arXiv preprint arXiv:1704.08300.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 conference on empirical methods in natural language process- ing (EMNLP), pages 1532-1543.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Generating extractive summaries of scientific paradigms", "authors": [ { "first": "", "middle": [], "last": "Vahed Qazvinian", "suffix": "" }, { "first": "", "middle": [], "last": "Dragomir R Radev", "suffix": "" }, { "first": "M", "middle": [], "last": "Saif", "suffix": "" }, { "first": "Bonnie", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "David", "middle": [], "last": "Dorr", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Zajic", "suffix": "" }, { "first": "Taesun", "middle": [], "last": "Whidby", "suffix": "" }, { "first": "", "middle": [], "last": "Moon", "suffix": "" } ], "year": 2013, "venue": "Journal of Artificial Intelligence Research", "volume": "46", "issue": "", "pages": "165--201", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vahed Qazvinian, Dragomir R Radev, Saif M Moham- mad, Bonnie Dorr, David Zajic, Michael Whidby, and Taesun Moon. 2013. Generating extractive sum- maries of scientific paradigms. Journal of Artificial Intelligence Research, 46:165-201.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "The acl anthology network corpus", "authors": [ { "first": "Pradeep", "middle": [], "last": "Dragomir R Radev", "suffix": "" }, { "first": "Vahed", "middle": [], "last": "Muthukrishnan", "suffix": "" }, { "first": "Amjad", "middle": [], "last": "Qazvinian", "suffix": "" }, { "first": "", "middle": [], "last": "Abu-Jbara", "suffix": "" } ], "year": 2013, "venue": "Language Resources and Evaluation", "volume": "47", "issue": "4", "pages": "919--944", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dragomir R Radev, Pradeep Muthukrishnan, Vahed Qazvinian, and Amjad Abu-Jbara. 2013. The acl an- thology network corpus. Language Resources and Evaluation, 47(4):919-944.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Get to the point: Summarization with pointer-generator networks", "authors": [ { "first": "Abigail", "middle": [], "last": "See", "suffix": "" }, { "first": "J", "middle": [], "last": "Peter", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Liu", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1704.04368" ] }, "num": null, "urls": [], "raw_text": "Abigail See, Peter J Liu, and Christopher D Man- ning. 2017. Get to the point: Summarization with pointer-generator networks. 
arXiv preprint arXiv:1704.04368.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "On extractive and abstractive neural document summarization with transformer language models", "authors": [ { "first": "Sandeep", "middle": [], "last": "Subramanian", "suffix": "" }, { "first": "Raymond", "middle": [], "last": "Li", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Pilault", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Pal", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1909.03186" ] }, "num": null, "urls": [], "raw_text": "Sandeep Subramanian, Raymond Li, Jonathan Pi- lault, and Christopher Pal. 2019. On extrac- tive and abstractive neural document summarization with transformer language models. arXiv preprint arXiv:1909.03186.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Data augmentation for bert fine-tuning in open-domain question answering", "authors": [ { "first": "Wei", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Yuqing", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Luchen", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Kun", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Li", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1904.06652" ] }, "num": null, "urls": [], "raw_text": "Wei Yang, Yuqing Xie, Luchen Tan, Kun Xiong, Ming Li, and Jimmy Lin. 2019. Data augmentation for bert fine-tuning in open-domain question answering. 
arXiv preprint arXiv:1904.06652.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Scisummnet: A large annotated corpus and content-impact models for scientific paper summarization with citation networks", "authors": [ { "first": "Michihiro", "middle": [], "last": "Yasunaga", "suffix": "" }, { "first": "Jungo", "middle": [], "last": "Kasai", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Alexander", "middle": [ "R" ], "last": "Fabbri", "suffix": "" }, { "first": "Irene", "middle": [], "last": "Li", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Friedman", "suffix": "" }, { "first": "", "middle": [], "last": "Dragomir R Radev", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "33", "issue": "", "pages": "7386--7393", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michihiro Yasunaga, Jungo Kasai, Rui Zhang, Alexan- der R Fabbri, Irene Li, Dan Friedman, and Dragomir R Radev. 2019. Scisummnet: A large an- notated corpus and content-impact models for scien- tific paper summarization with citation networks. In Proceedings of the AAAI Conference on Artificial In- telligence, volume 33, pages 7386-7393.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Machine comprehension by text-to-text neural question generation", "authors": [ { "first": "Xingdi", "middle": [], "last": "Yuan", "suffix": "" }, { "first": "Tong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Caglar", "middle": [], "last": "Gulcehre", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Sordoni", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Bachman", "suffix": "" }, { "first": "Sandeep", "middle": [], "last": "Subramanian", "suffix": "" }, { "first": "Saizheng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Trischler", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1705.02012" ] }, "num": null, "urls": [], "raw_text": "Xingdi Yuan, Tong Wang, Caglar Gulcehre, Alessan- dro Sordoni, Philip Bachman, Sandeep Subrama- nian, Saizheng Zhang, and Adam Trischler. 2017. Machine comprehension by text-to-text neural ques- tion generation. arXiv preprint arXiv:1705.02012.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Pegasus: Pre-training with extracted gap-sentences for abstractive summarization", "authors": [ { "first": "Jingqing", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yao", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Saleh", "suffix": "" }, { "first": "Peter J", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1912.08777" ] }, "num": null, "urls": [], "raw_text": "Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Pe- ter J Liu. 2019. Pegasus: Pre-training with extracted gap-sentences for abstractive summarization. arXiv preprint arXiv:1912.08777.", "links": null } }, "ref_entries": { "TABREF0": { "num": null, "text": "Multi-label summarization model. The left part is based on a bidirectional encoder and the right part is an autoregressive decoder.", "type_str": "table", "html": null, "content": "
[Figure 1 schematic: the input document, with a [CLS] token before each sentence, feeds the bidirectional encoder; an FFN over the [CLS] representations performs sentence label prediction, while the autoregressive decoder generates the output summary.]
Model                    | Rouge1-F1 | Rouge1-Recall | Rouge2-F1 | Rouge2-Recall | RougeL-F1 | RougeL-Recall
BART (Abs)               | 0.4350    | 0.4697        | 0.1807    | 0.1968        | 0.2722    | 0.2934
BART (Abs+Intro)         | 0.4518    | 0.4923        | 0.1977    | 0.2135        | 0.2820    | 0.3061
BART (Abs+Intro_all)     | 0.4443    | 0.4816        | 0.1991    | 0.2142        | 0.2825    | 0.3040
BART (Abs+Intro+Con)     | 0.4536    | 0.5171        | 0.2016    | 0.2271        | 0.2864    | 0.3243
BART (Data augmentation) | 0.4490    | 0.4887        | 0.1972    | 0.2136        | 0.2895    | 0.3139
BART + Two-stage         | 0.4529    | 0.4882        | 0.2067    | 0.2224        | 0.2929    | 0.3140
BART + Multi-label       | 0.4600    | 0.5013        | 0.2070    | 0.2223        | 0.2876    | 0.3104
" }, "TABREF1": { "num": null, "text": "Our results on CL-LaySumm 2020 shared task.", "type_str": "table", "html": null, "content": "" } } } }