{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:36:20.752387Z" }, "title": "CIST@CL-SciSumm 2020, LongSumm 2020: Automatic Scientific Document Summarization", "authors": [ { "first": "Lei", "middle": [], "last": "Li", "suffix": "", "affiliation": {}, "email": "leili@bupt.edu.cn" }, { "first": "Yang", "middle": [], "last": "Xie", "suffix": "", "affiliation": {}, "email": "xieyang@bupt.edu.cn" }, { "first": "Wei", "middle": [], "last": "Liu", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Yinan", "middle": [], "last": "Liu", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Yafei", "middle": [], "last": "Jiang", "suffix": "", "affiliation": {}, "email": "jiangyafei@bupt.edu.cn" }, { "first": "Siya", "middle": [], "last": "Qi", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Xingyuan", "middle": [], "last": "Li", "suffix": "", "affiliation": {}, "email": "lixingyuan@bupt.edu.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Our system participates in two shared tasks, CL-SciSumm 2020 and LongSumm 2020. In the CL-SciSumm shared task, based on our previous work, we apply more machine learning methods on position features and content features for facet classification in Task1B. And GCN is introduced in Task2 to perform extractive summarization. In the LongSumm shared task, we integrate both the extractive and abstractive summarization ways. Three methods were tested which are T5 Fine-tuning, DPPs Sampling, and GRU-GCN/GAT.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Our system participates in two shared tasks, CL-SciSumm 2020 and LongSumm 2020. In the CL-SciSumm shared task, based on our previous work, we apply more machine learning methods on position features and content features for facet classification in Task1B. And GCN is introduced in Task2 to perform extractive summarization. In the LongSumm shared task, we integrate both the extractive and abstractive summarization ways. Three methods were tested which are T5 Fine-tuning, DPPs Sampling, and GRU-GCN/GAT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The increasing scientific documents published on the Internet allow researchers to find more and more documents of interest. However, how to quickly and efficiently obtain the most important facts or ideas of a document is a big challenge. Summarization of scientific documents can mitigate this issue by presenting a brief summary of the whole document to researchers. This year, we participate in two shared tasks of SDP 2020 (Chandrasekaran et al., forthcoming). The CL-SciSumm shared task is the first medium-scale shared task on scientific document summarization in the field of Computational Linguistics and aims to generate a structured summary for the RP (Reference Paper) with the utilization of 10 or more CPs (Citing Papers). The LongSumm shared task opted to leverage blogs created by researchers in the NLP (Natural Language Processing) and Machine learning communities and use these summaries as reference summaries to generate the abstractive and extractive summaries for scientific papers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we will introduce our methods, experiments, and results of two shared tasks. 
For the CL-SciSumm shared task, based on our previous work (Li et al., 2019) , we continue to leverage similarity calculation on multiple features to perform citation linkage in Task1A. In Task1B, we first extract position features and content features of RT (Reference Text) and CT (Citation Text), then apply different machine learning methods to classify the facet. In Task2, we apply DPPs (Determinantal Point Processes) and GCN (Graph Convolutional Network) to perform extractive summarization this time. As for the LongSumm shared task, we retain those extractive methods in the Task2 of the CL-SciSumm shared task as the basis for our summarization system. Furthermore, we also introduce the GAT (Graph Attention Network) and apply an abstractive summarization method based on finetuning.", "cite_spans": [ { "start": 151, "end": 168, "text": "(Li et al., 2019)", "ref_id": "BIBREF17" }, { "start": 521, "end": 554, "text": "GCN (Graph Convolutional Network)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The Task1A of CL-SciSumm is a citation linkage task. The most intuitive method is to calculate and compare the similarity between the CTS (Citation Text Spans) and every text span in RP (Reference Paper), and select the RT with the highest similarity as the result. There are many ways to calculate the similarity, not only traditional IDF and Jaccard similarity, but also Levenshtein distance (Yujian and Bo, 2007) . The basic characteristics of words often play an important role in similarity calculating. As the size of the data set continues to grow, neural network language models such as Word2vec (Goldberg and Levy, 2014) and BERT (Devlin et al., 2018) that contain the semantic similarity information in word-level can make a huge improvement. But these word embedding methods will gradually smooth the difference between keywords in the process of calculating, so WMD (Kusner et al., 2015) was proposed to pay attention to the feature mapping between words. In addition to improvement in feature extraction, researchers have also proposed many new algorithms to process features, such as introducing CNN (Kim, 2014) (Dos Santos and Gatti, 2014) into the NLP field to make more complex judgments on feature vectors, or using MatchPyramid (Pang et al., 2016) to process the similarity comparison focusing on the similarity between words.", "cite_spans": [ { "start": 394, "end": 415, "text": "(Yujian and Bo, 2007)", "ref_id": "BIBREF28" }, { "start": 639, "end": 660, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF3" }, { "start": 878, "end": 899, "text": "(Kusner et al., 2015)", "ref_id": "BIBREF12" }, { "start": 1114, "end": 1125, "text": "(Kim, 2014)", "ref_id": "BIBREF9" }, { "start": 1126, "end": 1154, "text": "(Dos Santos and Gatti, 2014)", "ref_id": "BIBREF4" }, { "start": 1247, "end": 1266, "text": "(Pang et al., 2016)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "Task1B of CL-SciSumm is essentially a classification task. Classification methods are mainly divided into two parts: Rule-based methods and supervised machine learning methods. Traditional supervised machine learning methods like LR (Logistic Regression) (Park, 2013) , Adaboost (Freund and Schapire, 1997) and XGBoost (Chen and Guestrin, 2016) can be easily applied for this task. 
Besides, the neural networks, such as TextCNN (Kim, 2014) , TextRNN (Liu et al., 2016) , TextRCNN (Lai et al., 2015) , FastText (Joulin et al., 2016) and Char-CNN (Zhang et al., 2015) , can work directly on text, and generate dense vectors for classification.", "cite_spans": [ { "start": 255, "end": 267, "text": "(Park, 2013)", "ref_id": "BIBREF21" }, { "start": 279, "end": 306, "text": "(Freund and Schapire, 1997)", "ref_id": "BIBREF5" }, { "start": 319, "end": 344, "text": "(Chen and Guestrin, 2016)", "ref_id": "BIBREF1" }, { "start": 428, "end": 439, "text": "(Kim, 2014)", "ref_id": "BIBREF9" }, { "start": 450, "end": 468, "text": "(Liu et al., 2016)", "ref_id": "BIBREF18" }, { "start": 480, "end": 498, "text": "(Lai et al., 2015)", "ref_id": "BIBREF13" }, { "start": 510, "end": 531, "text": "(Joulin et al., 2016)", "ref_id": "BIBREF8" }, { "start": 545, "end": 565, "text": "(Zhang et al., 2015)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "The Task2 of CL-SciSumm and the LongSumm shared task are both summarization task. Recently, the research on automatic summarization tasks has mainly focused on two ways: extractive summarization and abstractive summarization. In the field of extractive summarization, We studied the sampling process used in DPPs (Kulesza and Taskar, 2012) where we calculated the kernel matrix using WMD sentence similarity for further sampling (Li et al., 2018) . Zhong et al. (2019) explored how to make the system generate higher quality summaries. They selected three metrics: network architecture, knowledge transfer, and learning mode, and analyzed the impact of the three metrics on the quality of summary generation through experiments.", "cite_spans": [ { "start": 313, "end": 339, "text": "(Kulesza and Taskar, 2012)", "ref_id": "BIBREF11" }, { "start": 429, "end": 446, "text": "(Li et al., 2018)", "ref_id": "BIBREF16" }, { "start": 449, "end": 468, "text": "Zhong et al. (2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "GCN is a powerful neural network framework processing graph structural data. Defferrard et al. (2016) extended the traditional CNN to non-Euclidean space and introduce local spectral filtering to optimize the propagation process during the training of the standard graph neural network. Kipf and Welling (2017) further studied the application of GCN in semi-supervised classification. GAT (Veli\u010dkovi\u0107 et al., 2017) scored based on its cluster-aware representations, and sentences with high score were chosen as summaries.", "cite_spans": [ { "start": 77, "end": 101, "text": "Defferrard et al. (2016)", "ref_id": "BIBREF2" }, { "start": 389, "end": 414, "text": "(Veli\u010dkovi\u0107 et al., 2017)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "As for abstractive summarization, Rush et al. (2015) introduced an attention mechanism to the Seq2Seq model, which enables the model to focus on words in specific positions in the original text via the weight matrix when generating abstracts, thus avoiding the problem of losing too much information due to long sentences. Since BERT (Devlin et al., 2018) has achieved great success in the field of NLP, the method of pre-training and fine-tuning has become a new paradigm. Researchers began to explore how to apply pre-trained models to natural language generation. 
At first, researchers tried to replace the encoder with a pre-trained BERT (Liu and Lapata, 2019) , then more and more pre-training target functions for the Seq2Seq model were explored like masked generation (Song et al., 2019) , denoising (Lewis et al., 2019 ), text-to-text (Raffel et al., 2019a) . Some specially designed tasks for summarization have also been proposed, such as extracting gap-sentences (Zhang et al., 2019) . We use the gap-sentence method in (Zhang et al., 2019) to combine and transform all the data, then utilize the T5 model (Lewis et al., 2019) to fine-tune and generate the summary.", "cite_spans": [ { "start": 34, "end": 52, "text": "Rush et al. (2015)", "ref_id": "BIBREF24" }, { "start": 334, "end": 355, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF3" }, { "start": 642, "end": 664, "text": "(Liu and Lapata, 2019)", "ref_id": "BIBREF19" }, { "start": 775, "end": 794, "text": "(Song et al., 2019)", "ref_id": "BIBREF25" }, { "start": 807, "end": 826, "text": "(Lewis et al., 2019", "ref_id": "BIBREF15" }, { "start": 843, "end": 865, "text": "(Raffel et al., 2019a)", "ref_id": "BIBREF22" }, { "start": 974, "end": 994, "text": "(Zhang et al., 2019)", "ref_id": "BIBREF29" }, { "start": 1031, "end": 1051, "text": "(Zhang et al., 2019)", "ref_id": "BIBREF29" }, { "start": 1117, "end": 1137, "text": "(Lewis et al., 2019)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "3.1 CL-SciSumm", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3" }, { "text": "As shown in Figure 1 , the citation linkage task, Task1A of CL-SciSumm, contains two steps: feature extraction and content linkage. In the feature extraction step, we perform similarity calculation based on different feature extraction ways for each RT and every CT (Citation Text) in CTS (Citation Text Spans), where some traditional features will be used, such as IDF similarity and Jaccard simi- Task1B. larity. Additionally sentence context information is used on the basis of these simple features in order to more comprehensively reflect the similarity information of the sentence. Besides, we also use the Lin and Jcn features of WordNet, word-cos, Word vector, and LDA-Jaccard (Li et al., 2019) . LDA-Jaccard performs better than LDA on sparse topics, and it pays more attention to the union set of the same topic that both two sentences have. In the content linkage step, we add all the scores that each CT belonging to the same CTS, then sort all RTs by the final scores, and take the first N results as the final answer of Task1A. We use four multi-feature fusion methods: Voting-1.2, Voting-2.1, Jaccard-Focused, and Jaccard-Cascade based on our last year work (Li et al., 2019) by increasing the training set and adjusting the hyper-parameters.", "cite_spans": [ { "start": 685, "end": 702, "text": "(Li et al., 2019)", "ref_id": "BIBREF17" }, { "start": 1173, "end": 1190, "text": "(Li et al., 2019)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 12, "end": 20, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 399, "end": 406, "text": "Task1B.", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Task1A", "sec_num": "3.1.1" }, { "text": "Our system applies multiple machine learning methods on multiple features representing different aspects of CT and RT. 
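As a rough illustration (a hypothetical sketch assuming scikit-learn-style classifiers, not the exact system code), several classifiers trained on separate position and content feature groups can be combined by majority voting as follows; the concrete features themselves are described below.

```python
# Hypothetical sketch: majority voting over classifiers trained on different
# feature groups (position vs. content) for facet classification.
# X_pos, X_con and y are placeholder arrays; GradientBoosting stands in for XGBoost.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier

def train_voters(X_pos, X_con, y):
    # Each voter remembers which feature group it was trained on.
    return [
        ("pos", LogisticRegression(max_iter=1000).fit(X_pos, y)),
        ("pos", AdaBoostClassifier(n_estimators=100).fit(X_pos, y)),
        ("pos", GradientBoostingClassifier().fit(X_pos, y)),
        ("con", GradientBoostingClassifier().fit(X_con, y)),
    ]

def predict_by_vote(voters, x_pos, x_con):
    votes = [
        clf.predict((x_pos if group == "pos" else x_con).reshape(1, -1))[0]
        for group, clf in voters
    ]
    return int(np.mean(votes) >= 0.5)  # simple majority over binary votes
```

The actual base learners used here (LR, XGBoost, and Adaboost on position features, plus FastText on content features) are detailed in the following paragraphs.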
Since a scientific paper is well-structured and each section represents a different facet of the document, our first motivation is to leverage the position feature of CT and RT to classify which facet the citation belongs to. As shown in Figure 2 , the position features are the relative positions of CT and RT, the relative positions of the sections that CT and RT belong to, and the section title text. Suppose the section id is sid, the total amount of sections is tsid, the sentence id is ssid, and the total amount of sentences is tssid. Then, the section relative position(SecP os) of CT or RT is sid/tsid, and the sentence relative position(SenP os) of CT or RT is ssid/tssid. Since the section title text(ST T ) of CT or RT also implicates the role it plays in the whole paper, we leverage TF-IDF to select the top 189 words as the keywords where each word occurs at least 3 times in the training set, then convert the section title to a one-hot vector. Then we train LR, XGBoost, and Adaboost on the position features.", "cite_spans": [], "ref_spans": [ { "start": 357, "end": 365, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Task1B", "sec_num": "3.1.2" }, { "text": "Next, we focus on the aspect of text content since the texts of CT and RT indicate the content in detail. First, the texts of CT and RT are preprocessed, such as extracting text from XML file, stop word removal, and word tokenization. Then they are represented by word embeddings and mapped to a dense vector space by FastText. The architecture of FastText is shown in Figure 3 where", "cite_spans": [], "ref_spans": [ { "start": 369, "end": 377, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Task1B", "sec_num": "3.1.2" }, { "text": "x 1 , x 2 , ..., x N \u22121 ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task1B", "sec_num": "3.1.2" }, { "text": "x N represent the n-gram features and each feature is the average of word embeddings. The hidden layer is obtained from the average of", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task1B", "sec_num": "3.1.2" }, { "text": "x 1 , x 2 , ..., x N \u22121 , x N .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task1B", "sec_num": "3.1.2" }, { "text": "Then the output layer is fully connected to the hidden layer and finally obtain the predicted label by the hierarchical softmax. The reason that we choose FastText as our classifier based on content features is that FastText is relatively lighter than other text classifiers and can avoid overfitting since the training set is small.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task1B", "sec_num": "3.1.2" }, { "text": "Task2 is a summarization task, and we apply two extractive methods in this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task2", "sec_num": "3.1.3" }, { "text": "This method assumes that each document is a set of sentences, and the process of extracting the summary is to extract the highest quality subset from the set of sentences. To achieve this extraction process, we first represent the document as a matrix L representing the relationship between sentences and then apply the DPPs sampling algorithm to extract candidate sentences. 
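Assuming the kernel matrix L is already available (its construction is described next), the following is a minimal sketch of a greedy, MAP-style selection from L; it is an illustrative approximation rather than the exact DPP sampling procedure, for which we follow Kulesza and Taskar (2012).

```python
# Hypothetical sketch: greedy subset selection from a DPP kernel L.
# Each step adds the sentence that maximizes the log-determinant of the
# selected submatrix, favoring high-quality but mutually dissimilar sentences.
import numpy as np

def greedy_select(L, k):
    """L: (n, n) positive semi-definite kernel over sentences; k: summary size."""
    n = L.shape[0]
    selected = []
    for _ in range(k):
        best_i, best_gain = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            idx = np.ix_(selected + [i], selected + [i])
            sign, logdet = np.linalg.slogdet(L[idx])
            if sign > 0 and logdet > best_gain:
                best_i, best_gain = i, logdet
        if best_i is None:
            break
        selected.append(best_i)
    return selected
```

Because the determinant rewards both large diagonal entries (quality) and near-orthogonal rows (diversity), the selected subset balances the two criteria.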
The matrix L is constructed by the Quality-Diversity (QD) model and Sent2Vec (SV) model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extractive summarization based on DPPs:", "sec_num": null }, { "text": "In the Quality-Diversity model, matrix L can be calculated by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extractive summarization based on DPPs:", "sec_num": null }, { "text": "L ij = q i Sim ij q j", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extractive summarization based on DPPs:", "sec_num": null }, { "text": "where q i is the quality of each sentence which can be calculated by the features we selected, such as Sentence Length (SL), Sentence Position (SP) and Sentence Coverage (SC). Sim ij represents the similarity between sentences, which can be imple- mented as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extractive summarization based on DPPs:", "sec_num": null }, { "text": "Sim ij = \u03d5 T i \u03d5 j \u2208 [0, 1]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extractive summarization based on DPPs:", "sec_num": null }, { "text": "where \u03d5 i is the diversity vector of a single sentence. In the Sent2Vec model, we construct matrix L by L ij = B T i B j where B is the sentence vector obtained from the Sent2Vec model. By constructing matrix L, we can apply the DPPs sampling algorithm to select sentences, the extracted summaries have both high-quality and low-similarity. The details of DPPs can be referred to the work of Kulesza and Taskar (2012) .", "cite_spans": [ { "start": 392, "end": 417, "text": "Kulesza and Taskar (2012)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Extractive summarization based on DPPs:", "sec_num": null }, { "text": "We propose an extractive summarization method based on GCN and GAT ( Figure 5 ). As shown in Figure 4 , we first build a sentence relation graph based on sentence similarity, calculated by cosine similarity. The similarity graph can objectively reflect the association between sentences, including keywords and sentence similarity information. The graphs and low-level sentence representations compressed by GRU are fed into GCN and GAT. Each node in the undirected graph is a sentence, which is connected to another sentence if their similarity is greater than 0.2, and the origin node feature is the last hidden layer of GRU. Graph convolution can leverage the feature information of the node itself and the structure information of the graph. In the L-layer convolution network, H (l) represents the hidden features of the l th layer, parameterized by a weight matrix W (l) . And\u00c3 is symmetrically normalized from the graph adjacency matrix A. After a non-linear function(ReLU), we obtain advanced representations as the final scoring features.", "cite_spans": [ { "start": 785, "end": 788, "text": "(l)", "ref_id": null }, { "start": 874, "end": 877, "text": "(l)", "ref_id": null } ], "ref_spans": [ { "start": 69, "end": 77, "text": "Figure 5", "ref_id": null }, { "start": 93, "end": 102, "text": "Figure 4", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Extractive summarization based on GNN:", "sec_num": null }, { "text": "f (H (l) , A) = \u03c3(\u00c3H (l) W (l) ) Figure 5 : Left: Multi-head attention (with 3 heads) computations apply on node 1 and its neighborhood. h 1 is obtained by concatenating or averaging from the aggregated features of each head. 
Right: the attention mechanism a(Wh_i, Wh_j), activated by LeakyReLU. Figure 6 : T5 is a transformer pre-trained on a large corpus. We fine-tune it for the abstractive summarization task.", "cite_spans": [], "ref_spans": [ { "start": 33, "end": 41, "text": "Figure 5", "ref_id": null }, { "start": 304, "end": 312, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Extractive summarization based on GNN:", "sec_num": null }, { "text": "In the training period, we select the sentences from the RP that are most similar to the community summary as the summary sentences. The selected sentences are labeled as 1, while the remaining sentences are labeled as 0. Then the model is trained as a binary classifier. Finally, we greedily select the highest-scoring sentences from the sentence set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extractive summarization based on GNN:", "sec_num": null }, { "text": "For the LongSumm shared task, we use three methods based on the aforementioned summarization methods for Task2 of CL-SciSumm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LongSumm", "sec_num": "3.2" }, { "text": "Although we have divided the dataset into section-wise samples and obtained more than 30000 section-summary pairs, this is still not sufficient to train an abstractive model from scratch. Therefore, we use the pre-training and fine-tuning method to deal with this problem. As shown in Figure 6 , T5 (Raffel et al., 2019b) is a transformer-like pre-trained model that performs well when transferred to a summarization task. It treats every NLP task as a text-to-text task and does both unsupervised pre-training and supervised multi-task pre-training on a large corpus.", "cite_spans": [ { "start": 292, "end": 313, "text": "(Raffel et al., 2019b", "ref_id": "BIBREF23" } ], "ref_spans": [ { "start": 278, "end": 286, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "T5 Fine-tuning", "sec_num": "3.2.1" }, { "text": "This method is based on DPPs sampling and is similar to the method used in Task2 of the CL-SciSumm shared task. We utilize two models to construct the matrix L, namely the Quality-Diversity (QD) model and the Sent2Vec (SV) model. Then DPPs sampling can automatically select candidate sentences with high quality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "DPPs Sampling", "sec_num": "3.2.2" }, { "text": "This method contains two parts: an RNN model and a GCN/GAT model. When processing the original text data, we use a GRU to compress the sequences. A similarity graph is constructed for each sentence group as described in 3.1.3 and, together with the sentence representations as node features, is fed into the GCN or GAT. Then the model is trained with the supervised binary classification objective described above, and the highest-scored sentences are selected according to the predicted scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GRU-GCN/GAT", "sec_num": "3.2.3" }, { "text": "In our previous work (Li et al., 2019), we extracted many kinds of features through various methods. In terms of semantic information, the features are the word vector, word-cos, and the Lin and Jcn similarities from WordNet. Some traditional features such as IDF and Jaccard similarity are also used, considering that as the number of topics in the LDA model increases, the topic vector gradually becomes sparse. 
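For concreteness, a simplified sketch of the Jaccard-style similarities referred to here, including a topic-level variant in the spirit of the LDA-Jaccard feature discussed next (the exact feature definitions in the system may differ):

```python
# Hypothetical sketch of Jaccard-style similarity features (illustrative only).
def jaccard(tokens_a, tokens_b):
    """Token-level Jaccard similarity between two sentences."""
    a, b = set(tokens_a), set(tokens_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def topic_jaccard(theta_a, theta_b, top_k=5):
    """Jaccard over each sentence's top-k LDA topic ids; comparing topic sets
    stays informative even when the high-dimensional topic vectors are sparse."""
    top_a = set(sorted(range(len(theta_a)), key=lambda i: -theta_a[i])[:top_k])
    top_b = set(sorted(range(len(theta_b)), key=lambda i: -theta_b[i])[:top_k])
    return len(top_a & top_b) / len(top_a | top_b) if top_a | top_b else 0.0
```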
This time, we abandon LDA and LDA-cos features and introduce the LDA-Jaccard similarity, which can improve the discrimination performance of LDA when the topic vector is sparse and focus on the similarity in the same topic. Based on the original fusion method, there are four new fusion methods by increasing the training set and adjusting the hyper-parameters, which are, Voting-1.2, Voting-2.1, Jaccard-Focused, and Jaccard-Cascade.", "cite_spans": [ { "start": 21, "end": 38, "text": "(Li et al., 2019)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Task1A", "sec_num": "4.1.1" }, { "text": "LDA model topic size is set to 600, and the pretraining word vector size is set to 300. In the case of high-dimensional LDA, although the word distribution in the topic becomes very sparse, the performance has been improved. parameter settings of the four multi-feature fusion methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task1A", "sec_num": "4.1.1" }, { "text": "As shown in Table 1 , the performance of Jaccard-Focused is the best among the four methods. At the same time, there is a big gap between the precision and the recall rate. It is because we manually specify that top-N sentences are answers, so the program finds more sentences in general, so the recall rate is higher than the precision rate.", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 19, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Task1A", "sec_num": "4.1.1" }, { "text": "For XGBoost (POS-XGB), we set the learning rate to 0.3, max depth to 1; for Adaboost (POS-ADB), we use the decision tree as the weak learner with max depth 2, learning rate 0.3; for LR (POS-LR), we set the learning rate to 0.3. We also implement a voting method (POS-Vote) based on these base classifiers. As for FastText (CON-CT-FastText and CON-RT-FastText) applied on content features, the CT and RT length are 40 and 50 respectively. The size of word embedding, hidden layer and output layer are 128, 256 and 2 respectively. We use Adam as the optimizer with learning rate 0.0001, and train for 50 iterations. Finally, we combine the classifiers on position features and content features via a voting method (CON-POS-Vote). Both the vote methods mentioned above obey the majority rule.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task1B", "sec_num": "4.1.2" }, { "text": "Since Task1B is a multi-label classification task and the training set is severely imbalanced, as shown in Table 2 , we randomly sample an equal number of negative samples for each discourse facet, then train five independent classifiers, respectively. When predicting the test set, we select at most top 2 facets with the highest probability. Table 3 shows the results of Task1B. We find that CON-POS-Vote has the best Precision, while POS-XGB performs best on Recall and F1 Score. The performance of FastText based on content features is better than most of machine learning methods based on position features. 
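These results come from independently trained binary classifiers; as a concrete illustration of the balanced one-vs-rest setup and top-2 facet prediction described above, here is a minimal hypothetical sketch (scikit-learn assumed; X and the label sets are placeholders):

```python
# Hypothetical sketch: one binary classifier per facet with balanced negative
# sampling, then top-2 facet prediction. X and facet_labels are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_per_facet(X, facet_labels, facets, seed=0):
    """facet_labels[i] is the set of facets annotated for training sample i."""
    rng = np.random.default_rng(seed)
    models = {}
    for facet in facets:
        pos = [i for i, fs in enumerate(facet_labels) if facet in fs]
        neg = [i for i, fs in enumerate(facet_labels) if facet not in fs]
        neg = list(rng.choice(neg, size=min(len(pos), len(neg)), replace=False))
        idx = pos + neg
        y = np.array([1] * len(pos) + [0] * len(neg))
        models[facet] = LogisticRegression(max_iter=1000).fit(X[idx], y)
    return models

def predict_top2(models, x):
    """Return the two facets with the highest predicted probability."""
    scores = {f: m.predict_proba(x.reshape(1, -1))[0, 1] for f, m in models.items()}
    return sorted(scores, key=scores.get, reverse=True)[:2]
```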
Moreover, CTs contain more facet-indicating information than RTs.", "cite_spans": [], "ref_spans": [ { "start": 107, "end": 114, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 344, "end": 351, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Task1B", "sec_num": "4.1.2" }, { "text": "In DPPs sampling, Sentence Length (SL), Sentence Position (SP), and Sentence Coverage (SC) are selected as features to calculate the quality of sentences, and the summary compression ratio is set to 20%. For the GCN method, we pick the top 50k words sorted by the frequency from the vocabulary of the original text. We select a sentence subset with the largest ROUGE score as the target for extractive summarization. Based on the greedy algorithm, the sentence with the largest ROUGE score is taken out one by one as a positive sample and added to the extractive summary set until the set cannot increase the score. After cleaning the RP, we rank the sentences by the output score, and then the summaries are generated. Table 5 shows the result on the test set.", "cite_spans": [], "ref_spans": [ { "start": 720, "end": 727, "text": "Table 5", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Task2", "sec_num": "4.1.3" }, { "text": "From Table 5 we can see that GCN-based methods perform better than DPPs on the various metrics computed against the three different gold summaries. This indicates that an end-to-end supervised learning method can extract better features than hand-designed ones, even though the supervision signal is constructed indirectly (we build the extractive training data from human-written summaries). Although DPPs performs well in improving the diversity of summaries, its ability to evaluate sentence quality comes from handcrafted features, which generalize worse.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task2", "sec_num": "4.1.3" }, { "text": "The training data set is composed of abstractive parts and extractive parts. The abstractive summarization data are from published papers and blogs which contain around 700 articles with an average of 31.7 sentences per summary and an average of 21.6 words per sentence. The extractive data are from Lev et al. (2019) which have 1705 paper-summary pairs. For each paper, it provides a summary with 30 sentences and 990 words on average. The LongSumm shared task is characterized by long input and output with a high compression ratio. So we choose a mix-and-divide method to deal with it:", "cite_spans": [ { "start": 300, "end": 317, "text": "Lev et al. (2019)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Data preprocessing", "sec_num": "4.2.1" }, { "text": "1. To make full use of all data samples, we mix abstractive and extractive data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data preprocessing", "sec_num": "4.2.1" }, { "text": "2. Transform the full paper level summarization into short document summarization by dividing all article-summary pairs into section-summary pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data preprocessing", "sec_num": "4.2.1" }, { "text": "3. Relabel all samples for abstractive models and extractive models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data preprocessing", "sec_num": "4.2.1" }, { "text": "The first step is easy to understand. 
The second step is achieved as follows: with PDF parser, we can identify sections in the paper; the highest Jaccard similarity among all pairs between sections sentences and summary sentences is used as section-sentence Jaccard similarity; each summary sentence is allocated to the section which has the highest section-sentence Jaccard similarity with it. Other co-occurrence based metrics like ROUGE (Gidiotis and Tsoumakas, 2020) or BLEU can also be applied but we choose jaccard because of its simplicity(these metrics usually lead to the same allocation). We get 30230 section-summary pairs in total. At last, we build two datasets with different types:", "cite_spans": [ { "start": 440, "end": 470, "text": "(Gidiotis and Tsoumakas, 2020)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Data preprocessing", "sec_num": "4.2.1" }, { "text": "1. For extractive models, sentences in a section that have the highest Jaccard similarity with summary sentences are labeled to be extracted.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data preprocessing", "sec_num": "4.2.1" }, { "text": "2. For abstractive models, there is no need to process abstractive samples. Extractive samples are processed according to Zhang et al. (2019) . For the long section, we use textrank to extract some sentences as a summary and exclude these sentences from the section. This preprocessing trick can prevent the abstractive model from learning to copy input. For a short section, we do not exclude summary sentences from the section. We divide the dataset into train/dev/test for comparing different models in this report. ROUGE evaluation is given on the divided test set and we use all 30230 samples for training when inferring on the blind test set.", "cite_spans": [ { "start": 122, "end": 141, "text": "Zhang et al. (2019)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Data preprocessing", "sec_num": "4.2.1" }, { "text": "The result of the LongSumm shared task is illustrated in Table 6 . For model T5, we use the small version which has about 60 million parameters. All input sections are truncated to a maximum of 1024 words. The model is fine-tuned for 5 epochs on the sectionwise dataset with a learning rate of 1e-4. The batch size is 32 and we use gradient accumulation to achieve it on a single GPU. Then, we attempt different ways to process the original data, expecting to find the proper input for the model.", "cite_spans": [], "ref_spans": [ { "start": 57, "end": 64, "text": "Table 6", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Result", "sec_num": "4.2.2" }, { "text": "data is transferred to abstractive data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Construct summary, as mentioned above, all", "sec_num": "1." }, { "text": "2. Original summary, as the name suggests, original data are used as input. Because many sections do not have corresponding summaries, there are fewer samples can be utilized, but some corresponding summaries are relatively longer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Construct summary, as mentioned above, all", "sec_num": "1." }, { "text": "3. Original+Construct summary, this method merges the original section and the construct section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Construct summary, as mentioned above, all", "sec_num": "1." 
}, { "text": "In order to generate the summary as long as possible within the limitation of summary length, we design two plans to process the generated summaries. Plan A simply merges the first sentence of summaries that are generated from different sections. Plan B extracts at most three sentences from each summary, for those with fewer words, we can use all sentences. Also, the merged summary is truncated to 600 words if the word count exceeds the limit. As for DPPs, because the LongSumm task focuses on a long summary, we change the document compression ratio to control the summary length, we set the ratio to 20% and 30%. For the QD method, we select Sentence Length (SL), Sentence Position (SP), and Sentence Coverage (SC) as features and merge them, which can calculate sentence quality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Construct summary, as mentioned above, all", "sec_num": "1." }, { "text": "As for GRU-GCN/GAT, we divide each paper into sections, since sections are the natural division of paper, and match each section to its gold summaries. For every section, its relation graph is constructed and system summaries are extracted by sentence scores. After we get the section summaries, paper summaries are concatenated by ranking sentences from sections. In our work, GAT has more parameters, thus are more difficult to converge, and the advantage to learn graph structure is weakened since section graphs are rather small, which explains why attention mechanism does not do better than GCN in some way.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Construct summary, as mentioned above, all", "sec_num": "1." }, { "text": "The results on the test set show that extractive summarization model using the GCN method performs the best on long summary task and the performance of T5 and DPPs is slightly worse than GCN. Generally speaking, the ROUGE value of abstractive summaries is lower than that of extractive summaries. But as an abstractive summarization model, T5 can compress more semantic information to generate the summary closer to an artificial summary. As for DPPs, as an unsupervised model, it uses hand-constructed features to rank sentences. The sentence quality obtained by this is not accurate. GNN uses RNN to model sentences, and considers sentence diversity in the learning process of neural network. So the ability to measure sentence quality is weaker than GNN. However, DPPs is able to work well under the situation where the training data is lacked.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Construct summary, as mentioned above, all", "sec_num": "1." }, { "text": "In the CL-SciSumm shared task, Jaccard-Focused performs better than other methods in Task1A. In future work, we will try to use the knowledge graph and GNN for better expression of semantic and structure information. In Task1B, POS-XGB performs the best, which shows that the position fea-tures contributes more than the content features. In the future, more information can be extracted and fused to obtain richer features, or combined with some hand-craft rules to assist the classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "5" }, { "text": "In Task 2, GCN shows great potential to perform the summarization task. We expect the neural network language models to make contributions to obtain more meaningful semantic representation for sentences against statistical features. 
In the LongSumm shared task, model T5 and extractive summarization model based on GCN perform well on the official data set, and DPPs still has great potential, we expect to provide more features or modify the sampling processes so as to improve the performance of our models. What's more, in this paper we mainly focus on how to extract/generate section-wise summaries with high quality and diversity, but how to pick and combine these summaries is also an interesting work to be done.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "5" } ], "back_matter": [ { "text": "V1.2 V2.1 JF JC w p w p w p w p Idf similarity 1 12 0.5 5 0.6 16 0.5 16 Idf context similarity 0.8 3 0.5 15 0.4 10 Jaccard similarity 1 5 0.5 6 JS 7 Jaccard context similarity 0.5 8 0.7 16 0.6 16 Word vector 1 8 0.5 7 0.5 26 word-cos 1 10 0.7 7 0.5 26 0.5 10 LDA-Jaccard 1 12 0.4 7 lin 0.5 5 jcn 0.6 11 ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Overview and insights from scientific document summarization shared tasks 2020: CL-SciSumm, LaySumm and LongSumm", "authors": [ { "first": "M", "middle": [ "K" ], "last": "Chandrasekaran", "suffix": "" }, { "first": "G", "middle": [], "last": "Feigenblat", "suffix": "" }, { "first": "Hovy", "middle": [ "E" ], "last": "", "suffix": "" }, { "first": "A", "middle": [], "last": "Ravichander", "suffix": "" }, { "first": "M", "middle": [], "last": "Shmueli-Scheuer", "suffix": "" }, { "first": "A", "middle": [], "last": "De Waard", "suffix": "" }, { "first": "", "middle": [], "last": "Forthcoming", "suffix": "" } ], "year": null, "venue": "Proceedings of the First Workshop on Scholarly Document Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. K. Chandrasekaran, G. Feigenblat, Hovy. E., A. Ravichander, M. Shmueli-Scheuer, and A De Waard. forthcoming. Overview and in- sights from scientific document summarization shared tasks 2020: CL-SciSumm, LaySumm and LongSumm. In Proceedings of the First Workshop on Scholarly Document Processing (SDP 2020).", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Xgboost: A scalable tree boosting system", "authors": [ { "first": "Tianqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Carlos", "middle": [], "last": "Guestrin", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining", "volume": "", "issue": "", "pages": "785--794", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tianqi Chen and Carlos Guestrin. 2016. Xgboost: A scalable tree boosting system. In Proceedings of the 22nd acm sigkdd international conference on knowl- edge discovery and data mining, pages 785-794.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Convolutional neural networks on graphs with fast localized spectral filtering", "authors": [ { "first": "Micha\u00ebl", "middle": [], "last": "Defferrard", "suffix": "" }, { "first": "Xavier", "middle": [], "last": "Bresson", "suffix": "" }, { "first": "Pierre", "middle": [], "last": "Vandergheynst", "suffix": "" } ], "year": 2016, "venue": "Advances in Neural Information Processing Systems", "volume": "29", "issue": "", "pages": "3844--3852", "other_ids": {}, "num": null, "urls": [], "raw_text": "Micha\u00ebl Defferrard, Xavier Bresson, and Pierre Van- dergheynst. 2016. 
Convolutional neural networks on graphs with fast localized spectral filtering. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Informa- tion Processing Systems 29, pages 3844-3852. Cur- ran Associates, Inc.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Deep convolutional neural networks for sentiment analysis of short texts", "authors": [ { "first": "Santos", "middle": [], "last": "Cicero Dos", "suffix": "" }, { "first": "Maira", "middle": [], "last": "Gatti", "suffix": "" } ], "year": 2014, "venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers", "volume": "", "issue": "", "pages": "69--78", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cicero Dos Santos and Maira Gatti. 2014. Deep con- volutional neural networks for sentiment analysis of short texts. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 69-78.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A decisiontheoretic generalization of on-line learning and an application to boosting", "authors": [ { "first": "Yoav", "middle": [], "last": "Freund", "suffix": "" }, { "first": "Robert", "middle": [ "E" ], "last": "Schapire", "suffix": "" } ], "year": 1997, "venue": "Journal of computer and system sciences", "volume": "55", "issue": "1", "pages": "119--139", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoav Freund and Robert E Schapire. 1997. A decision- theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences, 55(1):119-139.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A divide-and-conquer approach to the summarization of academic articles", "authors": [ { "first": "Alexios", "middle": [], "last": "Gidiotis", "suffix": "" }, { "first": "Grigorios", "middle": [], "last": "Tsoumakas", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexios Gidiotis and Grigorios Tsoumakas. 2020. A divide-and-conquer approach to the summarization of academic articles.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "word2vec explained: deriving mikolov et al.'s negativesampling word-embedding method", "authors": [ { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1402.3722" ] }, "num": null, "urls": [], "raw_text": "Yoav Goldberg and Omer Levy. 2014. 
word2vec explained: deriving mikolov et al.'s negative- sampling word-embedding method. arXiv preprint arXiv:1402.3722.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Bag of tricks for efficient text classification", "authors": [ { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1607.01759" ] }, "num": null, "urls": [], "raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016. Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Convolutional neural networks for sentence classification", "authors": [ { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1408.5882" ] }, "num": null, "urls": [], "raw_text": "Yoon Kim. 2014. Convolutional neural net- works for sentence classification. arXiv preprint arXiv:1408.5882.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Semisupervised classification with graph convolutional networks", "authors": [ { "first": "N", "middle": [], "last": "Thomas", "suffix": "" }, { "first": "Max", "middle": [], "last": "Kipf", "suffix": "" }, { "first": "", "middle": [], "last": "Welling", "suffix": "" } ], "year": 2017, "venue": "International Conference on Learning Representations (ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas N. Kipf and Max Welling. 2017. Semi- supervised classification with graph convolutional networks. In International Conference on Learning Representations (ICLR).", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Determinantal point processes for machine learning", "authors": [ { "first": "Alex", "middle": [], "last": "Kulesza", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Taskar", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1207.6083" ] }, "num": null, "urls": [], "raw_text": "Alex Kulesza and Ben Taskar. 2012. Determinantal point processes for machine learning. arXiv preprint arXiv:1207.6083.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "From word embeddings to document distances", "authors": [ { "first": "Matt", "middle": [], "last": "Kusner", "suffix": "" }, { "first": "Y", "middle": [], "last": "Sun", "suffix": "" }, { "first": "N", "middle": [ "I" ], "last": "Kolkin", "suffix": "" }, { "first": "Kilian", "middle": [], "last": "Weinberger", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 32nd International Conference on Machine Learning (ICML 2015)", "volume": "", "issue": "", "pages": "957--966", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matt Kusner, Y. Sun, N.I. Kolkin, and Kilian Wein- berger. 2015. From word embeddings to docu- ment distances. 
Proceedings of the 32nd Inter- national Conference on Machine Learning (ICML 2015), pages 957-966.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Recurrent convolutional neural networks for text classification", "authors": [ { "first": "Siwei", "middle": [], "last": "Lai", "suffix": "" }, { "first": "Liheng", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Kang", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2015, "venue": "Twenty-ninth AAAI conference on artificial intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Siwei Lai, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Recurrent convolutional neural networks for text classification. In Twenty-ninth AAAI conference on artificial intelligence.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Talk-Summ: A dataset and scalable annotation method for scientific paper summarization based on conference talks", "authors": [ { "first": "Guy", "middle": [], "last": "Lev", "suffix": "" }, { "first": "Michal", "middle": [], "last": "Shmueli-Scheuer", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Herzig", "suffix": "" }, { "first": "Achiya", "middle": [], "last": "Jerbi", "suffix": "" }, { "first": "David", "middle": [], "last": "Konopnicki", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2125--2131", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guy Lev, Michal Shmueli-Scheuer, Jonathan Herzig, Achiya Jerbi, and David Konopnicki. 2019. Talk- Summ: A dataset and scalable annotation method for scientific paper summarization based on confer- ence talks. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 2125-2131, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Bart: Denoising sequence-to-sequence pre-training for natural lan-guage generation, translation, and comprehension", "authors": [ { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.13461" ] }, "num": null, "urls": [], "raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Mar- jan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zet-tlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural lan-guage generation, translation, and comprehension. 
arXiv preprint arXiv:1910.13461.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Cist@ clscisumm-18: Methods for computational linguistics scientific citation linkage, facet classification and summarization", "authors": [ { "first": "Lei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Junqi", "middle": [], "last": "Chi", "suffix": "" }, { "first": "Moye", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Zuying", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Yingqi", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Xiangling", "middle": [], "last": "Fu", "suffix": "" } ], "year": 2018, "venue": "BIRNDL@ SIGIR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lei Li, Junqi Chi, Moye Chen, Zuying Huang, Yingqi Zhu, and Xiangling Fu. 2018. Cist@ clscisumm-18: Methods for computational linguistics scientific cita- tion linkage, facet classification and summarization. In BIRNDL@ SIGIR.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Cist@ clscisumm-19: Automatic scientific paper summarization with citances and facets", "authors": [ { "first": "Lei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yingqi", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Zuying", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Xingyuan", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yinan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "BIRNDL@ SI-GIR", "volume": "", "issue": "", "pages": "196--207", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lei Li, Yingqi Zhu, Yang Xie, Zuying Huang, Wei Liu, Xingyuan Li, and Yinan Liu. 2019. Cist@ clscisumm-19: Automatic scientific paper summa- rization with citances and facets. In BIRNDL@ SI- GIR, pages 196-207.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Recurrent neural network for text classification with multi-task learning", "authors": [ { "first": "Pengfei", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Xipeng", "middle": [], "last": "Qiu", "suffix": "" }, { "first": "Xuanjing", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1605.05101" ] }, "num": null, "urls": [], "raw_text": "Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2016. Recurrent neural network for text classi- fication with multi-task learning. arXiv preprint arXiv:1605.05101.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Text summarization with pretrained encoders", "authors": [ { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1908.08345" ] }, "num": null, "urls": [], "raw_text": "Yang Liu and Mirella Lapata. 2019. Text summa- rization with pretrained encoders. 
arXiv preprint arXiv:1908.08345.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Text matching as image recognition", "authors": [ { "first": "Liang", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Yanyan", "middle": [], "last": "Lan", "suffix": "" }, { "first": "Jiafeng", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Shengxian", "middle": [], "last": "Wan", "suffix": "" }, { "first": "Xueqi", "middle": [], "last": "Cheng", "suffix": "" } ], "year": 2016, "venue": "AAAI", "volume": "16", "issue": "", "pages": "2793--2799", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Shengx- ian Wan, and Xueqi Cheng. 2016. Text matching as image recognition. In AAAI, volume 16, pages 2793-2799.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "An introduction to logistic regression: From basic concepts to interpretation with particular attention to nursing domain", "authors": [ { "first": "Hyeoun-Ae", "middle": [], "last": "Park", "suffix": "" } ], "year": 2013, "venue": "Journal of Korean Academy of Nursing", "volume": "43", "issue": "", "pages": "154--164", "other_ids": { "DOI": [ "10.4040/jkan.2013.43.2.154" ] }, "num": null, "urls": [], "raw_text": "Hyeoun-Ae Park. 2013. An introduction to logistic re- gression: From basic concepts to interpretation with particular attention to nursing domain. Journal of Korean Academy of Nursing, 43:154-164.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "authors": [ { "first": "Colin", "middle": [], "last": "Raffel", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Roberts", "suffix": "" }, { "first": "Katherine", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Sharan", "middle": [], "last": "Narang", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Matena", "suffix": "" }, { "first": "Yanqi", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Peter J", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.10683" ] }, "num": null, "urls": [], "raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Kathe- rine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019a. Exploring the limits of transfer learning with a unified text-to-text transformer. 
arXiv preprint arXiv:1910.10683.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Exploring the limits of transfer learning with a unified text-to", "authors": [ { "first": "Colin", "middle": [], "last": "Raffel", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Roberts", "suffix": "" }, { "first": "Katherine", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Sharan", "middle": [], "last": "Narang", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Matena", "suffix": "" }, { "first": "Yanqi", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Peter", "middle": [ "J" ], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019b. Exploring the limits of transfer learning with a unified text-to-text trans- former. arXiv e-prints.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "A neural attention model for abstractive sentence summarization", "authors": [ { "first": "Alexander", "middle": [ "M" ], "last": "Rush", "suffix": "" }, { "first": "Sumit", "middle": [], "last": "Chopra", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "379--389", "other_ids": { "DOI": [ "10.18653/v1/D15-1044" ] }, "num": null, "urls": [], "raw_text": "Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sen- tence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Lan- guage Processing, pages 379-389, Lisbon, Portugal. Association for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Mass: Masked sequence to sequence pre-training for language gener-ation", "authors": [ { "first": "Kaitao", "middle": [], "last": "Song", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Tie-Yan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "International Conference on Machine Learning", "volume": "", "issue": "", "pages": "5926--5936", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie- Yan Liu. 2019. Mass: Masked sequence to se- quence pre-training for language gener-ation. 
In International Conference on Machine Learning, pages 5926-5936.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Graph attention networks", "authors": [ { "first": "Petar", "middle": [], "last": "Veli\u010dkovi\u0107", "suffix": "" }, { "first": "Guillem", "middle": [], "last": "Cucurull", "suffix": "" }, { "first": "Arantxa", "middle": [], "last": "Casanova", "suffix": "" }, { "first": "Adriana", "middle": [], "last": "Romero", "suffix": "" }, { "first": "Pietro", "middle": [], "last": "Lio", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1710.10903" ] }, "num": null, "urls": [], "raw_text": "Petar Veli\u010dkovi\u0107, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2017. Graph attention networks. arXiv preprint arXiv:1710.10903.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Graph-based neural multi-document summarization", "authors": [ { "first": "Michihiro", "middle": [], "last": "Yasunaga", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Kshitijh", "middle": [], "last": "Meelu", "suffix": "" }, { "first": "Ayush", "middle": [], "last": "Pareek", "suffix": "" }, { "first": "Krishnan", "middle": [], "last": "Srinivasan", "suffix": "" }, { "first": "Dragomir", "middle": [], "last": "Radev", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 21st Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "452--462", "other_ids": { "DOI": [ "10.18653/v1/K17-1045" ] }, "num": null, "urls": [], "raw_text": "Michihiro Yasunaga, Rui Zhang, Kshitijh Meelu, Ayush Pareek, Krishnan Srinivasan, and Dragomir Radev. 2017. Graph-based neural multi-document summarization. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 452-462, Vancouver, Canada. Association for Computational Linguistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "A normalized Levenshtein distance metric", "authors": [ { "first": "Li", "middle": [], "last": "Yujian", "suffix": "" }, { "first": "Liu", "middle": [], "last": "Bo", "suffix": "" } ], "year": 2007, "venue": "IEEE transactions on pattern analysis and machine intelligence", "volume": "29", "issue": "", "pages": "1091--1095", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li Yujian and Liu Bo. 2007. A normalized Levenshtein distance metric. IEEE transactions on pattern analysis and machine intelligence, 29(6):1091-1095.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Pegasus: Pre-training with extracted gap-sentences for abstractive summarization", "authors": [ { "first": "Jingqing", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yao", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Saleh", "suffix": "" }, { "first": "Peter", "middle": [ "J" ], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter J. Liu. 2019.
Pegasus: Pre-training with extracted gap-sentences for abstractive summarization.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Character-level convolutional networks for text classification", "authors": [ { "first": "Xiang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Junbo", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Yann", "middle": [], "last": "LeCun", "suffix": "" } ], "year": 2015, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "649--657", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in neural information processing systems, pages 649-657.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Searching for effective neural extractive summarization: What works and what's next", "authors": [ { "first": "Ming", "middle": [], "last": "Zhong", "suffix": "" }, { "first": "Pengfei", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Danqing", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xipeng", "middle": [], "last": "Qiu", "suffix": "" }, { "first": "Xuanjing", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.03491" ] }, "num": null, "urls": [], "raw_text": "Ming Zhong, Pengfei Liu, Danqing Wang, Xipeng Qiu, and Xuanjing Huang. 2019. Searching for effective neural extractive summarization: What works and what's next. arXiv preprint arXiv:1907.03491.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "The complete process of Task1A.", "uris": null, "type_str": "figure" }, "FIGREF1": { "num": null, "text": "The position feature vector in", "uris": null, "type_str": "figure" }, "FIGREF2": { "num": null, "text": "The architecture of FastText in Task1B.", "uris": null, "type_str": "figure" }, "FIGREF3": { "num": null, "text": "Extractive summarization based on GCN in Task2.", "uris": null, "type_str": "figure" }, "TABREF1": { "text": "shows the", "content": "
Method | Precision | Recall | F1 Score
V1.2 | 0.0693 | 0.2658 | 0.1100
V2.1 | 0.0604 | 0.2308 | 0.0958
JF | 0.0698 | 0.2650 | 0.1105
JC | 0.0605 | 0.2331 | 0.0960
", "num": null, "html": null, "type_str": "table" }, "TABREF2": { "text": "Task1A experiment results. V1.2, V2.1, JF, JC are Voting-1.2, Voting-2.1, Jaccard-Focused, Jaccard-Cascade respectively.", "content": "
Facet | Proportion
Aim Citation | 0.082
Method Citation | 0.718
Hypothesis Citation | 0.024
Result Citation | 0.138
Implication Citation | 0.080
Multi-facet | 0.074
", "num": null, "html": null, "type_str": "table" }, "TABREF3": { "text": "Facet distribution of the training set in Task1B.", "content": "", "num": null, "html": null, "type_str": "table" }, "TABREF5": { "text": "Task1B experiment results.", "content": "
", "num": null, "html": null, "type_str": "table" }, "TABREF6": { "text": "Task2 experiment results. JC means Jaccard-Cascade. JF stands for Jaccard-Focused. V1.2 and V2.1 are Voting-1.2 and Voting-2.1 respectively.", "content": "
", "num": null, "html": null, "type_str": "table" }, "TABREF8": { "text": "LongSumm test set results. ROUGE f and ROUGE r are f1 value and recall of ROUGE results.", "content": "
", "num": null, "html": null, "type_str": "table" } } } }