{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:36:12.117450Z" }, "title": "IIITBH-IITP@CL-SciSumm20, CL-LaySumm20, LongSumm20", "authors": [ { "first": "Saichethan", "middle": [ "Miriyala" ], "last": "Reddy", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Naveen", "middle": [], "last": "Saini", "suffix": "", "affiliation": { "laboratory": "", "institution": "Indian Institute of Technology", "location": { "settlement": "Patna" } }, "email": "naveen.pcs16@iitp.ac.in" }, { "first": "Sriparna", "middle": [], "last": "Saha", "suffix": "", "affiliation": { "laboratory": "", "institution": "Indian Institute of Technology", "location": { "settlement": "Patna" } }, "email": "sriparna@iitp.ac.in" }, { "first": "Pushpak", "middle": [], "last": "Bhattacharyya", "suffix": "", "affiliation": { "laboratory": "", "institution": "Indian Institute of Technology", "location": { "settlement": "Patna" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper, we present the IIIT Bhagalpur and IIT Patna team's effort to solve the three shared tasks namely, CL-SciSumm 2020, CL-LaySumm 2020, LongSumm 2020 at SDP 2020. The themes of these tasks are to generate medium-scale, lay and long summaries, respectively, for scientific articles. For the first two tasks, unsupervised systems are developed, while for the third one, we have developed a supervised system. The performances of all the systems are evaluated on the associated datasets with the shared tasks in term of well-known ROUGE metric.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "In this paper, we present the IIIT Bhagalpur and IIT Patna team's effort to solve the three shared tasks namely, CL-SciSumm 2020, CL-LaySumm 2020, LongSumm 2020 at SDP 2020. The themes of these tasks are to generate medium-scale, lay and long summaries, respectively, for scientific articles. 
For the first two tasks, unsupervised systems are developed, while for the third one, we have developed a supervised system. The performances of all the systems are evaluated on the datasets associated with the shared tasks in terms of the well-known ROUGE metric.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "With the large amount of research being carried out in the computational linguistics (CL) domain as well as in other domains, the rate of publication of scientific articles has increased and will continue to grow (Nallapati et al., 2017, 2016; Jaidka et al., 2019) . This makes it challenging for researchers to keep up with the latest advancements. A survey (review) article may help a researcher get the gist of recent advancements. But writing a survey paper is a very laborious and time-consuming task. This challenge demands the summarization of scientific articles (Cohan and Goharian, 2018; Conroy and Davis, 2018) , i.e., providing a summary of each article in a few words and then preparing the survey article.", "cite_spans": [ { "start": 196, "end": 219, "text": "(Nallapati et al., 2017", "ref_id": "BIBREF17" }, { "start": 220, "end": 245, "text": "(Nallapati et al., , 2016", "ref_id": "BIBREF18" }, { "start": 246, "end": 266, "text": "Jaidka et al., 2019)", "ref_id": "BIBREF11" }, { "start": 583, "end": 608, "text": "(Cohan and Goharian, 2018", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "But sometimes, the published and survey articles may be difficult for non-practitioners to understand. 
To make them accessible to non-practitioners and to benefit all researchers, there is a real need to outline the contributions of research articles in lay language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The current paper describes the participation of the IIIT Bhagalpur and IIT Patna team in three shared tasks, namely CL-SciSumm 2020, LongSumm 2020 and CL-LaySumm 2020, at the First Workshop on Scholarly Document Processing 1 , 2020 (Chandrasekaran et al., 2020) . The themes of these tasks are to generate medium-scale, long and lay summaries, respectively. Here, a lay summary means a textual summary intended for a non-technical audience. The scientific articles used for the first and third tasks are from the computational linguistics domain, while for the second task the articles cover distinct domains: archeology, epilepsy, and materials engineering. In the current paper, all these tasks are posed as extractive summarization (Saini et al., 2019) problems, where a subset of sentences is selected from the scientific articles based on their relevance. For CL-LaySumm and CL-SciSumm, we have developed systems based on maximal marginal relevance (MMR) (Carbinell and Goldstein, 2017) , which trades off the informativeness of a sentence against its novelty with respect to what is already included in the summary. For LongSumm, our system utilizes a neural network based approach. More details about these tasks, including the datasets and methodologies used, are provided in the subsequent sections. 
The performances of the systems are evaluated in terms of ROUGE (1-gram, 2-gram, and L) metrics on the provided datasets.", "cite_spans": [ { "start": 227, "end": 256, "text": "(Chandrasekaran et al., 2020)", "ref_id": "BIBREF5" }, { "start": 739, "end": 759, "text": "(Saini et al., 2019)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "CL-SciSumm 2020 is the sixth Computational Linguistics Scientific Document Summarization Shared Task, which aims to generate summaries of scientific articles not exceeding 250 words. The dataset for the task provides a Reference Paper (RP) (the paper to be summarized) and 10 or more Citing Papers (CPs) containing citations to the RP, which are used to summarize the RP. The task includes two sub-tasks: (a) Task 1(A): identifying the text-spans in the reference article that best reflect the citation contexts (i.e., citances that cite the RP) of the citing articles; (b) Task 1(B): categorizing the identified text-spans into a predefined set of facets. Generation of a structured summary for scientific document summarization using the identified text-spans is covered in Task 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CL-SciSumm 2020", "sec_num": "2" }, { "text": "The dataset associated with the CL-SciSumm 2020 shared task consists of 40 annotated scientific articles and their citations for training. In addition, a corpus of 1000 documents released as part of the ScisummNet dataset for scientific document summarization is readily available for training. 
For testing, a blind test set of 20 articles, used for the CL-SciSumm 2018 (Jaidka et al., 2019) and 2019 shared tasks, is reused for the current shared task.", "cite_spans": [ { "start": 368, "end": 389, "text": "(Jaidka et al., 2019)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset Description", "sec_num": "2.1" }, { "text": "In this section, we discuss the systems developed for Task 1 and Task 2. The corresponding flowchart is shown in Figure 1 .", "cite_spans": [], "ref_spans": [ { "start": 119, "end": 127, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Methodology", "sec_num": "2.2" }, { "text": "For a given reference paper (RP), in order to identify the reference text-spans using the citation context, we have used an unsupervised approach in which we extract the top 5 sentences by calculating the cosine similarity between each citance and the sentences of the RP. These 5 sentences are considered as the cited/reference text spans. Note that before calculating the similarity, we have converted the sentences from text space into a (numeric) vector space, for which we have utilized different types of sentence embeddings, namely Albert (Beltagy et al., 2019a) , ELMO (Peters et al., 2018) , fastText (Athiwaratkun et al., 2018) , SciBERT (Beltagy et al., 2019a) , Universal Sentence Encoder (Cer et al., 2018) , and XLNET (Yang et al., 2019) , all of which are capable of capturing the semantics of the sentences. 
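As an illustration, this top-5 extraction step can be sketched as follows; the bag-of-words embedding and helper names below are illustrative stand-ins, since the submitted systems use the neural sentence encoders listed above:

```python
# Sketch of Task 1(A): pick the top-5 reference-paper sentences most similar
# to a citance. A term-frequency vector stands in for the sentence embedding
# here, purely for illustration; the real systems use neural encoders.
from collections import Counter
import math

def embed(sentence):
    # stand-in embedding: bag-of-words term-frequency vector
    return Counter(sentence.lower().split())

def cosine(u, v):
    dot = sum(u[w] * v[w] for w in u)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def top_k_spans(citance, rp_sentences, k=5):
    # score every RP sentence against the citance, keep the k best
    q = embed(citance)
    scored = [(cosine(q, embed(s)), i, s) for i, s in enumerate(rp_sentences)]
    scored.sort(key=lambda t: (-t[0], t[1]))  # highest similarity first
    return [s for _, _, s in scored[:k]]
```

Swapping `embed` for a neural sentence encoder reproduces the six embedding-specific runs without changing the selection logic.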
Thus, in total, six systems are developed for Task 1(A).", "cite_spans": [ { "start": 518, "end": 541, "text": "(Beltagy et al., 2019a)", "ref_id": "BIBREF1" }, { "start": 544, "end": 570, "text": "ELMO (Peters et al., 2018)", "ref_id": null }, { "start": 582, "end": 609, "text": "(Athiwaratkun et al., 2018)", "ref_id": "BIBREF0" }, { "start": 620, "end": 643, "text": "(Beltagy et al., 2019a)", "ref_id": "BIBREF1" }, { "start": 673, "end": 691, "text": "(Cer et al., 2018)", "ref_id": "BIBREF4" }, { "start": 700, "end": 719, "text": "(Yang et al., 2019)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Task 1(A)", "sec_num": "2.2.1" }, { "text": "For identifying the discourse facets (Hypothesis, Implication, Aim, Results and Method) of cited text spans, we have used a voting based method. A supervised multi-class classification model, based on Gradient Boosting (La Quatra et al., 2019; Li et al., 2008) , is trained to assign a facet to each cited text span. Training data statistics are described in Table 1 . In our approach, we have extracted the top 5 text spans for each citance in Task 1(A). We have used our trained model to identify a facet for each cited text span, and later used a voting method to finalize the facet for each citance. For generating a structured summary of 250 words, we have used the unique sentences extracted in Task 1(A) (i.e., the cited text spans) as the candidate set of sentences. This approach is known as citation-based summarization. For this purpose, a diversity-based unsupervised measure, namely maximal marginal relevance (MMR), inspired by (Carbinell and Goldstein, 2017) , is utilized; it is a linear combination of the informativeness of a sentence (with respect to the documents consisting of the chosen candidate sentences) and its novelty (with respect to the sentences already included in the summary). 
Mathematically, it is expressed as", "cite_spans": [ { "start": 219, "end": 243, "text": "(La Quatra et al., 2019;", "ref_id": "BIBREF13" }, { "start": 244, "end": 260, "text": "Li et al., 2008)", "ref_id": "BIBREF15" }, { "start": 941, "end": 972, "text": "(Carbinell and Goldstein, 2017)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 368, "end": 375, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Task 1(B)", "sec_num": "2.2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "MMR_1 = \u03bb_1 Sim_1(Q, D) \u2212 (1 \u2212 \u03bb_1) Sim_2(Q, d)", "eq_num": "(1)" } ], "section": "Section", "sec_num": null }, { "text": "where Q is the current sentence, D is the list of sentences extracted in Task 1(A), d is the summary generated up to that point, Sim_1 is the similarity of the sentence with respect to all other sentences in the document, and Sim_2 is the similarity of the current sentence with the sentences already included in the summary. Note that for representing sentences in vector form, we have used CountVectorizer 2 , which counts the term frequency of each term in the article.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Section", "sec_num": null }, { "text": "The authors of (Jaidka et al., 2017) , a paper on summarizing scientific articles, noted that system performance under the ROUGE measure does not always agree with sentence-overlap F1 scores. They demonstrated how the ROUGE score is biased to prefer shorter sentences over longer ones. 
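The greedy selection defined by Eq. (1) can be sketched as follows; the function and variable names are illustrative, and any sentence-similarity function (e.g., cosine over CountVectorizer vectors) can be plugged in for `sim`:

```python
# Sketch of greedy MMR (Eq. 1): repeatedly pick the candidate sentence that
# balances informativeness (Sim1, relevance to the whole candidate set)
# against redundancy (Sim2, overlap with the summary built so far).
def mmr_select(candidates, sim, lam=0.75, max_sentences=5):
    summary = []
    pool = list(candidates)
    while pool and len(summary) < max_sentences:
        def score(q):
            # Sim1: average similarity to the candidate document
            info = sum(sim(q, d) for d in candidates) / len(candidates)
            # Sim2: worst-case redundancy w.r.t. sentences already selected
            red = max((sim(q, s) for s in summary), default=0.0)
            return lam * info - (1 - lam) * red
        best = max(pool, key=score)
        summary.append(best)
        pool.remove(best)
    return summary
```

Once a near-duplicate of an already selected sentence appears, its redundancy term drags its score down, which is exactly the diversity effect the paper relies on.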
Motivated by this, we have proposed a variant of MMR that incorporates the length of the sentence, expressed as", "cite_spans": [ { "start": 25, "end": 46, "text": "(Jaidka et al., 2017)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Section", "sec_num": null }, { "text": "MMR_2 = MMR_1 \u2212 \u03bb_2 L (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Section", "sec_num": null }, { "text": "where L is the length of the current sentence. In total, twelve systems are developed using the citation-based approach in the 6 different semantic spaces (refer to Section 2.2.1), each utilizing either MMR_1 or MMR_2 for summary generation. To show the potential of citation-based summarization, we have also developed a full-text based summarization system, in which all sentences available in the scientific article are considered as the candidate set and MMR_2 is utilized for summary generation. Thus, in total, 13 systems are submitted to the CL-SciSumm 2020 shared task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Section", "sec_num": null }, { "text": "We have submitted a total of 13 system runs, out of which 6 runs are for both Task 1(A) and Task 1(B), utilizing the different semantic spaces. The remaining 7 runs are only for Task 2. Results obtained by our different runs for Task 1(A) and Task 1(B) are illustrated in Table 2 and Table 3 , respectively. For Task 2, we have generated a single summary for each reference paper using MMR_1 and MMR_2 with the different embeddings. Out of the 13 system runs, 12 are citation-based, and the remaining one is full-text based. The results obtained are illustrated in Table 4 . For Tasks 1(A) and 1(B), the best results are obtained using the Universal Sentence Encoder for sentence embedding. For Task 2, our enhanced diversity-based sentence selection approach, i.e., MMR_2 , has performed better than the existing maximal marginal relevance model (MMR_1 ). 
It is important to note that MMR_2 is tested with the different embedding spaces, but all give the same results. Therefore, in Table 4 , we have mentioned only MMR_2 as a representative of those runs. From Table 4 , we can also infer that citation-based summarization has better sentence overlaps compared to full-text based summarization (last row of Table 4 ).", "cite_spans": [], "ref_spans": [ { "start": 267, "end": 286, "text": "Table 2 and Table 3", "ref_id": "TABREF2" }, { "start": 547, "end": 554, "text": "Table 4", "ref_id": "TABREF4" }, { "start": 957, "end": 965, "text": "Table 4", "ref_id": "TABREF4" }, { "start": 1035, "end": 1042, "text": "Table 4", "ref_id": "TABREF4" }, { "start": 1181, "end": 1188, "text": "Table 4", "ref_id": "TABREF4" }, { "start": 1291, "end": 1298, "text": "Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Discussion of Results", "sec_num": "2.3" }, { "text": "The poor performance of our system on the abstract summaries can be explained by the fact that our approach focuses more on coverage and diversity. The abstract of a scientific article lies at the beginning, and since we do not consider sentence position in our proposed approach, we obtain fewer sentence overlaps.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion of Results", "sec_num": "2.3" }, { "text": "CL-LaySumm 2020, the first shared task 3 for lay summary generation, concerns the automatic generation of a lay summary of 70-100 words which is readable and easily understandable by the general public. 
In other words, given a full-text paper and its abstract, the task is to generate a lay summary of the specified length for that paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CL-LaySumm 2020", "sec_num": "3" }, { "text": "The dataset consists of 600 scientific articles, each with its abstract, full text, and corresponding lay summary (gold summary) of around 70-100 words. The test data consists of 37 articles (out of the 600 articles). Test data statistics in terms of the number of words are shown in Figure 2 .", "cite_spans": [], "ref_spans": [ { "start": 275, "end": 283, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Description of Dataset", "sec_num": "3.1" }, { "text": "In this section, we discuss the methodology used for lay summary generation. Similar to CL-SciSumm, we have treated this problem as a sentence selection problem, where relevant sentences are selected from the document to generate the summary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3.2" }, { "text": "Similar to CL-SciSumm, here also we have used both variants of maximal marginal relevance (MMR), given in Eq. 1 and Eq. 2, for generating the summary. As the abstract (let us call it ABS) conveys the outline of the paper, we have generated summaries with the different variants of MMR using the ABS. Other summaries are generated using (a) the full text of the article and (b) the abstract (ABS) plus conclusion (CON) of the paper, and all are compared against the original lay summary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3.2" }, { "text": "Note that the goal of generating a lay summary is to create a human-readable summary for a non-technical audience. 
To avoid scientific jargon in the generated summary, we have proposed a three-step process (let us call it CWR: Complex Word Removal): first, identify the complex words in a given sentence; second, generate similar words for each identified complex word; third, replace each complex word with the most suitable word from the generated list. In this paper, we have only identified complex words and removed them; pseudo-code for identifying complex words is given in Algorithm 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3.2" }, { "text": "Result: List of Complex Words
W = set of unique words from the generated summary;
KB = list of words in GloVe or WordNet;
len = len(W);
CWR = []; /* list of complex words */
for i \u2190 0 to len-1 do
  word = W[i];
  cleanWord = clean(word); /* remove unwanted symbols */
  lemWord = lemmatisation(cleanWord);
  if lemWord not in KB then
    CWR.append(word)
  end
end", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 1: CWR", "sec_num": null }, { "text": "Results obtained using MMR_1 and MMR_2 on ABS, FULL-TEXT and ABS+CON are reported in Table 5 . From this table, it can be observed that incorporating length into MMR_1 , i.e., Eq. (2), and generating the summary using the abstract of the article improves the performance of the system in comparison to MMR_1 . We have also illustrated how the ROUGE-1 F-score varies with \u03bb 1 and \u03bb 2 in Table 6 . Note that these parameters play important roles in generating an informative and novel lay summary and are part of Eqs. (1) and (2). Here, \u03bb 1 represents the diversity factor: as we increase \u03bb 1 , the diversity of the generated summary decreases. The reader may note that we are using a high value of \u03bb 1 , i.e., 0.75, and thus the summary may have less coverage. Since the average number of words in ABS is around 250 ( Figure 2 ) and our task is to generate a lay summary of around 100 words, we have used a higher \u03bb 1 . 
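Algorithm 1 can be written out in Python as below; the knowledge base and the toy suffix-stripping lemmatiser are illustrative stand-ins for the GloVe/WordNet vocabulary and a real morphological analyser:

```python
# Sketch of Algorithm 1 (CWR): flag words whose lemma is absent from a
# known-word list (a GloVe/WordNet vocabulary in the paper). The lemmatiser
# below is a toy suffix-stripper standing in for real WordNet lemmatisation.
import string

def clean(word):
    # remove unwanted symbols (leading/trailing punctuation)
    return word.strip(string.punctuation).lower()

def lemmatise(word):
    # toy stand-in for a morphological analyser
    for suffix in ('ing', 'es', 's'):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def complex_words(summary, kb):
    cwr = []  # list of complex (out-of-vocabulary) words
    for word in set(summary.split()):
        if lemmatise(clean(word)) not in kb:
            cwr.append(word)
    return sorted(cwr)
```

As the error analysis below notes, this kind of vocabulary lookup flags domain terms that are actually essential to the paper, which is the main failure mode of CWR.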
The parameter \u03bb 2 , on the other hand, tries to maximize the ROUGE score. As shown in Table 5 , the abstract works well for summary generation; therefore, we have executed Algorithm 1 on the same input (i.e., ABS). The results attained by CWR using the different variants of MMR are shown in Table 7 . After observing the results, it is clear that there is not much difference between the best results of Tables 5 and 7; in fact, the results of Table 7 are slightly lower than those reported in Table 5 . Error analysis of CWR: One of the common issues associated with identifying complex words is finding the lemma. Lemmatization usually refers to the use of a vocabulary and morphological analysis of words, normally aiming to remove inflectional endings only and to return the base or dictionary form of a word, which is known as the lemma. A few scientific terms which are not present in lexical databases like WordNet (Miller, 1995) can be important in the context of the paper. (Table 7 : Results attained by applying CWR on the generated summary using the abstract (ABS). Here, R in the second row stands for 'ROUGE'.) For example, words like \"hepatocellular\", \"carcinoma\", etc., are important for paper S016882782030009X but are not present in the WordNet vocabulary. 
Therefore, our results using CWR (Table 7) underperform those of MMR_2 (Table 5) , demanding a more sophisticated model.", "cite_spans": [ { "start": 1858, "end": 1872, "text": "(Miller, 1995)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 91, "end": 98, "text": "Table 5", "ref_id": null }, { "start": 404, "end": 411, "text": "Table 6", "ref_id": "TABREF5" }, { "start": 823, "end": 831, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 974, "end": 981, "text": "Table 5", "ref_id": null }, { "start": 1186, "end": 1193, "text": "Table 7", "ref_id": null }, { "start": 1300, "end": 1307, "text": "Table 5", "ref_id": null }, { "start": 1349, "end": 1356, "text": "Table 7", "ref_id": null }, { "start": 1395, "end": 1402, "text": "Table 5", "ref_id": null }, { "start": 1984, "end": 1991, "text": "Table 7", "ref_id": null }, { "start": 2239, "end": 2248, "text": "(Table 7)", "ref_id": null }, { "start": 2278, "end": 2287, "text": "(Table 5)", "ref_id": null } ], "eq_spans": [], "section": "Discussion of Results", "sec_num": "3.3" }, { "text": "(Table fragment: Variant vs. ABSTRACT, COMMUNITY and HUMAN summaries, each reported in R-2 and R-SU4; rows correspond to ALBERT + MMR variants.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion of Results", "sec_num": "3.3" }, { "text": "Variant Data R-1 R-2 R-L (F1 scores)
CWR + MMR_1 ABS 0.3986 0.1586 0.2187
CWR + MMR_2 ABS 0.4033 0.1614 0.2209", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion of Results", "sec_num": "3.3" }, { "text": "Most of the existing works on scientific document summarization focus on generating a summary of shorter length (a maximum of up to 250 words). Such a length constraint can be sufficient when summarizing news articles, but for scientific articles, such a short summary requires expertise in the scientific domain to understand. 
The LongSumm 2020 shared task addresses this issue by generating longer summaries (up to 600 words) of scientific articles.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LongSumm 2020", "sec_num": "4" }, { "text": "The training corpus for this task includes 1705 extractive summaries and 531 abstractive summaries of NLP/ML scientific papers. The extractive summaries are based on video talks from associated conferences (Lev et al., 2019) , while the abstractive summaries are from blog posts created by NLP and ML researchers. The test set consists of 22 research papers for both extractive and abstractive summarization, and the task is to generate a summary of up to 600 words. In the current paper, we have focused only on the extractive summarization track of LongSumm.", "cite_spans": [ { "start": 207, "end": 225, "text": "(Lev et al., 2019)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset Description", "sec_num": "4.1" }, { "text": "To solve LongSumm in an extractive way, we have utilized a neural network based approach, i.e., a convolutional neural network (Kim, 2014) . The sentences which are part of the summary are assigned the label 1, and the remaining sentences are assigned 0.", "cite_spans": [ { "start": 128, "end": 139, "text": "(Kim, 2014)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4.2" }, { "text": "In other words, we have posed this task as a binary classification problem, where the task is to identify whether a given sentence should be part of the summary or not. Positional embedding is also used along with the sentence embedding. The detailed methodology used in our CNN is described below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4.2" }, { "text": "The authors of (Kim, 2014) showed that a CNN with one layer of convolution performs remarkably well for sentence classification tasks. 
Therefore, we have used a one-dimensional CNN for extracting features from sentences, as described mathematically below:", "cite_spans": [ { "start": 11, "end": 22, "text": "(Kim, 2014)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Convolution:", "sec_num": "1." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "c_i = g(W_f^T X_{i:i+m-1} + b) (3) c = [c_1, c_2, c_3, ..., c_{n-m+1}]", "eq_num": "(4)" } ], "section": "Convolution:", "sec_num": "1." }, { "text": "where b is the bias term, g is a non-linear activation function, and W_f, m and X are the convolution filter, window size and concatenated sentence vector, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Convolution:", "sec_num": "1." }, { "text": "Pooling is a down-sampling operation. In max pooling, each pooling operation selects the maximum value of the current view, reducing the size while preserving features, as shown below: ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MaxPooling:", "sec_num": "2." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h_l = max(c)", "eq_num": "(5)" } ], "section": "MaxPooling:", "sec_num": "2." }, { "text": "h = [h_1, h_2, ..., h_k]^T (6)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MaxPooling:", "sec_num": "2." }, { "text": "where h is the hidden representation of the sentence after convolution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MaxPooling:", "sec_num": "2." }, { "text": "In any document, regardless of the domain, more relevant sentences can be found in certain sections of the document, like the leading paragraph (Saini et al., 2019) . 
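The convolution and max-pooling steps (Eqs. 3-6) can be sketched in plain Python with a single hand-set filter; this is an illustration of the operations only, since the actual system uses a deep learning framework with many learned filters:

```python
# Sketch of Eqs. (3)-(6): 1-D convolution over a sequence of word vectors
# followed by max-over-time pooling, written without a DL framework.
import math

def conv1d(X, W_f, b):
    # X: list of n word vectors; W_f: filter spanning a window of m word vectors
    m = len(W_f)
    flat_w = [w for row in W_f for w in row]
    c = []
    for i in range(len(X) - m + 1):
        window = [x for vec in X[i:i + m] for x in vec]  # concatenation X_{i:i+m-1}
        z = sum(wi * xi for wi, xi in zip(flat_w, window)) + b
        c.append(math.tanh(z))  # g: a non-linear activation
    return c  # feature map c = [c_1, ..., c_{n-m+1}]

def max_pool(c):
    # max-over-time pooling (Eq. 5): one scalar feature per filter
    return max(c)
```

Running `conv1d` with k different filters and pooling each feature map yields the k-dimensional hidden representation h of Eq. (6).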
In particular, scientific articles are structured in such a way that sentences at the start (the abstract) are more informative, as represented below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Positional Embedding:", "sec_num": "3." }, { "text": "p_i = 1 / (1 + i) (7)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Positional Embedding:", "sec_num": "3." }, { "text": "where p_i is the positional score of the i-th (0 \u2264 i < N ) sentence in the article. The higher the score of a sentence, the more informative it is. Therefore, positional embedding is also utilized in our CNN framework.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Positional Embedding:", "sec_num": "3." }, { "text": "After the max-pooling layer, we obtain the penultimate layer h (Eq. 6), which is the vector representation of the input sentence obtained from the CNN. We have also fed the sentence position encoding (h_p) as an additional feature.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Flattening:", "sec_num": "4." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h^* = [h, h_p]", "eq_num": "(8)" } ], "section": "Flattening:", "sec_num": "4." }, { "text": "where h^* is the semantic representation obtained from the CNN and h_p is the position encoding, represented as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Flattening:", "sec_num": "4." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h_p = [p_1, p_2, ..., p_k]^T", "eq_num": "(9)" } ], "section": "Flattening:", "sec_num": "4." }, { "text": "To avoid overfitting, we have used regularization as mentioned in Eq. 10.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Flattening:", "sec_num": "4." 
}, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "y = \u03c3(w_r (h^* \u2297 r) + b_r)", "eq_num": "(10)" } ], "section": "Flattening:", "sec_num": "4." }, { "text": "Finally, we have used the sigmoid function, as per Eq. 11, for obtaining probability scores:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Flattening:", "sec_num": "4." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03c3(\u0177) = 1 / (1 + e^{-\u0177})", "eq_num": "(11)" } ], "section": "Flattening:", "sec_num": "4." }, { "text": "Note that we use the sigmoid probabilities to assign ranks to sentences and select sentences for inclusion in the summary until the length constraint is satisfied.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Flattening:", "sec_num": "4." }, { "text": "For our experimentation, we have used SciBERT (Beltagy et al., 2019b) to obtain the sentence embeddings, as it is trained on a large multi-domain corpus of scientific publications to improve performance on many scientific NLP tasks like summarization (Gabriel et al., 2019) and relation extraction (Sung et al., 2019) . For the convolution layer, we have used 600 filters with a kernel size of 3 and ReLU as our activation function. For pooling, we have used a pool size of 2. We train the model for 10 epochs with the Adadelta optimizer.", "cite_spans": [ { "start": 46, "end": 69, "text": "(Beltagy et al., 2019b)", "ref_id": "BIBREF2" }, { "start": 247, "end": 269, "text": "(Gabriel et al., 2019)", "ref_id": "BIBREF9" }, { "start": 294, "end": 313, "text": "(Sung et al., 2019)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental setup", "sec_num": "4.3" }, { "text": "We have submitted 4 systems for the LongSumm shared task. 
Out of the 4, two systems are based on the CNN architecture. The key difference between the two neural models is the limit on the number of words for summary generation: the first system (CNN_1 ) uses a strict limit of 600 words, while the second system (CNN_2 ) maintains an average of 600 words when generating summaries. For the other two systems, we have used MMR_1 and MMR_2 with the same hyperparameters as for LaySumm (Section 3.3). The results obtained for the LongSumm 2020 task are reported in Table 8 . From this table, it can be inferred that CNN_2 performs better in terms of ROUGE-2 and ROUGE-L F1-measure, but in terms of ROUGE-1 F1-measure, MMR_2 performs the best. Training vs. testing accuracy for CNN_2 is shown in Figure 4 . ", "cite_spans": [], "ref_spans": [ { "start": 546, "end": 553, "text": "Table 8", "ref_id": "TABREF8" }, { "start": 566, "end": 574, "text": "Table it", "ref_id": null }, { "start": 810, "end": 819, "text": "Figure 4", "ref_id": "FIGREF6" } ], "eq_spans": [], "section": "Discussion of Results", "sec_num": "4.4" }, { "text": "We have investigated the effects of using maximal marginal relevance (MMR) in developing systems for three shared tasks: CL-SciSumm, CL-LaySumm, and LongSumm 2020. Another variant of MMR is also proposed by incorporating a length-based feature. For LongSumm, we have also investigated the effect of using a convolutional neural network. As the goal of LaySumm is to generate a lay summary that is understandable for a non-technical audience, we have tried a complex word removal approach using a lexical database (WordNet), which fails due to the absence of scientific terms from the database. 
In the future, we would like to develop a more sophisticated approach for lay summary generation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future work", "sec_num": "5" }, { "text": "https://ornlcda.github.io/SDProc/index.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://ornlcda.github.io/SDProc/sharedtasks.html#laysumm", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Table 5: Results attained using MMR and its variant for CL-LaySumm 2020. Here, R in the second row stands for 'ROUGE'.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Data ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Variant", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Probabilistic fasttext for multi-sense word embeddings", "authors": [ { "first": "Ben", "middle": [], "last": "Athiwaratkun", "suffix": "" }, { "first": "Andrew", "middle": [ "Gordon" ], "last": "Wilson", "suffix": "" }, { "first": "Anima", "middle": [], "last": "Anandkumar", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1806.02901" ] }, "num": null, "urls": [], "raw_text": "Ben Athiwaratkun, Andrew Gordon Wilson, and Anima Anandkumar. 2018. Probabilistic fasttext for multi-sense word embeddings. 
arXiv preprint arXiv:1806.02901.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Scibert: A pretrained language model for scientific text", "authors": [ { "first": "Iz", "middle": [], "last": "Beltagy", "suffix": "" }, { "first": "Kyle", "middle": [], "last": "Lo", "suffix": "" }, { "first": "Arman", "middle": [], "last": "Cohan", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1903.10676" ] }, "num": null, "urls": [], "raw_text": "Iz Beltagy, Kyle Lo, and Arman Cohan. 2019a. Scibert: A pretrained language model for scientific text. arXiv preprint arXiv:1903.10676.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Scibert: A pretrained language model for scientific text", "authors": [ { "first": "Iz", "middle": [], "last": "Beltagy", "suffix": "" }, { "first": "Kyle", "middle": [], "last": "Lo", "suffix": "" }, { "first": "Arman", "middle": [], "last": "Cohan", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "3613--3618", "other_ids": { "DOI": [ "10.18653/v1/D19-1371" ] }, "num": null, "urls": [], "raw_text": "Iz Beltagy, Kyle Lo, and Arman Cohan. 2019b. Scibert: A pretrained language model for scientific text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 3613-3618. 
Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The use of mmr, diversity-based reranking for reordering documents and producing summaries", "authors": [ { "first": "Jaime", "middle": [], "last": "Carbinell", "suffix": "" }, { "first": "Jade", "middle": [], "last": "Goldstein", "suffix": "" } ], "year": 2017, "venue": "ACM SIGIR Forum", "volume": "51", "issue": "", "pages": "209--210", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jaime Carbinell and Jade Goldstein. 2017. The use of mmr, diversity-based reranking for reordering documents and producing summaries. In ACM SIGIR Forum, volume 51, pages 209-210. ACM New York, NY, USA.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Universal sentence encoder", "authors": [ { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Yinfei", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Sheng-Yi", "middle": [], "last": "Kong", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Hua", "suffix": "" }, { "first": "Nicole", "middle": [], "last": "Limtiaco", "suffix": "" }, { "first": "Rhomni", "middle": [], "last": "St John", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Constant", "suffix": "" }, { "first": "Mario", "middle": [], "last": "Guajardo-Cespedes", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Yuan", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Tar", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1803.11175" ] }, "num": null, "urls": [], "raw_text": "Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, et al. 2018. Universal sentence encoder. 
arXiv preprint arXiv:1803.11175.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Overview and insights from scientific document summarization shared tasks 2020: CL-SciSumm, LaySumm and Long-Summ", "authors": [ { "first": "M", "middle": [ "K" ], "last": "Chandrasekaran", "suffix": "" }, { "first": "G", "middle": [], "last": "Feigenblat", "suffix": "" }, { "first": "Hovy", "middle": [ "E" ], "last": "", "suffix": "" }, { "first": "A", "middle": [], "last": "Ravichander", "suffix": "" }, { "first": "M", "middle": [], "last": "Shmueli-Scheuer", "suffix": "" }, { "first": "", "middle": [], "last": "De Waard", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the First Workshop on Scholarly Document Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. K. Chandrasekaran, G. Feigenblat, Hovy. E., A. Ravichander, M. Shmueli-Scheuer, and A De Waard. 2020. Overview and insights from scientific document summarization shared tasks 2020: CL-SciSumm, LaySumm and LongSumm. In Proceedings of the First Workshop on Scholarly Document Processing (SDP 2020).", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Overview and results: Cl-scisumm shared task", "authors": [ { "first": "Michihiro", "middle": [], "last": "Muthu Kumar Chandrasekaran", "suffix": "" }, { "first": "Dragomir", "middle": [], "last": "Yasunaga", "suffix": "" }, { "first": "Dayne", "middle": [], "last": "Radev", "suffix": "" }, { "first": "Min-Yen", "middle": [], "last": "Freitag", "suffix": "" }, { "first": "", "middle": [], "last": "Kan", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.09854" ] }, "num": null, "urls": [], "raw_text": "Muthu Kumar Chandrasekaran, Michihiro Yasunaga, Dragomir Radev, Dayne Freitag, and Min-Yen Kan. 2019. Overview and results: Cl-scisumm shared task 2019. 
arXiv preprint arXiv:1907.09854.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Scientific document summarization via citation contextualization and scientific discourse", "authors": [ { "first": "Arman", "middle": [], "last": "Cohan", "suffix": "" }, { "first": "Nazli", "middle": [], "last": "Goharian", "suffix": "" } ], "year": 2018, "venue": "International Journal on Digital Libraries", "volume": "19", "issue": "2-3", "pages": "287--303", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arman Cohan and Nazli Goharian. 2018. Scientific document summarization via citation contextualization and scientific discourse. International Journal on Digital Libraries, 19(2-3):287-303.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Section mixture models for scientific document summarization", "authors": [ { "first": "M", "middle": [], "last": "John", "suffix": "" }, { "first": "", "middle": [], "last": "Conroy", "suffix": "" }, { "first": "T", "middle": [], "last": "Sashka", "suffix": "" }, { "first": "", "middle": [], "last": "Davis", "suffix": "" } ], "year": 2018, "venue": "International Journal on Digital Libraries", "volume": "19", "issue": "2-3", "pages": "305--322", "other_ids": {}, "num": null, "urls": [], "raw_text": "John M Conroy and Sashka T Davis. 2018. Section mixture models for scientific document summarization. 
International Journal on Digital Libraries, 19(2-3):305-322.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Cooperative generator-discriminator networks for abstractive summarization with narrative flow", "authors": [ { "first": "Saadia", "middle": [], "last": "Gabriel", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bosselut", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Holtzman", "suffix": "" }, { "first": "Kyle", "middle": [], "last": "Lo", "suffix": "" }, { "first": "\u00c7", "middle": [], "last": "Asli", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2019, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saadia Gabriel, Antoine Bosselut, Ari Holtzman, Kyle Lo, Asli \u00c7elikyilmaz, and Yejin Choi. 2019. Cooperative generator-discriminator networks for abstractive summarization with narrative flow. ArXiv, abs/1907.01272.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The cl-scisumm shared task 2017: Results and key insights", "authors": [ { "first": "Kokil", "middle": [], "last": "Jaidka", "suffix": "" }, { "first": "Muthu", "middle": [], "last": "Kumar Chandrasekaran", "suffix": "" }, { "first": "Devanshu", "middle": [], "last": "Jain", "suffix": "" }, { "first": "Min-Yen", "middle": [], "last": "Kan", "suffix": "" } ], "year": 2017, "venue": "BIRNDL@SIGIR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kokil Jaidka, Muthu Kumar Chandrasekaran, Devanshu Jain, and Min-Yen Kan. 2017. The cl-scisumm shared task 2017: Results and key insights. 
In BIRNDL@SIGIR.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The cl-scisumm shared task 2018: Results and key insights", "authors": [ { "first": "Kokil", "middle": [], "last": "Jaidka", "suffix": "" }, { "first": "Michihiro", "middle": [], "last": "Yasunaga", "suffix": "" }, { "first": "Muthu", "middle": [], "last": "Kumar Chandrasekaran", "suffix": "" }, { "first": "Dragomir", "middle": [], "last": "Radev", "suffix": "" }, { "first": "Min-Yen", "middle": [], "last": "Kan", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1909.00764" ] }, "num": null, "urls": [], "raw_text": "Kokil Jaidka, Michihiro Yasunaga, Muthu Kumar Chandrasekaran, Dragomir Radev, and Min-Yen Kan. 2019. The cl-scisumm shared task 2018: Results and key insights. arXiv preprint arXiv:1909.00764.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Convolutional neural networks for sentence classification", "authors": [ { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1746--1751", "other_ids": { "DOI": [ "10.3115/v1/D14-1181" ] }, "num": null, "urls": [], "raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746-1751, Doha, Qatar. 
Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Poli2sum@ cl-scisumm-19: Identify, classify, and summarize cited text spans by means of ensembles of supervised models", "authors": [ { "first": "Luca", "middle": [], "last": "Moreno La Quatra", "suffix": "" }, { "first": "Elena", "middle": [], "last": "Cagliero", "suffix": "" }, { "first": "", "middle": [], "last": "Baralis", "suffix": "" } ], "year": 2019, "venue": "BIRNDL@ SIGIR", "volume": "", "issue": "", "pages": "233--246", "other_ids": {}, "num": null, "urls": [], "raw_text": "Moreno La Quatra, Luca Cagliero, and Elena Baralis. 2019. Poli2sum@ cl-scisumm-19: Identify, classify, and summarize cited text spans by means of ensembles of supervised models. In BIRNDL@ SIGIR, pages 233-246.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Talksumm: A dataset and scalable annotation method for scientific paper summarization based on conference talks", "authors": [ { "first": "Guy", "middle": [], "last": "Lev", "suffix": "" }, { "first": "Michal", "middle": [], "last": "Shmueli-Scheuer", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Herzig", "suffix": "" }, { "first": "Achiya", "middle": [], "last": "Jerbi", "suffix": "" }, { "first": "David", "middle": [], "last": "Konopnicki", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1906.01351" ] }, "num": null, "urls": [], "raw_text": "Guy Lev, Michal Shmueli-Scheuer, Jonathan Herzig, Achiya Jerbi, and David Konopnicki. 2019. Talksumm: A dataset and scalable annotation method for scientific paper summarization based on conference talks. 
arXiv preprint arXiv:1906.01351.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Mcrank: Learning to rank using multiple classification and gradient boosting", "authors": [ { "first": "Ping", "middle": [], "last": "Li", "suffix": "" }, { "first": "Qiang", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Christopher", "middle": [ "J" ], "last": "Burges", "suffix": "" } ], "year": 2008, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "897--904", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ping Li, Qiang Wu, and Christopher J Burges. 2008. Mcrank: Learning to rank using multiple classification and gradient boosting. In Advances in neural information processing systems, pages 897-904.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Wordnet: a lexical database for english", "authors": [ { "first": "A", "middle": [], "last": "George", "suffix": "" }, { "first": "", "middle": [], "last": "Miller", "suffix": "" } ], "year": 1995, "venue": "Communications of the ACM", "volume": "38", "issue": "11", "pages": "39--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39-41.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Summarunner: A recurrent neural network based sequence model for extractive summarization of documents", "authors": [ { "first": "Ramesh", "middle": [], "last": "Nallapati", "suffix": "" }, { "first": "Feifei", "middle": [], "last": "Zhai", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2017, "venue": "Thirty-First AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. 
Summarunner: A recurrent neural network based sequence model for extractive summarization of documents. In Thirty-First AAAI Conference on Artificial Intelligence.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Abstractive text summarization using sequence-to-sequence rnns and beyond", "authors": [ { "first": "Ramesh", "middle": [], "last": "Nallapati", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Caglar", "middle": [], "last": "Gulcehre", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Xiang", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1602.06023" ] }, "num": null, "urls": [], "raw_text": "Ramesh Nallapati, Bowen Zhou, Caglar Gulcehre, Bing Xiang, et al. 2016. Abstractive text summarization using sequence-to-sequence rnns and beyond. arXiv preprint arXiv:1602.06023.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Deep contextualized word representations", "authors": [ { "first": "E", "middle": [], "last": "Matthew", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Lee", "suffix": "" }, { "first": "", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1802.05365" ] }, "num": null, "urls": [], "raw_text": "Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. 
arXiv preprint arXiv:1802.05365.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Extractive single document summarization using binary differential evolution: Optimization of different sentence quality measures", "authors": [ { "first": "Naveen", "middle": [], "last": "Saini", "suffix": "" }, { "first": "Sriparna", "middle": [], "last": "Saha", "suffix": "" }, { "first": "Dhiraj", "middle": [], "last": "Chakraborty", "suffix": "" }, { "first": "Pushpak", "middle": [], "last": "Bhattacharyya", "suffix": "" } ], "year": 2019, "venue": "PloS one", "volume": "14", "issue": "11", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Naveen Saini, Sriparna Saha, Dhiraj Chakraborty, and Pushpak Bhattacharyya. 2019. Extractive single document summarization using binary differential evolution: Optimization of different sentence quality measures. PloS one, 14(11):e0223477.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Pre-training bert on domain resources for short answer grading", "authors": [ { "first": "Chul", "middle": [], "last": "Sung", "suffix": "" }, { "first": "I", "middle": [], "last": "Tejas", "suffix": "" }, { "first": "Swarnadeep", "middle": [], "last": "Dhamecha", "suffix": "" }, { "first": "Tengfei", "middle": [], "last": "Saha", "suffix": "" }, { "first": "V", "middle": [ "Pulla" ], "last": "Ma", "suffix": "" }, { "first": "Rishi", "middle": [], "last": "Reddy", "suffix": "" }, { "first": "", "middle": [], "last": "Arora", "suffix": "" } ], "year": 2019, "venue": "EMNLP/IJCNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chul Sung, Tejas I. Dhamecha, Swarnadeep Saha, Tengfei Ma, V. Pulla Reddy, and Rishi Arora. 2019. Pre-training bert on domain resources for short answer grading. 
In EMNLP/IJCNLP.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Xlnet: Generalized autoregressive pretraining for language understanding", "authors": [ { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zihang", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Jaime", "middle": [], "last": "Carbonell", "suffix": "" }, { "first": "R", "middle": [], "last": "Russ", "suffix": "" }, { "first": "Quoc V", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "", "middle": [], "last": "Le", "suffix": "" } ], "year": 2019, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "5753--5763", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in neural information processing systems, pages 5753-5763.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Scisummnet: A large annotated corpus and content-impact models for scientific paper summarization with citation networks", "authors": [ { "first": "Michihiro", "middle": [], "last": "Yasunaga", "suffix": "" }, { "first": "Jungo", "middle": [], "last": "Kasai", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Alexander", "middle": [ "R" ], "last": "Fabbri", "suffix": "" }, { "first": "Irene", "middle": [], "last": "Li", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Friedman", "suffix": "" }, { "first": "", "middle": [], "last": "Dragomir R Radev", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "33", "issue": "", "pages": "7386--7393", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michihiro Yasunaga, Jungo Kasai, Rui Zhang, Alexander 
R Fabbri, Irene Li, Dan Friedman, and Dragomir R Radev. 2019. Scisummnet: A large annotated corpus and content-impact models for scientific paper summarization with citation networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7386-7393.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "Proposed Architecture for Task 1(A) and Task 1(B) for CL-SciSumm 2020.", "num": null, "type_str": "figure" }, "FIGREF1": { "uris": null, "text": "LaySumm test data statistics.", "num": null, "type_str": "figure" }, "FIGREF2": { "uris": null, "text": "(1) and (2). The best values of the parameters used in MMR_1 and MMR_2 are highlighted (in bold) in Table 6, i.e., \u03bb_1 = 0.75 for MMR_1, and \u03bb_1 = 0.75, \u03bb_2 = 0.20 for MMR_2.", "num": null, "type_str": "figure" }, "FIGREF3": { "uris": null, "text": "1 0.06548 0.01006 0.24342 0.12235 0.09604 0.01873 ELMO + MMR_1 0.09119 0.01435 0.2328 0.14525 0.11379 0.02512 FastText + MMR_1 0.08718 0.01111 0.25724 0.12458 0.10957 0.01945 SciBERT + MMR_1 0.13277 0.01211 0.18978 0.07994 0.14022 0.01846 USE + MMR_1 0.10521 0.01438 0.27462 0.13962 0.12955 0.02507 XLNET + MMR_1 0.05816 0.00825 0.17212 0.09749 0.08559 0.0176 MMR_2 0.15067 0.07851 0.13976 0.07268 0.15073 0.10237 MMR_2 (full text) 0.03909 0.03708 0.12305 0.06701 0.05206 0.0503", "num": null, "type_str": "figure" }, "FIGREF5": { "uris": null, "text": "Architecture used for LongSumm 2020.", "num": null, "type_str": "figure" }, "FIGREF6": { "uris": null, "text": "Training vs. validation accuracy", "num": null, "type_str": "figure" }, "TABREF1": { "text": "", "num": null, "type_str": "table", "html": null, "content": "
Task 1B data statistics
2.2.3 Task 2
" }, "TABREF2": { "text": "Performance of different system runs for Task 1A", "num": null, "type_str": "table", "html": null, "content": "
Task 1B
Variant Precision(Micro) Precision(Macro) Recall(Micro) Recall(Macro) F1(Micro) F1(Macro)
ALBERT 0.4649 0.4102 0.0789 0.0716 0.1349 0.122
ELMO0.3333 0.2922 0.0461 0.0463 0.0809 0.0799
FastText 0.3882 0.4386 0.0985 0.0991 0.1571 0.1617
SciBERT 0.2644 0.2715 0.0341 0.0311 0.0604 0.0558
USE0.4900 0.4849 0.1469 0.1461 0.2261 0.2245
XLNET 0.0517 0.0403 0.0044 0.0049 0.0082 0.0088
" }, "TABREF3": { "text": "Performance of different system runs for Task 1B", "num": null, "type_str": "table", "html": null, "content": "" }, "TABREF4": { "text": "Performance (F1 scores) of different system runs for Task 2", "num": null, "type_str": "table", "html": null, "content": "
" }, "TABREF5": { "text": "Study of parameters used in M M R 1 and M M R 2 for Lay Summary generation. Here, we have used only ABSTRACT for generating summary.", "num": null, "type_str": "table", "html": null, "content": "
" }, "TABREF8": { "text": "Results of our top system runs for LongSumm 2020 shared task", "num": null, "type_str": "table", "html": null, "content": "
" } } } }