{ "paper_id": "S14-1010", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:32:31.140446Z" }, "title": "Text Summarization through Entailment-based Minimum Vertex Cover", "authors": [ { "first": "Anand", "middle": [], "last": "Gupta", "suffix": "", "affiliation": { "laboratory": "", "institution": "NSIT", "location": { "region": "New Delhi", "country": "India" } }, "email": "" }, { "first": "Manpreet", "middle": [], "last": "Kaur", "suffix": "", "affiliation": { "laboratory": "", "institution": "NSIT", "location": { "settlement": "New Delhi", "country": "India" } }, "email": "" }, { "first": "Adarsh", "middle": [], "last": "Singh", "suffix": "", "affiliation": { "laboratory": "", "institution": "NSIT", "location": { "settlement": "New Delhi", "country": "India" } }, "email": "" }, { "first": "Aseem", "middle": [], "last": "Goel", "suffix": "", "affiliation": { "laboratory": "", "institution": "NSIT", "location": { "settlement": "New Delhi", "country": "India" } }, "email": "" }, { "first": "Shachar", "middle": [], "last": "Mirkin", "suffix": "", "affiliation": { "laboratory": "", "institution": "Xerox Research Centre Europe", "location": { "settlement": "Meylan", "country": "France" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Sentence Connectivity is a textual characteristic that may be incorporated intelligently for the selection of sentences of a well meaning summary. However, the existing summarization methods do not utilize its potential fully. The present paper introduces a novel method for singledocument text summarization. It poses the text summarization task as an optimization problem, and attempts to solve it using Weighted Minimum Vertex Cover (WMVC), a graph-based algorithm. Textual entailment, an established indicator of semantic relationships between text units, is used to measure sentence connectivity and construct the graph on which WMVC operates. Experiments on a standard summarization dataset show that the suggested algorithm outperforms related methods.", "pdf_parse": { "paper_id": "S14-1010", "_pdf_hash": "", "abstract": [ { "text": "Sentence Connectivity is a textual characteristic that may be incorporated intelligently for the selection of sentences of a well meaning summary. However, the existing summarization methods do not utilize its potential fully. The present paper introduces a novel method for singledocument text summarization. It poses the text summarization task as an optimization problem, and attempts to solve it using Weighted Minimum Vertex Cover (WMVC), a graph-based algorithm. Textual entailment, an established indicator of semantic relationships between text units, is used to measure sentence connectivity and construct the graph on which WMVC operates. Experiments on a standard summarization dataset show that the suggested algorithm outperforms related methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In the present age of digital revolution with proliferating numbers of internet-connected devices, we are facing an exponential rise in the volume of available information. Users are constantly facing the problem of deciding what to read and what to skip. 
Text summarization provides a practical solution to this problem, which has caused a resurgence of research in this field.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Given a topic of interest, a standard search often yields a large number of documents. Many of them are of no interest to the user. Rather than going through the entire result-set, the reader may read the gist of a document, produced via summarization tools, and then decide whether to fully read the document or not, thus saving a substantial amount of time. According to Jones (2007) , a summary can be defined as \"a reductive transformation of source text to summary text through content reduction by selection and/or generalization on what is important in the source\". Summarization based on content reduction by selection is referred to as extraction (identifying and including the important sentences in the final summary), whereas a summary involving content reduction by generalization is called abstraction (reproducing the most informative content in a new way).", "cite_spans": [ { "start": 371, "end": 383, "text": "Jones (2007)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The present paper focuses on extraction-based single-document summarization. We formulate the task as a graph-based optimization problem, where vertices represent the sentences and edges the connections between sentences. Textual entailment (Giampiccolo et al., 2007) is employed to estimate the degree of connectivity between sentences, and subsequently to assign a weight to each vertex of the graph. Then, the Weighted Minimum Vertex Cover, a classical graph algorithm, is used to find the minimal set of vertices (that is -sentences) that forms a cover. The idea is that such a cover of well-connected vertices would correspond to a cover of the salient content of the document.", "cite_spans": [ { "start": 241, "end": 267, "text": "(Giampiccolo et al., 2007)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of the paper is organized as follows: In Section 2, we discuss related work and describe the WMVC algorithm. In Section 3, we propose a novel summarization method, and in Section 4, experiments and results are presented. Finally, in Section 5, we conclude and outline future research directions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Extractive text summarization is the task of identifying those text segments which provide important information about the gist of the document -the salient units of the text. In (Marcu, 2008) , salient units are determined as the ones that contain frequently-used words, contain words that appear within titles and headings, are located at the beginning or at the end of sections, contain key phrases, and are the most highly connected to other parts of the text. In this work we focus on the last of the above criteria, connectivity, to find highly connected sentences in a document. 
Such sentences often contain information that is found in other sentences, and are therefore natural candidates to be included in the summary.", "cite_spans": [ { "start": 179, "end": 192, "text": "(Marcu, 2008)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "The connectivity between sentences has been previously exploited for extraction-based summarization. Salton et al. (1997) generate intra-document links between passages of a document using automatic hypertext link generation algorithms. Mani and Bloedorn (1997) use the number of shared words, phrases and co-references to measure connectedness among sentences. In (Barzilay and Elhadad, 1999) , lexical chains are constructed based on word relatedness.", "cite_spans": [ { "start": 101, "end": 121, "text": "Salton et al. (1997)", "ref_id": "BIBREF14" }, { "start": 237, "end": 261, "text": "Mani and Bloedorn (1997)", "ref_id": "BIBREF10" }, { "start": 365, "end": 393, "text": "(Barzilay and Elhadad, 1999)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2.1" }, { "text": "Textual entailment (TE) has recently been exploited for text summarization in order to find the highly connected sentences in a document. Textual entailment is an asymmetric relation between two text fragments specifying whether one fragment can be inferred from the other. Tatar et al. (2008) have proposed a method called Logic Text Tiling (LTT), which uses TE to score each sentence by the number of sentences it entails, and to form text segments comprising highly connected sentences. Another method, called Analog Textual Entailment and Spectral Clustering (ATESC), suggested in (Gupta et al., 2012) , also uses TE for sentence scoring, using analog scores.", "cite_spans": [ { "start": 271, "end": 290, "text": "Tatar et al. (2008)", "ref_id": "BIBREF16" }, { "start": 593, "end": 613, "text": "(Gupta et al., 2012)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2.1" }, { "text": "We use a graph-based algorithm to produce the summary. Graph-based ranking algorithms have been employed for text summarization in the past, with a representation similar to ours. Vertices represent text units (words, phrases or sentences) and an edge between two vertices represents some kind of relationship between two text units. Scores are assigned to the vertices using some relevant criteria to select the vertices with the highest scores. In (Mihalcea and Tarau, 2004) , content overlap between sentences is used to add edges between two vertices, and PageRank (Page et al., 1999) is used for scoring the vertices. Erkan and Radev (2004) use inter-sentence cosine similarity based on word overlap and tf-idf weighting to identify relations between sentences. 
In our paper, we use TE to compute connectivity between nodes of the graph and apply the weighted minimum vertex cover (WMVC) algorithm on the graph to select the sentences for the summary.", "cite_spans": [ { "start": 446, "end": 472, "text": "(Mihalcea and Tarau, 2004)", "ref_id": "BIBREF12" }, { "start": 565, "end": 584, "text": "(Page et al., 1999)", "ref_id": "BIBREF13" }, { "start": 619, "end": 641, "text": "Erkan and Radev (2004)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2.1" }, { "text": "WMVC is a combinatorial optimization problem listed among the classical NP-complete problems (Garey and Johnson, 1979; Cormen et al., 2001) . Over the years, it has caught the attention of many researchers, due to its NP-completeness, and also because its formulation matches many real-world problems.", "cite_spans": [ { "start": 94, "end": 119, "text": "(Garey and Johnson, 1979;", "ref_id": "BIBREF4" }, { "start": 120, "end": 140, "text": "Cormen et al., 2001)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Weighted MVC", "sec_num": "2.2" }, { "text": "Weighted Minimum Vertex Cover Given a weighted graph G = (V, E, w), such that w is a positive weight (cost) function on the vertices, w : V \u2192 R, a weighted minimum vertex cover of G is a subset of the vertices, C \u2286 V, such that for every edge (u, v) \u2208 E either u \u2208 C or v \u2208 C (or both), and the total sum of the weights is minimized.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Weighted MVC", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "C = \\arg\\min_C \\sum_{v \\in C} w(v)", "eq_num": "(1)" } ], "section": "Weighted MVC", "sec_num": "2.2" },
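To make the optimization concrete, here is a minimal, illustrative Python sketch of Equation (1) solved exactly as a 0/1 integer linear program with the PuLP library. This is not the authors' implementation (they used a MATLAB routine, grMinVerCover); the function name wmvc_ilp and the input format are our own assumptions.

```python
# Hypothetical sketch of WMVC via ILP -- not the authors' code.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

def wmvc_ilp(vertices, edges, w):
    """Return a minimum-weight vertex cover of an undirected graph.

    vertices: iterable of vertex ids (e.g., sentence indices)
    edges:    iterable of (u, v) pairs
    w:        dict mapping vertex id -> positive weight
    """
    prob = LpProblem("wmvc", LpMinimize)
    # x[v] = 1 iff vertex v is taken into the cover C.
    x = {v: LpVariable(f"x_{v}", cat="Binary") for v in vertices}
    # Objective (Equation 1): minimize the total weight of C.
    prob += lpSum(w[v] * x[v] for v in vertices)
    # Cover constraint: every edge needs at least one endpoint in C.
    for u, v in edges:
        prob += x[u] + x[v] >= 1
    prob.solve()
    return [v for v in vertices if x[v].value() == 1]
```

On sentence graphs of this size the ILP solves the problem exactly; the 2-approximation guarantee mentioned in Section 3.4 only matters when exact solving becomes too expensive.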
{ "text": "3 Weighted MVC for text summarization", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Weighted MVC", "sec_num": "2.2" }, { "text": "We formulate the text summarization task as a WMVC problem. The input document to be summarized is represented as a weighted graph G = (V, E, w), where each v \u2208 V corresponds to a sentence in the document; an edge (u, v) \u2208 E exists if either u entails v or v entails u with a value at least as high as an empirically-set threshold. A weight w is then assigned to each sentence based on (negated) TE values (see Section 3.2 for further details). WMVC returns a cover C which is a subset of the sentences with a minimum total weight, corresponding to the best-connected sentences in the document. The cover is our output -the summary of the input document. Our proposed method, shown in Figure 1 , consists of the following main steps.", "cite_spans": [], "ref_spans": [ { "start": 688, "end": 696, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Weighted MVC", "sec_num": "2.2" }, { "text": "1. Intra-sentence textual entailment score computation 2. Entailment-based connectivity scoring 3. Entailment connectivity graph construction 4. Application of WMVC to the graph. We elaborate on each of these steps in the following sections.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Weighted MVC", "sec_num": "2.2" }, { "text": "Given a document d for which a summary is to be generated, we represent d as an array of sentences D 1\u00d7N . An example article is shown in Table 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computing entailment scores", "sec_num": "3.1" }, { "text": "A representative of the African National Congress said Saturday the South African government may release black nationalist leader Nelson Mandela as early as Tuesday.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "S1", "sec_num": null }, { "text": "\"There are very strong rumors in South Africa today that on Nov. 15 Nelson Mandela will be released,\" said Yusef Saloojee, chief representative in Canada for the ANC, which is fighting to end white-minority rule in South Africa.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "S2", "sec_num": null }, { "text": "Mandela, the 70-year-old leader of the ANC jailed 27 years ago, was sentenced to life in prison for conspiring to overthrow the South African government.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "S3", "sec_num": null }, { "text": "He was transferred from prison to a hospital in August for treatment of tuberculosis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "S4", "sec_num": null }, { "text": "Since then, it has been widely rumoured Mandela will be released by Christmas in a move to win strong international support for the South African government.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "S5", "sec_num": null }, { "text": "\"It will be a victory for the people of South Africa and indeed a victory for the whole of Africa,\" Saloojee told an audience at the University of Toronto.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "S6", "sec_num": null }, { "text": "A South African government source last week indicated recent rumours of Mandela's impending release were orchestrated by members of the anti-apartheid movement to pressure the government into taking some action.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "S7", "sec_num": null }, { "text": "And a prominent anti-apartheid activist in South Africa said there has been \"no indication (Mandela) would be released today or in the near future.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "S8", "sec_num": null }, { "text": "Apartheid is South Africa's policy of racial separation. Summary: \"There are very strong rumors in South Africa today that on Nov. 15 Nelson Mandela will be released,\" said Yusef Saloojee, chief representative in Canada for the ANC, which is fighting to end white-minority rule in South Africa. He was transferred from prison to a hospital in August for treatment of tuberculosis. A South African government source last week indicated recent rumours of Mandela's impending release were orchestrated by members of the anti-apartheid movement to pressure the government into taking some action. Apartheid is South Africa's policy of racial separation. (Table 1 ) 
We use this article to demonstrate the steps of our algorithm.", "cite_spans": [], "ref_spans": [ { "start": 648, "end": 655, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "S9", "sec_num": null }, { "text": "Then, we compute a TE score between every possible pair of sentences in D using a textual entailment tool. TE scores for all the pairs are stored in a sentence entailment matrix, SE N \u00d7N . An entry SE [i, j] in the matrix represents the extent by which sentence i entails sentence j. The sentence entailment matrix produced for our example document is shown in Table 2 .", "cite_spans": [ { "start": 201, "end": 207, "text": "[i, j]", "ref_id": null } ], "ref_spans": [ { "start": 361, "end": 368, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Computing entailment scores", "sec_num": "3.1" }, { "text": "Our assumption is that entailment between sentences indicates connectivity, which -as mentioned above -is an indicator of sentence salience. More specifically, the salience of a sentence is determined by the degree to which it entails other sentences in the document. We thus use the sentence entailment matrix to compute a connectivity score for each sentence by summing the entailment scores of the sentence with respect to the rest of the sentences in the document, and denote this sum as ConnScore. Formally, ConnScore for sentence i is computed as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Connectivity scores", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "ConnScore[i] = \\sum_{j \\neq i} SE[i, j]", "eq_num": "(2)" } ], "section": "Connectivity scores", "sec_num": "3.2" }, { "text": "Applying it to each sentence in the document, we obtain the ConnScore 1\u00d7N vector. The sentence connectivity scores corresponding to Table 2 are shown in Table 3 .", "cite_spans": [], "ref_spans": [ { "start": 132, "end": 139, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 153, "end": 160, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Connectivity scores", "sec_num": "3.2" }, { "text": "The more a sentence is connected, the higher its connectivity score. To adapt the scores to the WMVC algorithm, which searches for a minimum-weight solution, we convert the scores into positive weights in inverted order:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entailment connectivity graph construction", "sec_num": "3.3" }, { "text": "w[i] = \u2212ConnScore[i] + Z (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entailment connectivity graph construction", "sec_num": "3.3" }, { "text": "w[i] is the score that is assigned to the vertex of sentence i; Z is a large constant, meant to keep the scores positive. In this paper, Z has been assigned the value 100. Now, the better a sentence is connected, the lower its weight. 
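As a worked illustration of Equations (2) and (3), the following Python sketch (ours, not the authors') recomputes the connectivity scores of Table 3 from the entailment matrix of Table 2. The dictionary encoding and variable names are assumptions; in a real pipeline SE would be filled by a TE tool such as BIUTEE.

```python
# Hypothetical sketch -- recomputing Table 3 from Table 2 (zeros omitted).
SE = {  # SE[i][j]: extent to which sentence i entails sentence j
    1: {4: 0.04, 7: 0.001, 8: 0.02, 9: 0.02},
    2: {1: 0.02, 3: 0.01, 4: 0.04, 5: 0.06, 6: 0.01, 8: 0.01, 9: 0.04},
    3: {4: 0.09, 9: 0.04},
    4: {9: 0.01},
    5: {4: 0.04, 7: 0.01, 8: 0.01, 9: 0.04},
    6: {4: 0.04, 9: 0.02},
    7: {4: 0.04, 5: 0.06, 8: 0.02, 9: 0.27},
    8: {4: 0.04, 7: 0.01, 9: 0.02},
    9: {4: 0.04},
}

# Equation (2): ConnScore[i] = sum of SE[i][j] over all j != i.
conn_score = {i: sum(row.values()) for i, row in SE.items()}

# Equation (3): invert the scores into positive weights, so the
# best-connected sentences receive the LOWEST weights.
Z = 100  # large constant keeping all weights positive
w = {i: Z - conn_score[i] for i in SE}

print(round(conn_score[7], 2), round(conn_score[2], 2))  # 0.39 0.19, as in Table 3
```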
Given the weights, we construct an undirected weighted entailment connectivity graph, G(V, E, w), for the document d. V consists of vertices for the document's sentences, and E are edges that correspond to the entailment relations between the sentences. w is the weight explained above. We create an edge between two vertices as explained below. Suppose that S i and S j are two sentences in d, with entailment scores SE[i, j] and SE [j, i] between them. We set a threshold \u03c4 for the entailment scores as the mean of all entailment values in the matrix SE. We add an edge (i, j) to G if SE[i, j] \u2265 \u03c4 OR SE[j, i] \u2265 \u03c4 , i.e. if at least one of them is as high as the threshold. Figure 2 shows the connectivity graph constructed for the example in Table 1. ", "cite_spans": [ { "start": 662, "end": 668, "text": "[j, i]", "ref_id": null } ], "ref_spans": [ { "start": 900, "end": 908, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Entailment connectivity graph construction", "sec_num": "3.3" }, { "text": "Finally, we apply the weighted minimum vertex cover algorithm to find the minimal vertex cover, which would be the document's summary. We use integer linear programming (ILP) for finding a minimum cover. This algorithm is a 2-approximation for the problem, meaning it is an efficient (polynomial-time) algorithm, guaranteed to find a solution that is at most twice as large as the optimal solution. 1 1 We have used an implementation of ILP for WMVC in MATLAB, grMinVerCover.", "cite_spans": [ { "start": 423, "end": 424, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Applying WMVC", "sec_num": "3.4" }, { "text": "The algorithm's input is G = (V, E, w), a weighted graph where each vertex v i \u2208 V (1 \u2264 i \u2264 n) has weight w i . Its output is a minimal vertex cover C of G, containing a subset of the vertices V . We then list these sentences as our summary, according to their original order in the document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Applying WMVC", "sec_num": "3.4" }, { "text": "After applying WMVC to the graph in Figure 2 , the cover C returned by the algorithm is {S 2 , S 4 , S 7 , S 9 } (highlighted in Figure 2) .", "cite_spans": [], "ref_spans": [ { "start": 36, "end": 44, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 129, "end": 138, "text": "Figure 2)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Applying WMVC", "sec_num": "3.4" }, { "text": "Whenever a summary is required, a word limit on the summary is specified. We find the threshold which results in a cover that matches the word limit through binary search.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Applying WMVC", "sec_num": "3.4" },
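A hedged end-to-end sketch of Sections 3.3-3.4 follows, reusing SE, w and wmvc_ilp from the earlier sketches. It is only illustrative: the paper does not fully specify whether zero entries count toward the mean threshold, so the edge set (and hence the cover) obtained here need not reproduce Figure 2 exactly.

```python
# Hypothetical sketch of graph construction and cover extraction.

def build_edges(SE, tau):
    """Undirected edges (i, j) with SE[i][j] >= tau or SE[j][i] >= tau."""
    edges = set()
    for i in SE:
        for j, score in SE[i].items():
            if score >= tau:
                edges.add((min(i, j), max(i, j)))
    return sorted(edges)

# Threshold: mean of the entailment values in SE. Averaging only the
# nonzero entries is our assumption.
values = [s for row in SE.values() for s in row.values()]
tau = sum(values) / len(values)

edges = build_edges(SE, tau)
summary_ids = wmvc_ilp(list(SE), edges, w)  # sentences of the summary
print(sorted(summary_ids))
```

In the full method, the threshold would then be adjusted by binary search until the selected cover fits the requested word limit.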
{ "text": "We have conducted experiments on the single-document summarization task of the DUC 2002 dataset 2 , using a random sample of 60 news articles, one picked from each of the 60 clusters available in the dataset. The target summary length limit has been set to 100 words. We used version 2.1.1 of BIUTEE (Stern and Dagan, 2012) , a transformation-based TE system, to compute textual entailment scores between pairs of sentences. 3 BIUTEE was trained with 600 text-hypothesis pairs of the RTE-5 dataset (Bentivogli et al., 2009) .", "cite_spans": [ { "start": 305, "end": 328, "text": "(Stern and Dagan, 2012)", "ref_id": "BIBREF15" }, { "start": 500, "end": 525, "text": "(Bentivogli et al., 2009)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental settings", "sec_num": "4.1" }, { "text": "We have compared our method's performance with the following re-implemented methods:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.1.1" }, { "text": "1. Sentence selection with tf-idf: In this baseline, sentences are ranked based on the sum of the tf-idf scores of the words they contain (excluding stopwords), where idf figures are computed from the dataset of 60 documents. Top-ranking sentences are added to the summary one by one, until the word limit is reached.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.1.1" }, { "text": "2. LTT: (see Section 2) 3. ATESC: (see Section 2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.1.1" }, { "text": "We have evaluated the method's performance using ROUGE (Lin, 2004) , which assesses the quality of an automatically-generated summary by comparing it to a \"gold-standard\", typically a human-generated summary. ROUGE-n measures n-gram precision and recall of a candidate summary with respect to a set of reference summaries. We compare the system-generated summary with two reference summaries for each article in the dataset, and show the results for ROUGE-1, ROUGE-2 and ROUGE-SU4, which allows skips within n-grams. These metrics were shown to perform well for single-document text summarization, especially for short summaries. Specifically, Lin and Hovy (2003) showed that ROUGE-1 achieves high correlation with human judgments. 4", "cite_spans": [ { "start": 55, "end": 65, "text": "(Lin, 2004", "ref_id": "BIBREF9" }, { "start": 619, "end": 638, "text": "Lin and Hovy (2003)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation metrics", "sec_num": "4.1.2" }, { "text": "The results for ROUGE-1, ROUGE-2 and ROUGE-SU4 are shown in Tables 4, 5 and 6, respectively. For each, we show the precision (P), recall (R) and F 1 scores. Boldface marks the highest score in each table. As shown in the tables, our method achieves the best score for each of the three metrics.", "cite_spans": [], "ref_spans": [ { "start": 60, "end": 71, "text": "Tables 4, 5", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Results", "sec_num": "4.2" }, { "text": "The generated entailment connectivity graph conveys information about the connectivity of sentences in the document, an important parameter for indicating the salience of a sentence. The purpose of the WMVC is therefore to find a subset of the sentences that is well-connected and covers the content of all the sentences. Note that merely selecting sentences with a greedy approach, which picks those sentences with the highest connectivity scores, does not ensure that all edges of the graph are covered, i.e. it does not ensure that all the information is covered in the summary. 4 In Figure 3 , we illustrate the difference between WMVC (left) and a greedy algorithm (right) over our example document. The vertices selected by each algorithm are highlighted. The set selected by WMVC, {S 2 , S 4 , S 7 , S 9 }, covers all the edges in the graph. In contrast, using the greedy algorithm, the subset of vertices selected on the basis of highest scores is {S 2 , S 3 , S 7 , S 8 }. There, several edges are not covered (e.g. (S 1 \u2192 S 9 )). 4 See (Lin, 2004) for formal definitions of these metrics. Table 6 : ROUGE-SU4 results.", "cite_spans": [ { "start": 529, "end": 540, "text": "(Lin, 2004)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 85, "end": 93, "text": "Figure 3", "ref_id": "FIGREF2" }, { "start": 582, "end": 589, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Analysis", "sec_num": "4.3" },
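To reproduce the gist of this analysis, the sketch below (again ours, with the same assumed data structures) checks which edges a greedy top-k selection leaves uncovered. Depending on the exact scores and threshold, the greedy set may differ from the {S2, S3, S7, S8} of Figure 3, but it generally fails the cover test, while the WMVC output cannot.

```python
# Hypothetical sketch: greedy top-k selection vs. the cover property.

def uncovered(edges, selected):
    """Edges whose two endpoints both lie outside the selected set."""
    s = set(selected)
    return [(u, v) for u, v in edges if u not in s and v not in s]

k = 4  # pick as many sentences as the WMVC cover contains
greedy = sorted(SE, key=lambda i: conn_score[i], reverse=True)[:k]

print(uncovered(edges, greedy))       # typically non-empty: content missed
print(uncovered(edges, summary_ids))  # [] by construction: a true cover
```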
{ "text": "The WMVC approach is therefore much more in sync with the summarization goal of finding a subset of sentences that conveys the important information of the document in a compressed manner.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "4.3" }, { "text": "The paper presents a novel method for single-document extractive summarization. We formulate the summarization task as an optimization problem and employ the weighted minimum vertex cover algorithm on a graph based on textual entailment relations between sentences. Our method has outperformed previous methods that employed TE for summarization, as well as a frequency-based baseline. For future work, we wish to apply our algorithm to smaller segments of the sentences, using partial textual entailment (Levy et al., 2013) , where we may obtain more reliable entailment measurements, and to apply the same approach to multi-document summarization.", "cite_spans": [ { "start": 502, "end": 520, "text": "(Levy et al., 2013)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and future work", "sec_num": "5" }, { "text": "2 http://www-nlpir.nist.gov/projects/duc/data/2002_data.html 3 Available at: http://www.cs.biu.ac.il/nlp/downloads/biutee.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Using lexical chains for text summarization", "authors": [ { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Elhadad", "suffix": "" } ], "year": 1999, "venue": "Advances in Automatic Text Summarization", "volume": "", "issue": "", "pages": "111--121", "other_ids": {}, "num": null, "urls": [], "raw_text": "Regina Barzilay and Michael Elhadad. 1999. Using lexical chains for text summarization. In Inderjeet Mani and Mark T. Maybury, editors, Advances in Automatic Text Summarization, pages 111-121, The MIT Press, 1999.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The fifth PASCAL recognizing textual entailment challenge", "authors": [ { "first": "Luisa", "middle": [], "last": "Bentivogli", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "Hoa", "middle": [ "Trang" ], "last": "Dang", "suffix": "" }, { "first": "Danilo", "middle": [], "last": "Giampiccolo", "suffix": "" }, { "first": "Bernardo", "middle": [], "last": "Magnini", "suffix": "" } ], "year": 2009, "venue": "Proceedings of Text Analysis Conference", "volume": "", "issue": "", "pages": "14--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luisa Bentivogli, Ido Dagan, Hoa Trang Dang, Danilo Giampiccolo, and Bernardo Magnini. 2009. The fifth PASCAL recognizing textual entailment challenge. 
In Proceedings of Text Analysis Conference, pages 14-24, Gaithersburg, Maryland, USA.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Introduction to Algorithms", "authors": [ { "first": "Thomas", "middle": [ "H" ], "last": "Cormen", "suffix": "" }, { "first": "Charles", "middle": [ "E" ], "last": "Leiserson", "suffix": "" }, { "first": "Ronald", "middle": [ "L" ], "last": "Rivest", "suffix": "" }, { "first": "Clifford", "middle": [], "last": "Stein", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. 2001. Introduction to Algorithms. McGraw-Hill, New York, 2nd edition.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "LexRank: Graph-based lexical centrality as salience in text summarization", "authors": [ { "first": "Gunes", "middle": [], "last": "Erkan", "suffix": "" }, { "first": "Dragomir", "middle": [ "R" ], "last": "Radev", "suffix": "" } ], "year": 2004, "venue": "Journal of Artificial Intelligence Research (JAIR)", "volume": "22", "issue": "1", "pages": "457--479", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gunes Erkan and Dragomir R. Radev. 2004. LexRank: Graph-based lexical centrality as salience in text summarization. Journal of Artificial Intelligence Research (JAIR), 22(1):457-479.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Computers and Intractability: A Guide to the Theory of NP-Completeness", "authors": [ { "first": "Michael", "middle": [ "R" ], "last": "Garey", "suffix": "" }, { "first": "David", "middle": [ "S" ], "last": "Johnson", "suffix": "" } ], "year": 1979, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael R. Garey and David S. Johnson. 1979. Computers and Intractability: A Guide to the Theory of NP-Completeness. Freeman, New York.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The third PASCAL recognizing textual entailment challenge", "authors": [ { "first": "Danilo", "middle": [], "last": "Giampiccolo", "suffix": "" }, { "first": "Bernardo", "middle": [], "last": "Magnini", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Association for Computational Linguistics, ACL'07", "volume": "", "issue": "", "pages": "1--9", "other_ids": {}, "num": null, "urls": [], "raw_text": "Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. 2007. The third PASCAL recognizing textual entailment challenge. 
In Proceedings of the Association for Computational Linguistics, ACL'07, pages 1-9, Prague, Czech Republic.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Analog textual entailment and spectral clustering (ATESC) based summarization", "authors": [ { "first": "Anand", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Manpreet", "middle": [], "last": "Kaur", "suffix": "" }, { "first": "Arjun", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Sachdeva", "suffix": "" }, { "first": "Shruti", "middle": [], "last": "Bhati", "suffix": "" } ], "year": 2012, "venue": "Lecture Notes in Computer Science", "volume": "", "issue": "", "pages": "101--110", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anand Gupta, Manpreet Kaur, Arjun Singh, Ashish Sachdeva, and Shruti Bhati. 2012. Analog textual entailment and spectral clustering (ATESC) based summarization. In Lecture Notes in Computer Science, Springer, pages 101-110, New Delhi, India.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Automatic summarizing: The state of the art", "authors": [ { "first": "Karen Spärck", "middle": [], "last": "Jones", "suffix": "" } ], "year": 2007, "venue": "Information Processing and Management", "volume": "43", "issue": "", "pages": "1449--1481", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karen Spärck Jones. 2007. Automatic summarizing: The state of the art. Information Processing and Management, 43:1449-1481.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Recognizing partial textual entailment", "authors": [ { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Torsten", "middle": [], "last": "Zesch", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "17--23", "other_ids": {}, "num": null, "urls": [], "raw_text": "Omer Levy, Torsten Zesch, Ido Dagan, and Iryna Gurevych. 2013. Recognizing partial textual entailment. In Proceedings of the Association for Computational Linguistics, pages 17-23, Sofia, Bulgaria. Chin-Yew Lin and Eduard Hovy. 2003. Automatic evaluation of summaries using n-gram co-occurrence statistics. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology-Volume 1, pages 71-78, Edmonton, Canada, 27 May-June 1.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "ROUGE: A package for automatic evaluation of summaries", "authors": [ { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the Workshop on Text Summarization Branches Out", "volume": "", "issue": "", "pages": "25--26", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. 
In Proceedings of the Workshop on Text Summarization Branches Out, pages 25-26, Barcelona, Spain.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Multi-document summarization by graph search and matching", "authors": [ { "first": "Inderjeet", "middle": [], "last": "Mani", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Bloedorn", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the Fourteenth National Conference on Artificial Intelligence (AAAI-97)", "volume": "", "issue": "", "pages": "622--628", "other_ids": {}, "num": null, "urls": [], "raw_text": "Inderjeet Mani and Eric Bloedorn. 1997. Multi-document summarization by graph search and matching. In Proceedings of the Fourteenth National Conference on Artificial Intelligence (AAAI-97), American Association for Artificial Intelligence, pages 622-628, Providence, Rhode Island.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "From discourse structure to text summaries", "authors": [ { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the ACL/EACL '97, Workshop on Intelligent Scalable Text Summarization", "volume": "", "issue": "", "pages": "82--88", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Marcu. 2008. From discourse structure to text summaries. In Proceedings of the ACL/EACL '97, Workshop on Intelligent Scalable Text Summarization, pages 82-88, Madrid, Spain.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "TextRank: Bringing order into texts", "authors": [ { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Tarau", "suffix": "" } ], "year": 2004, "venue": "Proceedings of EMNLP", "volume": "4", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rada Mihalcea and Paul Tarau. 2004. TextRank: Bringing order into texts. In Proceedings of EMNLP, volume 4(4), page 275, Barcelona, Spain.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "The PageRank citation ranking: Bringing order to the web", "authors": [ { "first": "Lawrence", "middle": [], "last": "Page", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Brin", "suffix": "" }, { "first": "Rajeev", "middle": [], "last": "Motwani", "suffix": "" }, { "first": "Terry", "middle": [], "last": "Winograd", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. 1999. The PageRank citation ranking: Bringing order to the web. Technical Report.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Automatic text structuring and summarization", "authors": [ { "first": "Gerard", "middle": [], "last": "Salton", "suffix": "" }, { "first": "Amit", "middle": [], "last": "Singhal", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Mitra", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Buckley", "suffix": "" } ], "year": 1997, "venue": "Information Processing and Management", "volume": "33", "issue": "", "pages": "193--207", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gerard Salton, Amit Singhal, Mandar Mitra, and Chris Buckley. 1997. Automatic text structuring and summarization. 
Information Processing and Management, 33:193-207.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "BIUTEE: A modular open-source system for recognizing textual entailment", "authors": [ { "first": "Asher", "middle": [], "last": "Stern", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the ACL 2012 System Demonstrations", "volume": "", "issue": "", "pages": "73--78", "other_ids": {}, "num": null, "urls": [], "raw_text": "Asher Stern and Ido Dagan. 2012. BIUTEE: A modular open-source system for recognizing textual entailment. In Proceedings of the ACL 2012 System Demonstrations, pages 73-78, Jeju, Korea.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Summarization by logic segmentation and text entailment", "authors": [ { "first": "Doina", "middle": [], "last": "Tatar", "suffix": "" }, { "first": "Emma", "middle": [ "Tamaianu" ], "last": "Morita", "suffix": "" }, { "first": "Andreea", "middle": [], "last": "Mihis", "suffix": "" }, { "first": "Dana", "middle": [], "last": "Lupsa", "suffix": "" } ], "year": 2008, "venue": "Conference on Intelligent Text Processing and Computational Linguistics (CICLing 08)", "volume": "", "issue": "", "pages": "15--26", "other_ids": {}, "num": null, "urls": [], "raw_text": "Doina Tatar, Emma Tamaianu Morita, Andreea Mihis, and Dana Lupsa. 2008. Summarization by logic segmentation and text entailment. In Conference on Intelligent Text Processing and Computational Linguistics (CICLing 08), pages 15-26, Haifa, Israel.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Outline of the proposed method.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF1": { "text": "The entailment connectivity graph of the considered example, with the associated score of each node shown.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF2": { "text": "Minimum Vertex Cover vs. Greedy selection of sentences.", "uris": null, "num": null, "type_str": "figure" }, "TABREF0": { "type_str": "table", "html": null, "num": null, "content": "", "text": "The sentence array of article AP881113-0007 of cluster do106 in the DUC'02 dataset." }, "TABREF1": { "type_str": "table", "html": null, "num": null, "content": "
      S1     S2     S3     S4     S5     S6     S7     S8     S9
S1    -      0      0      0.04   0      0      0.001  0.02   0.02
S2    0.02   -      0.01   0.04   0.06   0.01   0      0.01   0.04
S3    0      0      -      0.09   0      0      0      0      0.04
S4    0      0      0      -      0      0      0      0      0.01
S5    0      0      0      0.04   -      0      0.01   0.01   0.04
S6    0      0      0      0.04   0      -      0      0      0.02
S7    0      0      0      0.04   0.06   0      -      0.02   0.27
S8    0      0      0      0.04   0      0      0.01   -      0.02
S9    0      0      0      0.04   0      0      0      0      -
", "text": "" }, "TABREF2": { "type_str": "table", "html": null, "num": null, "content": "
Id    ConnScore      Id    ConnScore
S1    0.08           S6    0.06
S2    0.19           S7    0.39
S3    0.13           S8    0.07
S4    0.01           S9    0.04
S5    0.1
", "text": "The sentence entailment matrix of the example article." }, "TABREF4": { "type_str": "table", "html": null, "num": null, "content": "
Method    P (%)    R (%)    F1 (%)
TF-IDF    7.4      9.6      8.4
LTT       18.4     15.2     16.6
ATESC     16.3     11.7     13.6
WMVC      16.7     16.8     16.8
", "text": "ROUGE-1 results." }, "TABREF5": { "type_str": "table", "html": null, "num": null, "content": "", "text": "ROUGE-2 results." } } } }