{ "paper_id": "D07-1047", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:19:52.583276Z" }, "title": "Enhancing Single-document Summarization by Combining RankNet and Third-party Sources", "authors": [ { "first": "Krysta", "middle": [ "M" ], "last": "Svore", "suffix": "", "affiliation": {}, "email": "ksvore@microsoft.com" }, { "first": "Lucy", "middle": [], "last": "Vanderwende", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Christopher", "middle": [ "J C" ], "last": "Burges", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present a new approach to automatic summarization based on neural nets, called NetSum. We extract a set of features from each sentence that helps identify its importance in the document. We apply novel features based on news search query logs and Wikipedia entities. Using the RankNet learning algorithm, we train a pair-based sentence ranker to score every sentence in the document and identify the most important sentences. We apply our system to documents gathered from CNN.com, where each document includes highlights and an article. Our system significantly outperforms the standard baseline in the ROUGE-1 measure on over 70% of our document set.", "pdf_parse": { "paper_id": "D07-1047", "_pdf_hash": "", "abstract": [ { "text": "We present a new approach to automatic summarization based on neural nets, called NetSum. We extract a set of features from each sentence that helps identify its importance in the document. We apply novel features based on news search query logs and Wikipedia entities. Using the RankNet learning algorithm, we train a pair-based sentence ranker to score every sentence in the document and identify the most important sentences. We apply our system to documents gathered from CNN.com, where each document includes highlights and an article. Our system significantly outperforms the standard baseline in the ROUGE-1 measure on over 70% of our document set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Automatic summarization was first studied almost 50 years ago by Luhn (Luhn, 1958) and has continued to be a steady subject of research. Automatic summarization refers to the creation of a shortened version of a document or cluster of documents by a machine, see (Mani, 2001) for details. The summary can be an abstraction or extraction. In an abstract summary, content from the original document may be paraphrased or generated, whereas in an extract summary, the content is preserved in its original form, i.e., sentences. Both summary types can involve sentence compression, but abstracts tend to be more condensed. In this paper, we focus on producing fully automated single-document extract summaries of newswire articles.", "cite_spans": [ { "start": 70, "end": 82, "text": "(Luhn, 1958)", "ref_id": "BIBREF23" }, { "start": 263, "end": 275, "text": "(Mani, 2001)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To create an extract, most automatic systems use linguistic and/or statistical methods to identify key words, phrases, and concepts in a sentence or across single or multiple documents. Each sentence is then assigned a score indicating the strength of presence of key words, phrases, and so on. 
Sentence scoring methods utilize both purely statistical and purely semantic features, for example as in Nenkova et al., 2006; Yih et al., 2007) .", "cite_spans": [ { "start": 400, "end": 421, "text": "Nenkova et al., 2006;", "ref_id": "BIBREF29" }, { "start": 422, "end": 439, "text": "Yih et al., 2007)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recently, machine learning techniques have been successfully applied to summarization. The methods include binary classifiers (Kupiec et al., 1995) , Markov models (Conroy et al., 2004) , Bayesian methods (Daum\u00e9 III and Marcu, 2005; Aone et al., 1998) , and heuristic methods to determine feature weights (Schiffman, 2002; Lin and Hovy, 2002) . Graph-based methods have also been employed (Erkan and Radev, 2004a; Erkan and Radev, 2004b; Mihalcea, 2005; Mihalcea and Tarau, 2005; Mihalcea and Radev, 2006) .", "cite_spans": [ { "start": 126, "end": 147, "text": "(Kupiec et al., 1995)", "ref_id": "BIBREF17" }, { "start": 164, "end": 185, "text": "(Conroy et al., 2004)", "ref_id": "BIBREF6" }, { "start": 205, "end": 232, "text": "(Daum\u00e9 III and Marcu, 2005;", "ref_id": "BIBREF8" }, { "start": 233, "end": 251, "text": "Aone et al., 1998)", "ref_id": "BIBREF1" }, { "start": 305, "end": 322, "text": "(Schiffman, 2002;", "ref_id": "BIBREF31" }, { "start": 323, "end": 342, "text": "Lin and Hovy, 2002)", "ref_id": "BIBREF20" }, { "start": 389, "end": 413, "text": "(Erkan and Radev, 2004a;", "ref_id": "BIBREF11" }, { "start": 414, "end": 437, "text": "Erkan and Radev, 2004b;", "ref_id": "BIBREF12" }, { "start": 438, "end": 453, "text": "Mihalcea, 2005;", "ref_id": "BIBREF27" }, { "start": 454, "end": 479, "text": "Mihalcea and Tarau, 2005;", "ref_id": "BIBREF26" }, { "start": 480, "end": 505, "text": "Mihalcea and Radev, 2006)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In 2001-02, the Document Understanding Conference (DUC, 2001) , issued the task of creating a 100-word summary of a single news article. The best performing systems (Hirao et al., 2002; Lal and Ruger, 2002) used various learning and semantic-based methods, although no system could outperform the baseline with statistical significance (Nenkova, 2005) . After 2002, the single-document summarization task was dropped.", "cite_spans": [ { "start": 50, "end": 61, "text": "(DUC, 2001)", "ref_id": "BIBREF9" }, { "start": 165, "end": 185, "text": "(Hirao et al., 2002;", "ref_id": "BIBREF14" }, { "start": 186, "end": 206, "text": "Lal and Ruger, 2002)", "ref_id": "BIBREF18" }, { "start": 336, "end": 351, "text": "(Nenkova, 2005)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In recent years, there has been a decline in studies on automatic single-document summarization, in part because the DUC task was dropped, and in part because the task of single-document extracts may be counterintuitively more difficult than multi-document summarization (Nenkova, 2005) . However, with the ever-growing internet and increased information access, we believe single-document summarization is essential to improve quick access to large quantities of information. Recently, CNN.com (CNN.com, 2007a) added \"Story Highlights\" to many news articles on its site to allow readers to quickly gather information on stories. 
These highlights give a brief overview of the article and appear as 3-4 related sentences in the form of bullet points rather than a summary paragraph, making them even easier to quickly scan.", "cite_spans": [ { "start": 271, "end": 286, "text": "(Nenkova, 2005)", "ref_id": "BIBREF30" }, { "start": 487, "end": 511, "text": "CNN.com (CNN.com, 2007a)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our work is motivated by both the addition of highlights to an extremely visible and reputable online news source, as well as the inability of past single-document summarization systems to outperform the extremely strong baseline of choosing the first n sentences of a newswire article as the summary (Nenkova, 2005) . Although some recent systems indicate an improvement over the baseline (Mihalcea, 2005; Mihalcea and Tarau, 2005) , statistical significance has not been shown. We show that by using a neural network ranking algorithm and thirdparty datasets to enhance sentence features, our system, NetSum, can outperform the baseline with statistical significance.", "cite_spans": [ { "start": 301, "end": 316, "text": "(Nenkova, 2005)", "ref_id": "BIBREF30" }, { "start": 390, "end": 406, "text": "(Mihalcea, 2005;", "ref_id": "BIBREF27" }, { "start": 407, "end": 432, "text": "Mihalcea and Tarau, 2005)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our paper is organized as follows. Section 2 describes our two studies: summarization and highlight extraction. We describe our dataset in detail in Section 3. Our ranking system and feature vectors are outlined in Section 4. We present our evaluation measure in Section 5. Sections 6 and 7 report on our results on summarization and highlight extraction, respectively. We conclude in Section 8 and discuss future work in Section 9.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we focus on single-document summarization of newswire documents. Each document consists of three highlight sentences and the article text. Each highlight sentence is human-generated, but is based on the article. In Section 4 we discuss the process of matching a highlight to an article sentence. The output of our system consists of purely extracted sentences, where we do not perform any sentence compression or sentence generation. We leave such extensions for future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Our Task", "sec_num": "2" }, { "text": "We develop two separate problems based on our document set. First, can we extract three sentences that best \"match\" the highlights as a whole? In this task, we concatenate the three sentences produced by our system into a single summary or block, and similarly concatenate the three highlight sentences into a single summary or block. We then compare our system's block against the highlight block. Second, can we extract three sentences that best \"match\" the three highlights, such that ordering is preserved? In this task, we produce three sentences, where the first sentence is compared against the first highlight, the second sentence is compared against the second highlight, and the third sentence is compared against the third highlight. Credit is not given for producing three sentences that match the highlights, but are out of order. 
The second task considers ordering and compares sentences on an individual level, whereas the first task considers the three chosen sentences as a summary or block and disregards sentence order. In both tasks, we assume the title has been seen by the reader and will be listed above the highlights.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Our Task", "sec_num": "2" }, { "text": "Our data consists of 1365 news documents gathered from CNN.com (CNN.com, 2007a) . Each document was extracted by hand, where a maximum of 50 documents per day were collected. The documents were hand-collected on consecutive days during the month of February.", "cite_spans": [ { "start": 63, "end": 79, "text": "(CNN.com, 2007a)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Corpus", "sec_num": "3" }, { "text": "Each document includes the title, timestamp, story highlights, and article text. The timestamp on articles ranges from December 2006 to February 2007, since articles remain posted on CNN.com for up to several months. The story highlights are human-generated from the article text. The number of story highlights is between 3-4. Since all articles include at least 3 story highlights, we consider only the task of extracting three highlights from each article.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Corpus", "sec_num": "3" }, { "text": "Our goal is to extract three sentences from a single news document that best match various characteristics of the three document highlights. One way to identify the best sentences is to rank the sentences generally not harmful to humans, but the H5N1 virus has claimed at least 164 lives worldwide since it began ravaging Asian poultry in late 2003, according to the WHO. 13. The H5N1 strain had been confirmed in 15 of Nigeria's 36 states. 14. By September, when the last known case of the virus was found in poultry in a farm near Nigeria's biggest city of Lagos, 915,650 birds had been slaughtered nationwide by government veterinary teams under a plan in which the owners were promised compensation. 15. However, many Nigerian farmers have yet to receive compensation in the north of the country, and health officials fear that chicken deaths may be covered up by owners reluctant to slaughter their animals. 16. Since bird flu cases were first discovered in Nigeria last year, Cameroon, Djibouti, Niger, Ivory Coast, Sudan and Burkina Faso have also reported the H5N1 strain of bird flu in birds. 17. There are fears that it has spread even further than is known in Africa because monitoring is difficult on a poor continent with weak infrastructure. 18. With sub-Saharan Africa bearing the brunt of the AIDS epidemic, there is concern that millions of people with suppressed immune systems will be particularly vulnerable, especially in rural areas with little access to health facilities. 19. Many people keep chickens for food, even in densely populated urban areas. using a machine learning approach, for example as in (Hirao et al., 2002) . A train set is labeled such that the labels identify the best sentences. Then a set of features is extracted from each sentence in the train and test sets, and the train set is used to train the system. The system is then evaluated on the test set. The system learns from the train set the distribution of features for the best sentences and outputs a ranked list of sentences for each document. 
In this paper, we rank sentences using a neural network algorithm called RankNet (Burges et al., 2005) .", "cite_spans": [ { "start": 1628, "end": 1648, "text": "(Hirao et al., 2002)", "ref_id": "BIBREF14" }, { "start": 2128, "end": 2149, "text": "(Burges et al., 2005)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Description of Our System", "sec_num": "4" }, { "text": "From the labels and features for each sentence, we train a model that, when run on a test set of sentences, can infer the proper ranking of sentences in a document based on information gathered during training about sentence characteristics. To accomplish the ranking, we use RankNet (Burges et al., 2005) , a ranking algorithm based on neural networks.", "cite_spans": [ { "start": 284, "end": 305, "text": "(Burges et al., 2005)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "RankNet", "sec_num": "4.1" }, { "text": "RankNet is a pair-based neural network algorithm used to rank a set of inputs, in this case, the set of sentences in a given document. The system is trained on pairs of sentences (S i , S j ), such that S i should be ranked higher or equal to S j . Pairs are generated between sentences in a single document, not across documents. Each pair is determined from the input labels. Since our sentences are labeled using ROUGE (see Section 4.3), if the ROUGE score of S i is greater than the ROUGE score of S j , then (S i , S j ) is one input pair. The cost function for RankNet is the probabilistic cross-entropy cost function. Training is performed using a modified version of the back propagation algorithm for two layer nets (Le Cun et al., 1998) , which is based on optimizing the cost function by gradient descent. A similar method of training on sentence pairs in the context of multi-document summarization was recently shown in (Toutanova et al., 2007) .", "cite_spans": [ { "start": 725, "end": 746, "text": "(Le Cun et al., 1998)", "ref_id": "BIBREF19" }, { "start": 933, "end": 957, "text": "(Toutanova et al., 2007)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "RankNet", "sec_num": "4.1" }, { "text": "Our system, NetSum, is a two-layer neural net trained using RankNet. To speed up the performance of RankNet, we implement RankNet in the framework of LambdaRank (Burges et al., 2006) . For details, see (Burges et al., 2006; Burges et al., 2005) . We experiment with between 5 and 15 hidden nodes and with an error rate between 10 \u22122 and 10 \u22127 .", "cite_spans": [ { "start": 161, "end": 182, "text": "(Burges et al., 2006)", "ref_id": "BIBREF3" }, { "start": 202, "end": 223, "text": "(Burges et al., 2006;", "ref_id": "BIBREF3" }, { "start": 224, "end": 244, "text": "Burges et al., 2005)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "RankNet", "sec_num": "4.1" }, { "text": "We implement 4 versions of NetSum. The first version, NetSum(b), is trained for our first summarization problem (b indicates block). The pairs are generated using the maximum ROUGE scores l 1 (see Section 4.3). The other three rankers are trained to identify the sentence in the document that best matches highlight n. We train one ranker, NetSum(n), for each highlight n, for n = 1, 2, 3, resulting in three rankers. 
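To make the pair-based training concrete, the sketch below shows, in simplified form, how per-document pairs can be generated from ROUGE labels and how a two-layer scorer can be updated with the probabilistic cross-entropy cost described above. It is a minimal illustration, not the actual NetSum implementation: the class and function names are hypothetical, plain per-pair gradient descent stands in for the LambdaRank-based training we use, and the ROUGE labels are assumed to be given.

```python
import numpy as np

def make_pairs(rouge_labels):
    """RankNet training pairs (i, j) within one document: sentence i should be
    ranked above sentence j whenever its ROUGE label is strictly greater."""
    return [(i, j)
            for i in range(len(rouge_labels))
            for j in range(len(rouge_labels))
            if rouge_labels[i] > rouge_labels[j]]

class TwoLayerRanker:
    """Two-layer scorer s(x) = w2 . tanh(W1 x + b1) + b2, trained on pairs with
    the pairwise cross-entropy cost C = log(1 + exp(-(s_i - s_j)))."""
    def __init__(self, n_features, n_hidden=10, seed=0):
        rng = np.random.RandomState(seed)
        self.W1 = 0.1 * rng.randn(n_hidden, n_features)
        self.b1 = np.zeros(n_hidden)
        self.w2 = 0.1 * rng.randn(n_hidden)
        self.b2 = 0.0

    def score(self, x):
        return self.w2 @ np.tanh(self.W1 @ x + self.b1) + self.b2

    def update_pair(self, x_i, x_j, lr=1e-3):
        """One gradient-descent step on a pair whose target is s_i > s_j."""
        def grads(x, lam):
            # backpropagate lam = dC/ds through s = w2 . tanh(W1 x + b1) + b2
            h = np.tanh(self.W1 @ x + self.b1)
            dh = lam * self.w2 * (1.0 - h ** 2)
            return lam * h, lam, np.outer(dh, x), dh
        s_i, s_j = self.score(x_i), self.score(x_j)
        lam = -1.0 / (1.0 + np.exp(s_i - s_j))   # dC/ds_i; dC/ds_j = -lam
        updates = [grads(x_i, lam), grads(x_j, -lam)]
        for dw2, db2, dW1, db1 in updates:
            self.w2 -= lr * dw2
            self.b2 -= lr * db2
            self.W1 -= lr * dW1
            self.b1 -= lr * db1
```

In our setting, each x would be the 10-dimensional feature vector of Section 4.4, and pairs are generated only between sentences of the same document.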
NetSum(n) is trained using pairs generated from the l 1,n ROUGE scores between sentence S i and highlight H n (see Section 4.3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RankNet", "sec_num": "4.1" }, { "text": "In this section, we describe how to determine which sentence in the document best matches a given highlight. Choosing three sentences most similar to the three highlights is very challenging since the highlights include content that has been gathered across sentences and even paragraphs, and furthermore include vocabulary that may not be present in the text. Jing showed, for 300 news articles, that 19% of human-generated summary sentences contain no matching article sentence (Jing, 2002) . In addition, only 42% of the summary sentences match the content of a single article sentence, where there are still semantic and syntactic transformations between the summary sentence and article sentence.. Since each highlight is human generated and does not exactly match any one sentence in the document, we must develop a method to identify how closely related a highlight is to a sentence. We use the ROUGE (Lin, 2004b) measure to score the similarity between an article sentence and a highlight sentence. We anticipate low ROUGE scores for both the baseline and NetSum due to the difficulty of finding a single sentence to match a highlight.", "cite_spans": [ { "start": 480, "end": 492, "text": "(Jing, 2002)", "ref_id": "BIBREF16" }, { "start": 908, "end": 920, "text": "(Lin, 2004b)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Matching Extracted to Generated Sentences", "sec_num": "4.2" }, { "text": "Recall-Oriented Understudy for Gisting Evaluation (Lin, 2004b) , known as ROUGE, measures the quality of a model-generated summary or sentence by comparing it to a \"gold-standard\", typically humangenerated, summary or sentence. It has been shown that ROUGE is very effective for measuring both single-document summaries and single-document headlines (Lin, 2004a) . ROUGE-N is a N -gram recall between a model-generated summary and a reference summary. We use ROUGE-N , for N = 1, for labeling and evaluation of our model-generated highlights. 1 ROUGE-1 and ROUGE-2 have been shown to be statistically similar to human evaluations and can be used with a single reference summary (Lin, 2004a) . We have only one reference summary, the set of humangenerated highlights, per document. In our work, the reference summary can be a single highlight sentence or the highlights as a block. We calculate ROUGE-N as", "cite_spans": [ { "start": 50, "end": 62, "text": "(Lin, 2004b)", "ref_id": "BIBREF22" }, { "start": 350, "end": 362, "text": "(Lin, 2004a)", "ref_id": "BIBREF21" }, { "start": 678, "end": 690, "text": "(Lin, 2004a)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "ROUGE", "sec_num": "4.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "gram j \u2208R\u2229S i Count(gram j ) gram j \u2208R Count(gram j ) ,", "eq_num": "(1)" } ], "section": "ROUGE", "sec_num": "4.3" }, { "text": "where R is the reference summary, S i is the modelgenerated summary, and N is the length of the Ngram gram j . 2 The numerator cannot excede the number of N -grams (non-unique) in R. We label each sentence S i by its ROUGE-1 score. 
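For clarity, Eq. (1) amounts to the clipped n-gram recall sketched below. This is a simplified stand-in for the ROUGE package we actually use: no stemming or stopword removal is performed, and both inputs are assumed to be lowercased token lists with punctuation removed.

```python
from collections import Counter

def rouge_n_recall(candidate_tokens, reference_tokens, n=1):
    """N-gram recall of Eq. (1): the count of reference n-grams also found in
    the candidate (clipped by the reference counts) over the total number of
    n-grams in the reference."""
    def ngrams(tokens):
        return Counter(tuple(tokens[k:k + n]) for k in range(len(tokens) - n + 1))
    ref, cand = ngrams(reference_tokens), ngrams(candidate_tokens)
    overlap = sum(min(count, cand[gram]) for gram, count in ref.items())
    total = sum(ref.values())
    return overlap / total if total else 0.0
```

With n = 1 this yields the ROUGE-1 scores used as sentence labels below.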
For the first problem of matching the highlights as a block, we label each S i by l 1 , the maximum ROUGE-1 score between S i and each highlight H n , for n = 1, 2, 3, given by", "cite_spans": [ { "start": 111, "end": 112, "text": "2", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "ROUGE", "sec_num": "4.3" }, { "text": "l 1 = max n (R(S i , H n )).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ROUGE", "sec_num": "4.3" }, { "text": "For the second problem of matching three sentences to the three highlights individually, we label each sentence S i by l 1,n , the ROUGE-1 score between S i and H n , given by l 1,n = R(S i , H n ). The ranker for highlight n, NetSum(n), is passed samples labeled using l 1,n .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ROUGE", "sec_num": "4.3" }, { "text": "RankNet takes as input a set of samples, where each sample contains a label and feature vector. The labels were previously described in Section 4.3. In this section, we describe each feature in detail and motivate in part why each feature is chosen. We generate 10 features for each sentence S i in each document, listed in Table 1 . Each feature is chosen to identify characteristics of an article sentence that may match those of a highlight sentence. Some of the features such as position and N -gram frequencies are commonly used for scoring. Sentence scoring based on Symbol Feature Name", "cite_spans": [], "ref_spans": [ { "start": 324, "end": 331, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Features", "sec_num": "4.4" }, { "text": "F (S i ) Is First Sentence P os(S i ) Sentence Position SB(S i ) SumBasic Score SB b (S i ) SumBasic Bigram Score Sim(S i ) Title Similarity Score N T (S i ) Average News Query Term Score N T + (S i ) News Query Term Sum Score N T r (S i ) Relative News Query Term Score W E(S i ) Average Wikipedia Entity Score W E + (S i )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4.4" }, { "text": "Wikipedia Entity Sum Score son, 1969; Alfonesca and Rodriguez, 2003; Mani, 2001) . We use variations on these features as well as a novel set of features based on third-party data. Typically, news articles are written such that the first sentence summarizes the article. Thus, we include a binary feature F (S i ) that equals 1 if S i is the first sentence of the document: F (S i ) = \u03b4 i,1 , where \u03b4 is the Kronecker delta function. This feature is used only for NetSum(b) and NetSum(1).", "cite_spans": [ { "start": 27, "end": 37, "text": "son, 1969;", "ref_id": null }, { "start": 38, "end": 68, "text": "Alfonesca and Rodriguez, 2003;", "ref_id": "BIBREF0" }, { "start": 69, "end": 80, "text": "Mani, 2001)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4.4" }, { "text": "We include sentence position since we found in empirical studies that the sentence to best match highlight H 1 is on average 10% down the article, the sentence to best match H 2 is on average 20% down the article, and the sentence to best match H 3 is 31% down the article. 
3 We calculate the position of S i in document D as", "cite_spans": [ { "start": 274, "end": 275, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P os(S i ) = i \u2113 ,", "eq_num": "(2)" } ], "section": "Features", "sec_num": "4.4" }, { "text": "where i = {1, . . . , \u2113} is the sentence number and \u2113 is the number of sentences in D.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4.4" }, { "text": "We include the SumBasic score (Nenkova et al., 2006) of a sentence to estimate the importance of a sentence based on word frequency. We calculate the SumBasic score of S i in document D as", "cite_spans": [ { "start": 30, "end": 52, "text": "(Nenkova et al., 2006)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "SB(S i ) = w\u2208S i p(w) |S i | ,", "eq_num": "(3)" } ], "section": "Features", "sec_num": "4.4" }, { "text": "where p(w) is the probability of word w and |S i | is the number of words in sentence S i . We calculate p(w) as p(w) = Count(w)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4.4" }, { "text": ", where Count(w) is the number of times word w appears in document D and |D| is the number of words in document D. Note that the score of a sentence is the average probability of a word in the sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "|D|", "sec_num": null }, { "text": "We also include the SumBasic score over bigrams, where w in Eq 3 is replaced by bigrams and we normalize by the number of bigrams in S i .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "|D|", "sec_num": null }, { "text": "We compute the similarity of a sentence S i in document D with the title T of D as the relative probability of title terms t \u2208 T in S i as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "|D|", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Sim(S i ) = t\u2208S i p(t) |S i | ,", "eq_num": "(4)" } ], "section": "|D|", "sec_num": null }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "|D|", "sec_num": null }, { "text": "p(t) = Count(t) |T |", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "|D|", "sec_num": null }, { "text": "is the number of times term t appears in T over the number of terms in T .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "|D|", "sec_num": null }, { "text": "The remaining features we use are based on thirdparty data sources. Previously, third-party sources such as WordNet (Fellbaum, 1998) , the web (Jagalamudi et al., 2006) , or click-through data (Sun et al., 2005 ) have been used as features. We propose using news query logs and Wikipedia entities to enhance features. We base several features on query terms frequently issued to Microsoft's news search engine http://search.live.com/news, and entities 4 found in the online open-source encyclopedia Wikipedia (Wikipedia.org, 2007) . 
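Before turning to these third-party features, the sketch below illustrates the document-internal features of Eqs. (2)-(4). The function names are illustrative, and tokens are assumed to be lowercased with punctuation removed, as noted at the end of this section.

```python
from collections import Counter

def position(i, num_sentences):
    # Eq. (2): Pos(S_i) = i / l, with 1-based sentence index i
    return i / num_sentences

def sumbasic(sentence_tokens, doc_tokens, n=1):
    """Eq. (3): average document probability of the sentence's n-grams.
    With n = 1 this is SB(S_i); with n = 2 it is the bigram variant SB_b(S_i)."""
    def ngrams(tokens):
        return [tuple(tokens[k:k + n]) for k in range(len(tokens) - n + 1)]
    doc_counts = Counter(ngrams(doc_tokens))
    total = sum(doc_counts.values())
    grams = ngrams(sentence_tokens)
    if not grams or total == 0:
        return 0.0
    return sum(doc_counts[g] / total for g in grams) / len(grams)

def title_similarity(sentence_tokens, title_tokens):
    # Eq. (4): relative probability of title terms appearing in the sentence
    title_counts = Counter(title_tokens)
    n_title = len(title_tokens)
    if not sentence_tokens or n_title == 0:
        return 0.0
    hits = sum(title_counts[t] / n_title for t in sentence_tokens if t in title_counts)
    return hits / len(sentence_tokens)
```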
If a query term or Wikipedia entity appears frequently in a CNN document, then we assume highlights should include that term or entity since it is important on both the document and global level. Sentences containing query terms or Wikipedia entities therefore contain important content. We confirm the importance of these third-party features in Section 7.", "cite_spans": [ { "start": 116, "end": 132, "text": "(Fellbaum, 1998)", "ref_id": null }, { "start": 143, "end": 168, "text": "(Jagalamudi et al., 2006)", "ref_id": "BIBREF15" }, { "start": 193, "end": 210, "text": "(Sun et al., 2005", "ref_id": "BIBREF32" }, { "start": 509, "end": 530, "text": "(Wikipedia.org, 2007)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "|D|", "sec_num": null }, { "text": "We collected several hundred of the most frequently queried terms in February 2007 from the news query logs. We took the daily top 200 terms for 10 days. Our hypothesis is that a sentence with a higher number of news query terms should be a better candidate highlight. We calculate the average probability of news query terms q in S i as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "|D|", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "N T (S i ) = q\u2208S i p(q) |q \u2208 S i | ,", "eq_num": "(5)" } ], "section": "|D|", "sec_num": null }, { "text": "where p(q) is the probability of a news term q and |q \u2208 S i | is the number of news terms in S i . p(q) =", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "|D|", "sec_num": null }, { "text": "Count(q)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "|D|", "sec_num": null }, { "text": "|q\u2208D| , where Count(q) is the number of times term q appears in D and |q \u2208 D| is the number of news query terms in D.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "|D|", "sec_num": null }, { "text": "We also include the sum of news query terms in S i , given by N T + (S i ) = q\u2208S i p(q), and the relative probability of news query terms in S i , given by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "|D|", "sec_num": null }, { "text": "N T r (S i ) = q\u2208S i p(q) |S i | .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "|D|", "sec_num": null }, { "text": "We perform term disambiguation on each document using an entity extractor (Cucerzan, 2007) . Terms are disambiguated to a Wikipedia entity only if they match a surface form in Wikipedia. Wikipedia surface forms are terms that disambiguate to a Wikipedia entity and link to a Wikipedia page with the entity as its title. For example, \"WHO\" and \"World Health Org.\" both refer to the World Health Organization, and should disambiguate to the entity \"World Health Organization\". Sentences in CNN document D that contain Wikipedia entities that frequently appear in CNN document D are considered important. 
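The news-query-term features defined above can be computed as in the sketch below; the Wikipedia-entity features that follow are the same construction with disambiguated entities in place of query terms. The function name is illustrative, matching is plain token equality, and |q ∈ D| is read here as the number of query-term occurrences in the document.

```python
from collections import Counter

def news_term_features(sentence_tokens, doc_tokens, query_terms):
    """Eq. (5) and its variants: average (NT), sum (NT+), and relative (NT_r)
    probability of news-search query terms in a sentence."""
    query_terms = set(query_terms)
    doc_hits = Counter(t for t in doc_tokens if t in query_terms)
    total_hits = sum(doc_hits.values())                      # |q in D|
    sent_hits = [t for t in sentence_tokens if t in query_terms]
    if total_hits == 0:
        return 0.0, 0.0, 0.0
    nt_sum = sum(doc_hits[t] / total_hits for t in sent_hits)          # NT+(S_i)
    nt_avg = nt_sum / len(sent_hits) if sent_hits else 0.0             # NT(S_i)
    nt_rel = nt_sum / len(sentence_tokens) if sentence_tokens else 0.0 # NT_r(S_i)
    return nt_avg, nt_sum, nt_rel
```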
We calculate the average Wikipedia entity score for", "cite_spans": [ { "start": 74, "end": 90, "text": "(Cucerzan, 2007)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "|D|", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "S i as W E(S i ) = e\u2208S i p(e) |e \u2208 S i | ,", "eq_num": "(6)" } ], "section": "|D|", "sec_num": null }, { "text": "where p(e) is the probability of entity e, given by p(e) = Count(e) |e\u2208D| , where Count(e) is the number of times entity e appears in CNN document D and |e \u2208 D| is the total number of entities in CNN document D.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "|D|", "sec_num": null }, { "text": "We also include the sum of Wikipedia entities, given by W E + (S i ) = e\u2208S i p(e).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "|D|", "sec_num": null }, { "text": "Note that all features except position features are a variant of SumBasic over different term sets. All features are computed over sentences where every word has been lowercased and punctuation has been removed after sentence breaking. We examined using stemming, but found stemming to be ineffective.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "|D|", "sec_num": null }, { "text": "We evaluate the performance of NetSum using ROUGE and by comparing against a baseline system. For the first summarization task, we compare against the baseline of choosing the first three sentences as the block summary. For the second high-lights task, we compare NetSum(n) against the baseline of choosing sentence n (to match highlight n). Both tasks are novel in attempting to match highlights rather than a human-generated summary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "5" }, { "text": "We consider ROUGE-1 to be the measure of importance and thus train our model on ROUGE-1 (to optimize ROUGE-1 scores) and likewise evaluate our system on ROUGE-1. We list ROUGE-2 scores for completeness, but do not expect them to be substantially better than the baseline since we did not directly optimize for ROUGE-2. 5 For every document in our corpus, we compare NetSum's output with the baseline output by computing ROUGE-1 and ROUGE-2 between the highlight block and NetSum and between the highlight block and the block of sentences. Similarly, for each highlight, we compute ROUGE-1 and ROUGE-2 between highlight n and NetSum(n) and between highlight n and sentence n, for n = 1, 2, 3. For each task, we calculate the average ROUGE-1 and ROUGE-2 scores of NetSum and of the baseline. We also report the percent of documents where the ROUGE-1 score of NetSum is equal to or better than the ROUGE-1 score of the baseline.", "cite_spans": [ { "start": 319, "end": 320, "text": "5", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "5" }, { "text": "We perform all experiments using five-fold crossvalidation on our dataset of 1365 documents. We divide our corpus into five random sets and train on three combined sets, validate on one set, and test on the remaining set. We repeat this procedure for every combination of train, validation, and test sets. Our results are the micro-averaged results on the five test sets. 
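The per-document comparison we report can be outlined as in the sketch below. It is an illustrative summary rather than our exact evaluation harness: it assumes a ROUGE-1 scorer such as the one sketched in Section 4.3 and pre-built token blocks for the system output, the lead-sentence baseline, and the highlights.

```python
def compare_to_baseline(documents, rouge1):
    """For each document, score the system block and the baseline block against
    the highlight block, then report the average scores and the fraction of
    documents where the system is at least as good as the baseline."""
    sys_scores, base_scores, wins = [], [], 0
    for system_block, baseline_block, highlight_block in documents:
        s = rouge1(system_block, highlight_block)
        b = rouge1(baseline_block, highlight_block)
        sys_scores.append(s)
        base_scores.append(b)
        wins += s >= b
    n = len(documents)
    return sum(sys_scores) / n, sum(base_scores) / n, wins / n
```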
For all experiments, Table 3 lists the statistical tests performed and the significance of performance differences between NetSum and the baseline at 95% confidence.", "cite_spans": [], "ref_spans": [ { "start": 393, "end": 400, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Evaluation", "sec_num": "5" }, { "text": "We first find three sentences that, as a block, best match the three highlights as a block. NetSum(b) produces a ranked list of sentences for each document. We create a block from the top 3 ranked sentences. The baseline is the block of the first 3 sentences of the document. A similar baseline outper-System Av. ROUGE-1 Av. ROUGE-2 Baseline 0.4642 \u00b1 0.0084 0.1726 \u00b1 0.0064 NetSum(b) 0.4956 \u00b1 0.0075 0.1775 \u00b1 0.0066 Table 2 : Results on summarization task with standard error at 95% confidence. Bold indicates significance under paired tests. Table 3 : Paired tests for statistical significance at 95% confidence between baseline and NetSum performance; 1: McNemar, 2: Paired t-test, 3: Wilcoxon signed-rank. \"x\" indicates pass, \"o\" indicates fail. Since our studies are pair-wise, tests listed here are more accurate than error bars reported in Tables 2-5. forms all previous systems for news article summarization (Nenkova, 2005) and has been used in the DUC workshops (DUC, 2001 ).", "cite_spans": [ { "start": 916, "end": 931, "text": "(Nenkova, 2005)", "ref_id": "BIBREF30" }, { "start": 971, "end": 981, "text": "(DUC, 2001", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 416, "end": 423, "text": "Table 2", "ref_id": null }, { "start": 543, "end": 550, "text": "Table 3", "ref_id": null }, { "start": 846, "end": 857, "text": "Tables 2-5.", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Results: Summarization", "sec_num": "6" }, { "text": "For each block produced by NetSum(b) and the baseline, we compute the ROUGE-1 and ROUGE-2 scores of the block against the set of highlights as a block. For 73.26% of documents, NetSum(b) produces a block with a ROUGE-1 score that is equal to or better than the baseline score. The two systems produce blocks of equal ROUGE-1 score for 24.69% of documents. Under ROUGE-2, NetSum(b) performs equal to or better than the baseline on 73.19% of documents and equal to the baseline on 40.51% of documents. Table 2 shows the average ROUGE-1 and ROUGE-2 scores obtained with NetSum(b) and the baseline. NetSum(b) produces a higher quality block on average for ROUGE-1. Table 4 lists the sentences in the block produced by NetSum(b) and the baseline block, for the articles shown in Figure 1 . The NetSum(b) summary achieves a ROUGE-1 score of 0.52, while the baseline summary scores only 0.36.", "cite_spans": [], "ref_spans": [ { "start": 500, "end": 507, "text": "Table 2", "ref_id": null }, { "start": 661, "end": 668, "text": "Table 4", "ref_id": null }, { "start": 774, "end": 782, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "ROUGE-1 ROUGE-", "sec_num": null }, { "text": "Sent. # ROUGE-1 Baseline S 1 , S 2 , S 3 0.36 NetSum(b) S 1 , S 7 , S 15 0.52 Table 4 : Block results for the block produced by NetSum(b) and the baseline block for the example article. ROUGE-1 scores computed against the highlights as a block are listed.", "cite_spans": [], "ref_spans": [ { "start": 78, "end": 85, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "System", "sec_num": null }, { "text": "Our second task is to extract three sentences from a document that best match the three highlights in order. 
To accomplish this, we train NetSum(n) for each highlight n = 1, 2, 3. We compare NetSum(n) with the baseline of picking the nth sentence of the document. We perform five-fold cross-validation across our 1365 documents. Our results are reported for the micro-average of the test results. For each highlight n produced by both NetSum(n) and the baseline, we compute the ROUGE-1 and ROUGE-2 scores against the nth highlight. We expect that beating the baseline for n = 1 is a more difficult task than for n = 2 or 3 since the first sentence of a news article typically acts as a summary of the article and since we expect the first highlight to summarize the article. NetSum(1), however, produces a sentence with a ROUGE-1 score that is equal to or better than the baseline score for 93.26% of documents. The two systems produce sentences of equal ROUGE-1 scores for 82.84% of documents. Under ROUGE-2, NetSum(1) performs equal to or better than the baseline on 94.21% of documents. Table 5 shows the average ROUGE-1 and ROUGE-2 scores obtained with NetSum(1) and the baseline. NetSum(1) produces a higher quality sentence on average under ROUGE-1.", "cite_spans": [], "ref_spans": [ { "start": 1090, "end": 1097, "text": "Table 5", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Results: Highlights", "sec_num": "7" }, { "text": "The content of highlights 2 and 3 is typically from later in the document, so we expect the baseline to not perform as well in these tasks. NetSum(2) outperforms the baseline since it is able to identify sentences from further down the document as important. For 77.73% of documents, NetSum(2) produces a sentence with a ROUGE-1 score that is equal to or better than the score for the baseline. The two systems produce sentences of equal ROUGE-1 score for 33.92% of documents. Under ROUGE-2, Net-Sum(2) performs equal to or better than the baseline System Av. ROUGE-1 Av. ROUGE-2 Baseline(1) 0.4343 \u00b1 0.0138 0.1833 \u00b1 0.0095 NetSum(1) 0.4478 \u00b1 0.0133 0.1857 \u00b1 0.0085 Baseline(2) 0.2451 \u00b1 0.0128 0.0814 \u00b1 0.0106 NetSum(2) 0.3036 \u00b1 0.0117 0.0877 \u00b1 0.0107 Baseline(3) 0.1707 \u00b1 0.0103 0.0412 \u00b1 0.0069 NetSum(3) 0.2603 \u00b1 0.0133 0.0615 \u00b1 0.0075 Table 6 : Highlight results for highlight n produced by NetSum(n) and highlight n produced by the baseline for the example article. ROUGE-1 scores computed against highlight n are listed.", "cite_spans": [], "ref_spans": [ { "start": 838, "end": 845, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Results: Highlights", "sec_num": "7" }, { "text": "84.84% of the time. For 81.09% of documents, Net-Sum(3) produces a sentence with a ROUGE-1 score that is equal to or better than the score for the baseline. The two systems produce sentences of equal ROUGE-1 score for 28.45% of documents. Under ROUGE-2, NetSum(3) performs equal to or better than the baseline 89.91% of the time. Table 5 shows the average ROUGE-1 and ROUGE-2 scores obtained for NetSum(2), Net-Sum(3), and the baseline. Both NetSum(2) and Net-Sum(3) produce a higher quality sentence on average under both measures. Table 6 gives highlights produced by NetSum(n) and the highlights produced by the baseline, for the article shown in Figure 1 . 
The NetSum(n) highlights produce ROUGE-1 scores equal to or higher than the baseline ROUGE-1 scores.", "cite_spans": [], "ref_spans": [ { "start": 330, "end": 337, "text": "Table 5", "ref_id": "TABREF2" }, { "start": 533, "end": 540, "text": "Table 6", "ref_id": null }, { "start": 650, "end": 658, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Results: Highlights", "sec_num": "7" }, { "text": "In feature ablation studies, we confirmed that the inclusion of news-based and Wikipedia-based features improves NetSum's peformance. For example, we removed all news-based and Wikipedia-based features in NetSum(3). The resulting performance moderately declined. Under ROUGE-1, the baseline produced a better highlight on 22.34% of documents, versus only 18.91% when using third-party features. Similarly, NetSum(3) produced a summary of equal or better ROUGE-1 score on only 77.66% of documents, compared to 81.09% of documents when using third-party features. In addition, the average ROUGE-1 score dropped to 0.2182 and the average ROUGE-2 score dropped to 0.0448. The performance of NetSum with third-party features over NetSum without third-party features is statistically significant at 95% confidence. However, NetSum still outperforms the baseline without thirdparty features, leading us to conclude that RankNet and simple position and term frequency features contribute the maximum performance gains, but increased ROUGE-1 and ROUGE-2 scores are a clear benefit of third-party features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results: Highlights", "sec_num": "7" }, { "text": "We have presented a novel approach to automatic single-document summarization based on neural networks, called NetSum. Our work is the first to use both neural networks for summarization and third-party datasets for features, using Wikipedia and news query logs. We have evaluated our system on two novel tasks: 1) producing a block of highlights and 2) producing three ordered highlight sentences. Our experiments were run on previously unstudied data gathered from CNN.com. Our system shows remarkable performance over the baseline of choosing the first n sentences of the document, where the performance difference is statistically significant under ROUGE-1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "8" }, { "text": "An immediate future direction is to further explore feature selection. We found third-party features beneficial to the performance of NetSum and such sources can be mined further. In addition, feature selection for each NetSum system could be performed separately since, for example, highlight 1 has different characteristics than highlight 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Work", "sec_num": "9" }, { "text": "In our experiments, ROUGE scores are fairly low because a highlight rarely matches the content of a single sentence. To improve NetSum's performance, we must consider extracting content across sentence boundaries. Such work requires a system to produce abstract summaries. We hope to incorporate sentence simplification and sentence splicing and merging in a future version of NetSum.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Work", "sec_num": "9" }, { "text": "Another future direction is the identification of \"hard\" and \"easy\" inputs. 
Although we report average ROUGE scores, such measures can be misleading since some highlights are simple to match and some are much more difficult. A better system evaluation measure would incorporate the difficulty of the input and weight reported results accordingly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Work", "sec_num": "9" }, { "text": "We use an implementation of ROUGE that does not perform stemming or stopword removal.2 ROUGE is typically used when the length of the reference summary is equal to length of the model-generated summary.Our reference summary and model-generated summary are different lengths, so there is a slight bias toward longer sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Though this is not always the case, as the sentence to match H2 precedes that to match H1 in 22.03% of documents, and the sentence to match H3 precedes that to match H2 in 29.32% of and precedes that to match H1 in 28.81% of documents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We define an entity as a title of a Wikipedia page.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "NetSum can directly optimize for any measure by training on it, such as training on ROUGE-2 or on a weighted sum of ROUGE-1 and ROUGE-2 to optimize both. Thus, ROUGE-2 scores could be further improved. We leave such studies for future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Description of the uam system for generating very short summaries at DUC-2003", "authors": [ { "first": "E", "middle": [], "last": "Alfonesca", "suffix": "" }, { "first": "P", "middle": [], "last": "Rodriguez", "suffix": "" } ], "year": 2003, "venue": "DUC 2003: Document Understanding Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Alfonesca and P. Rodriguez. 2003. Description of the uam system for generating very short summaries at DUC-2003. In DUC 2003: Document Under- standing Conference, May 31-June 1, 2003, Edmon- ton, Canada.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Trainable scalable summarization using robust nlp and machine learning", "authors": [ { "first": "C", "middle": [], "last": "Aone", "suffix": "" }, { "first": "M", "middle": [], "last": "Okurowski", "suffix": "" }, { "first": "J", "middle": [], "last": "Gorlinsky", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the 17th COLING and 36th ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Aone, M. Okurowski, and J. Gorlinsky. 1998. Train- able scalable summarization using robust nlp and ma- chine learning. 
In Proceedings of the 17th COLING and 36th ACL.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Learning to Rank using Gradient Descent", "authors": [ { "first": "C", "middle": [ "J C" ], "last": "Burges", "suffix": "" }, { "first": "T", "middle": [], "last": "Shaked", "suffix": "" }, { "first": "E", "middle": [], "last": "Renshaw", "suffix": "" }, { "first": "A", "middle": [], "last": "Lazier", "suffix": "" }, { "first": "M", "middle": [], "last": "Deeds", "suffix": "" }, { "first": "N", "middle": [], "last": "Hamilton", "suffix": "" }, { "first": "G", "middle": [], "last": "Hullender", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "89--96", "other_ids": {}, "num": null, "urls": [], "raw_text": "C.J.C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. Hullender. 2005. Learning to Rank using Gradient Descent. In Luc De Raedt and Stefan Wrobel, editors, ICML, pages 89-96. ACM.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Learning to rank with nonsmooth cost functions", "authors": [ { "first": "C", "middle": [ "J C" ], "last": "Burges", "suffix": "" }, { "first": "R", "middle": [], "last": "Ragno", "suffix": "" }, { "first": "Q", "middle": [], "last": "Le", "suffix": "" } ], "year": 2006, "venue": "NIPS 2006: Neural Information Processing Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C.J.C. Burges, R. Ragno, and Q. Le. 2006. Learning to rank with nonsmooth cost functions. In NIPS 2006: Neural Information Processing Systems, December 4- 7, 2006, Vancouver, CA.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Nigeria reports first human death from bird flu", "authors": [ { "first": "", "middle": [], "last": "Cnn", "suffix": "" }, { "first": "", "middle": [], "last": "Com", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "CNN.com. 2007b. Nigeria reports first human death from bird flu.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Left-brain/right-brain multi-document summarization", "authors": [ { "first": "J", "middle": [], "last": "Conroy", "suffix": "" }, { "first": "J", "middle": [], "last": "Schlesinger", "suffix": "" }, { "first": "J", "middle": [], "last": "Goldstein", "suffix": "" }, { "first": "D", "middle": [], "last": "O'leary", "suffix": "" } ], "year": 2004, "venue": "DUC 2004: Document Understanding Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Conroy, J. Schlesinger, J. Goldstein, and D. O'Leary. 2004. Left-brain/right-brain multi-document summa- rization. In DUC 2004: Document Understanding Workshop, May 6-7, 2004, Boston, MA, USA.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Large scale named entity disambiguation based on wikipedia data", "authors": [ { "first": "S", "middle": [], "last": "Cucerzan", "suffix": "" } ], "year": 2007, "venue": "EMNLP 2007: Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Cucerzan. 2007. Large scale named entity disam- biguation based on wikipedia data. 
In EMNLP 2007: Empirical Methods in Natural Language Processing, June 28-30, 2007, Prague, Czech Republic.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Bayesian multidocument summarization at mse", "authors": [ { "first": "H", "middle": [], "last": "Daum\u00e9", "suffix": "" }, { "first": "D", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2005, "venue": "Proceedings of MSE", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Daum\u00e9 III and D. Marcu. 2005. Bayesian multi- document summarization at mse. In Proceedings of MSE.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Document understanding conferences", "authors": [ { "first": "", "middle": [], "last": "Duc", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "DUC. 2001. Document understanding conferences. http://www-nlpir.nist.gov/projects/duc/index.html.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "New methods in automatic extracting", "authors": [ { "first": "H", "middle": [ "P" ], "last": "Edmundson", "suffix": "" } ], "year": 1969, "venue": "Journal for the Association of Computing Machinery", "volume": "16", "issue": "", "pages": "159--165", "other_ids": {}, "num": null, "urls": [], "raw_text": "H.P. Edmundson. 1969. New methods in automatic ex- tracting. Journal for the Association of Computing Machinery, 16:159-165.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Lexpagerank: Prestige in multi-document text summarization", "authors": [ { "first": "G", "middle": [], "last": "Erkan", "suffix": "" }, { "first": "D", "middle": [ "R" ], "last": "Radev", "suffix": "" } ], "year": 2004, "venue": "EMNLP 2004: Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Erkan and D. R. Radev. 2004a. Lexpagerank: Prestige in multi-document text summarization. In EMNLP 2004: Empirical Methods in Natural Lan- guage Processing, 2004, Barcelona, Spain.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Lexrank: Graphbased centrality as salience in text summarization", "authors": [ { "first": "G", "middle": [], "last": "Erkan", "suffix": "" }, { "first": "D", "middle": [ "R" ], "last": "Radev", "suffix": "" } ], "year": 2004, "venue": "Journal of Artificial Intelligence Research", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Erkan and D. R. Radev. 2004b. Lexrank: Graph- based centrality as salience in text summarization. Journal of Artificial Intelligence Research (JAIR), 22.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "WordNet: An Electronic Lexical Database", "authors": [], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Fellbaum, editor. 1998. WordNet: An Electronic Lex- ical Database. 
MIT Press, Cambridge, MA.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Ntt's text summarization system for DUC-2002", "authors": [ { "first": "T", "middle": [], "last": "Hirao", "suffix": "" }, { "first": "Y", "middle": [], "last": "Sasaki", "suffix": "" }, { "first": "H", "middle": [], "last": "Isozaki", "suffix": "" }, { "first": "E", "middle": [], "last": "Maeda", "suffix": "" } ], "year": 2002, "venue": "DUC 2002: Workshop on Text Summarization", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Hirao, Y. Sasaki, H. Isozaki, and E. Maeda. 2002. Ntt's text summarization system for DUC-2002. In DUC 2002: Workshop on Text Summarization, July 11-12, 2002, Philadelphia, PA, USA.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Query independent sentence scoring approach to DUC", "authors": [ { "first": "J", "middle": [], "last": "Jagalamudi", "suffix": "" }, { "first": "P", "middle": [], "last": "Pingali", "suffix": "" }, { "first": "V", "middle": [], "last": "Varma", "suffix": "" } ], "year": 2006, "venue": "DUC 2006: Document Understanding Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Jagalamudi, P. Pingali, and V. Varma. 2006. Query independent sentence scoring approach to DUC 2006. In DUC 2006: Document Understanding Conference, June 8-9, 2006, Brooklyn, NY, USA.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Using hidden markov modeling to decompose human-written summaries", "authors": [ { "first": "H", "middle": [], "last": "Jing", "suffix": "" } ], "year": 2002, "venue": "Computational Linguistics", "volume": "4", "issue": "28", "pages": "527--543", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Jing. 2002. Using hidden markov modeling to de- compose human-written summaries. Computational Linguistics, 4(28):527-543.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A trainable document summarizer. Research and Development in Information Retrieval", "authors": [ { "first": "J", "middle": [], "last": "Kupiec", "suffix": "" }, { "first": "J", "middle": [], "last": "Pererson", "suffix": "" }, { "first": "F", "middle": [], "last": "Chen", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "68--73", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Kupiec, J. Pererson, and F. Chen. 1995. A trainable document summarizer. Research and Development in Information Retrieval, pages 68-73.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Extract-based summarization with simplification", "authors": [ { "first": "P", "middle": [], "last": "Lal", "suffix": "" }, { "first": "S", "middle": [], "last": "Ruger", "suffix": "" } ], "year": 2002, "venue": "DUC 2002: Workshop on Text Summarization", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Lal and S. Ruger. 2002. Extract-based summarization with simplification. 
In DUC 2002: Workshop on Text Summarization, July 11-12, 2002, Philadelphia, PA, USA.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Efficient backprop", "authors": [ { "first": "Y", "middle": [], "last": "Le Cun", "suffix": "" }, { "first": "L", "middle": [], "last": "Bottou", "suffix": "" }, { "first": "G", "middle": [ "B" ], "last": "Orr", "suffix": "" }, { "first": "K", "middle": [ "R" ], "last": "M\u00fcller", "suffix": "" } ], "year": 1998, "venue": "Neural Networks, Tricks of the Trade, Lecture Notes in Computer Science LNCS 1524", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Le Cun, L. Bottou, G.B. Orr, and K.R. M\u00fcller. 1998. Efficient backprop. In Neural Networks, Tricks of the Trade, Lecture Notes in Computer Science LNCS 1524. Springer Verlag.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Automated multi-document summarization in neats", "authors": [ { "first": "C", "middle": [ "Y" ], "last": "Lin", "suffix": "" }, { "first": "E", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the Human Language Technology Conference (HLT2002)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C.Y. Lin and E. Hovy. 2002. Automated multi-document summarization in neats. In Proceedings of the Human Language Technology Conference (HLT2002).", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Looking for a few good metrics: Automatic summarization evaluation -how many samples are enough?", "authors": [ { "first": "C", "middle": [ "Y" ], "last": "Lin", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the NTCIR Workshop", "volume": "4", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C.Y. Lin. 2004a. Looking for a few good metrics: Auto- matic summarization evaluation -how many samples are enough? In Proceedings of the NTCIR Workshop 4, June 2-4, 2004, Tokyo, Japan.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Rouge: A package for automatic evaluation of summaries", "authors": [ { "first": "C", "middle": [ "Y" ], "last": "Lin", "suffix": "" } ], "year": 2004, "venue": "WAS 2004: Proceedings of the Workshop on Text Summarization Branches Out", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C.Y. Lin. 2004b. Rouge: A package for automatic evalu- ation of summaries. In WAS 2004: Proceedings of the Workshop on Text Summarization Branches Out, July 25-26, 2004, Barcelona, Spain.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "The automatic creation of literature abstracts", "authors": [ { "first": "H", "middle": [], "last": "Luhn", "suffix": "" } ], "year": 1958, "venue": "IBM Journal of Research and Development", "volume": "2", "issue": "2", "pages": "159--165", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Luhn. 1958. The automatic creation of literature ab- stracts. IBM Journal of Research and Development, 2(2):159-165.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Automatic Summarization", "authors": [ { "first": "I", "middle": [], "last": "Mani", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "I. Mani. 2001. Automatic Summarization. John Ben- jamins Pub. 
Co.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Textgraphs: Graph-based methods for NLP", "authors": [ { "first": "R", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "D", "middle": [ "R" ], "last": "Radev", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Mihalcea and D. R. Radev, editors. 2006. Textgraphs: Graph-based methods for NLP. New York City, NY.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "An algorithm for language independent single and multiple document summarization", "authors": [ { "first": "R", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "P", "middle": [], "last": "Tarau", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the International Joint Conference on Natural Language Processing (IJC-NLP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Mihalcea and P. Tarau. 2005. An algorithm for lan- guage independent single and multiple document sum- marization. In Proceedings of the International Joint Conference on Natural Language Processing (IJC- NLP), October, 2005, Korea.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Language independent extractive summarization", "authors": [ { "first": "R", "middle": [], "last": "Mihalcea", "suffix": "" } ], "year": 2005, "venue": "ACL 2005: Proceedings of the 43rd", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Mihalcea. 2005. Language independent extractive summarization. In ACL 2005: Proceedings of the 43rd", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Annual Meeting of the Association for Computational Linguistics", "authors": [], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Meeting of the Association for Computational Linguistics, June, 2005, Ann Arbor, MI, USA.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "A compositional context sensitive multi-document summarizer: exploring the factors that influence summarization", "authors": [ { "first": "A", "middle": [], "last": "Nenkova", "suffix": "" }, { "first": "L", "middle": [], "last": "Vanderwende", "suffix": "" }, { "first": "K", "middle": [], "last": "Mckeown", "suffix": "" } ], "year": 2006, "venue": "SIGIR", "volume": "", "issue": "", "pages": "573--580", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Nenkova, L. Vanderwende, and K. McKeown. 2006. A compositional context sensitive multi-document summarizer: exploring the factors that influence sum- marization. In E. N. Efthimiadis, S. T. Dumais, D. Hawking, and K. J\u00e4rvelin, editors, SIGIR, pages 573-580. ACM.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Automatic text summarization of newswire: Lessons learned from the document understanding conference", "authors": [ { "first": "A", "middle": [], "last": "Nenkova", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 20th National Conference on Artificial Intelligence (AAAI 2005)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Nenkova. 2005. Automatic text summarization of newswire: Lessons learned from the document un- derstanding conference. 
In Proceedings of the 20th National Conference on Artificial Intelligence (AAAI 2005), Pittsburgh, PA.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Building a resource for evaluating the importance of sentences", "authors": [ { "first": "B", "middle": [], "last": "Schiffman", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the Third International Conference on Language Resources and Evaluation (LREC)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Schiffman. 2002. Building a resource for evaluat- ing the importance of sentences. In Proceedings of the Third International Conference on Language Re- sources and Evaluation (LREC).", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Web-page summarization using click-through data", "authors": [ { "first": "J", "middle": [ "T" ], "last": "Sun", "suffix": "" }, { "first": "D", "middle": [], "last": "Shen", "suffix": "" }, { "first": "H", "middle": [ "J" ], "last": "Zeng", "suffix": "" }, { "first": "Q", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Y", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Z", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J.T. Sun, D. Shen, H.J. Zeng, Q. Yang, Y. Lu, and Z. Chen. 2005. Web-page summarization using click-through data. In R. A. Baeza-Yates, N. Ziviani, G. Marchionini, A. Moffat, and J. Tait, editors, SIGIR. ACM.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "The pythy summarization system: Microsoft research at DUC2007", "authors": [ { "first": "K", "middle": [], "last": "Toutanova", "suffix": "" }, { "first": "C", "middle": [], "last": "Brockett", "suffix": "" }, { "first": "M", "middle": [], "last": "Gamon", "suffix": "" }, { "first": "J", "middle": [], "last": "Jagarlamudi", "suffix": "" }, { "first": "H", "middle": [], "last": "Suzuki", "suffix": "" }, { "first": "L", "middle": [], "last": "Vanderwende", "suffix": "" } ], "year": 2007, "venue": "DUC 2007: Document Understanding Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Toutanova, C. Brockett, M. Gamon, J. Jagarla- mudi, H. Suzuki, and L. Vanderwende. 2007. The pythy summarization system: Microsoft research at DUC2007. In DUC 2007: Document Understanding Conference, April 26-27, 2007, Rochester, NY, USA.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Microsoft research at DUC2006: Task-focused summarization with sentence simplification", "authors": [ { "first": "L", "middle": [], "last": "Vanderwende", "suffix": "" }, { "first": "H", "middle": [], "last": "Suzuki", "suffix": "" }, { "first": "C", "middle": [], "last": "Brockett", "suffix": "" } ], "year": 2006, "venue": "DUC 2006: Document Understanding Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Vanderwende, H. Suzuki, and C. Brockett. 2006. Mi- crosoft research at DUC2006: Task-focused summa- rization with sentence simplification. 
In DUC 2006: Document Understanding Workshop, June 8-9, 2006, Brooklyn, NY, USA.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Wikipedia org", "authors": [ { "first": "", "middle": [], "last": "Wikipedia", "suffix": "" }, { "first": "", "middle": [], "last": "Org", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wikipedia.org. 2007. Wikipedia org.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Multi-document summarization by maximizing informative content words", "authors": [ { "first": "W", "middle": [ "T" ], "last": "Yih", "suffix": "" }, { "first": "J", "middle": [], "last": "Goodman", "suffix": "" }, { "first": "L", "middle": [], "last": "Vanderwende", "suffix": "" }, { "first": "H", "middle": [], "last": "Suzuki", "suffix": "" } ], "year": 2007, "venue": "IJCAI 2007: 20th International Joint Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "W.T. Yih, J. Goodman, L. Vanderwende, and H. Suzuki. 2007. Multi-document summarization by maximizing informative content words. In IJCAI 2007: 20th In- ternational Joint Conference on Artificial Intelligence, January, 2007.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "num": null, "text": "Example document containing highlights and article text. Sentences are numbered by their position. Article is from (CNN.com, 2007b)." }, "TABREF0": { "type_str": "table", "html": null, "text": "Example document containing highlights and article text.", "content": "
TIMESTAMP: 1:59 p.m. EST, January 31, 2007
TITLE: Nigeria reports first human death from bird flu
HIGHLIGHT 1: Government boosts surveillance after woman dies
HIGHLIGHT 2: Egypt, Djibouti also have reported bird flu in humans
HIGHLIGHT 3: H5N1 bird flu virus has killed 164 worldwide since 2003
ARTICLE: 1. Health officials reported Nigeria's first cases of bird flu in humans on Wednesday,
saying one woman had died and a family member had been infected but was responding to
treatment. 2. The victim, a 22-year old woman in Lagos, died January 17, Information Minister
Frank Nweke said in a statement. 3. He added that the government was boosting surveillance
across Africa's most-populous nation after the infections in Lagos, Nigeria's biggest city. 4.
The World Health Organization had no immediate confirmation. 5. Nigerian health officials
earlier said 14 human samples were being tested. 6. Nweke made no mention of those cases on
Wednesday. 7. An outbreak of H5N1 bird flu hit Nigeria last year, but no human infections had
been reported until Wednesday. 8. Until the Nigerian report, Egypt and Djibouti were the only
African countries that had confirmed infections among people. 9. Eleven people have died in
Egypt. 10. The bird flu virus remains hard for humans to catch, but health experts fear H5N1 may
mutate into a form that could spread easily among humans and possibly kill millions in a flu
pandemic. 11. Amid a new H5N1 outbreak reported in recent weeks in Nigeria's north, hundreds of
miles from Lagos, health workers have begun a cull of poultry. 12. Bird flu is
", "num": null }, "TABREF1": { "type_str": "table", "html": null, "text": "Features used in our model.", "content": "
The use of sentence position, terms common with the title, appearance of keyword terms, and
other cue phrases is known as the Edmundsonian Paradigm (Edmundson, 1969).
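For illustration only, and not the paper's actual feature set or implementation, the short Python sketch below computes Edmundsonian-style surface features for a single sentence: its position in the article, its term overlap with the title, and the presence of cue phrases (the cue list here is hypothetical).

from typing import Dict

# Hypothetical cue phrases, chosen only for illustration.
CUE_PHRASES = ["in conclusion", "in summary", "most importantly"]

def edmundsonian_features(sentence: str, position: int,
                          num_sentences: int, title: str) -> Dict[str, float]:
    """Surface features in the spirit of the Edmundsonian Paradigm."""
    words = set(sentence.lower().split())
    title_words = set(title.lower().split())
    return {
        # The first sentence scores 1.0; later sentences score progressively lower.
        "position": 1.0 - position / max(num_sentences, 1),
        # Fraction of title terms that also occur in the sentence.
        "title_overlap": len(words & title_words) / max(len(title_words), 1),
        # 1.0 if any cue phrase appears in the sentence, else 0.0.
        "cue_phrase": float(any(cue in sentence.lower() for cue in CUE_PHRASES)),
    }

# Example: the first article sentence from the example document above.
print(edmundsonian_features(
    "Health officials reported Nigeria's first cases of bird flu in humans on Wednesday.",
    position=0, num_sentences=12,
    title="Nigeria reports first human death from bird flu"))

In practice such raw features would be normalized and combined by a learned ranker (here, RankNet) rather than by hand-set weights.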
", "num": null }, "TABREF2": { "type_str": "table", "html": null, "text": "Results on ordered highlights task with standard error at 95% confidence. Bold indicates significance under paired tests.", "content": "
System    | Sent. # | ROUGE-1
Baseline  | S 1     | 0.167
NetSum(1) | S 1     | 0.167
Baseline  | S 2     | 0.111
NetSum(2) | S 1     | 0.556
Baseline  | S 3     | 0.000
NetSum(3) | S 15    | 0.400
", "num": null } } } }