{ "paper_id": "D15-1012", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:28:36.471780Z" }, "title": "Phrase-based Compressive Cross-Language Summarization", "authors": [ { "first": "Jin-Ge", "middle": [], "last": "Yao", "suffix": "", "affiliation": {}, "email": "yaojinge@pku.edu.cn" }, { "first": "Xiaojun", "middle": [], "last": "Wan", "suffix": "", "affiliation": {}, "email": "wanxiaojun@pku.edu.cn" }, { "first": "Jianguo", "middle": [], "last": "Xiao", "suffix": "", "affiliation": {}, "email": "xiaojianguo@pku.edu.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The task of cross-language document summarization is to create a summary in a target language from documents in a different source language. Previous methods only involve direct extraction of automatically translated sentences from the original documents. Inspired by phrasebased machine translation, we propose a phrase-based model to simultaneously perform sentence scoring, extraction and compression. We design a greedy algorithm to approximately optimize the score function. Experimental results show that our methods outperform the state-of-theart extractive systems while maintaining similar grammatical quality.", "pdf_parse": { "paper_id": "D15-1012", "_pdf_hash": "", "abstract": [ { "text": "The task of cross-language document summarization is to create a summary in a target language from documents in a different source language. Previous methods only involve direct extraction of automatically translated sentences from the original documents. Inspired by phrasebased machine translation, we propose a phrase-based model to simultaneously perform sentence scoring, extraction and compression. We design a greedy algorithm to approximately optimize the score function. Experimental results show that our methods outperform the state-of-theart extractive systems while maintaining similar grammatical quality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The task of cross-language summarization is to produce a summary in a target language from documents written in a different source language. This task is particularly useful for readers to quickly get the main idea of documents written in a source language that they are not familiar with. Following Wan (2011), we focus on English-to-Chinese summarization in this work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The simplest and the most straightforward way to perform cross-language summarization is pipelining general summarization and machine translation. Such systems either translate all the documents before running generic summarization algorithms on the translated documents, or summarize from the original documents and then only translate the produced summary into the target language. Wan (2011) show that such pipelining approaches are inferior to methods that utilize information from both sides. In that work, the author proposes graph-based models and achieves fair amount of improvement. 
However, to the best of our knowledge, no previous work of this task tries to focus on summarization beyond pure sentence extraction.", "cite_spans": [ { "start": 384, "end": 394, "text": "Wan (2011)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "On the other hand, cross-language summarization can be seen as a special kind of machine translation: translating the original documents into a brief summary in a different language. Inspired by phrase-based machine translation models (Koehn et al., 2003) , we propose a phrase-based scoring scheme for cross-language summarization in this work.", "cite_spans": [ { "start": 235, "end": 255, "text": "(Koehn et al., 2003)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Since our framework is based on phrases, we are not limited to produce extractive summaries. We can use the scoring scheme to perform joint sentence selection and compression. Unlike typical sentence compression methods, our proposed algorithm does not require additional syntactic preprocessing such as part-of-speech tagging or syntactic parsing. We only utilize information from translated texts with phrase alignments. The scoring function consists of a submodular term of compressed sentences and a bounded distortion penalty term. We design a greedy procedure to efficiently get approximate solutions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For experimental evaluation, we use the DUC2001 dataset with manually translated reference Chinese summaries. Results based on the ROUGE metrics show the effectiveness of our proposed methods. We also conduct manual evaluation and the results suggest that the linguistic quality of produced summaries is not decreased by too much, compared with extractive counterparts. In some cases, the grammatical smoothness can even be improved by compression.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The contributions of this paper include:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Utilizing the phrase alignment information, we design a scoring scheme for the crosslanguage document summarization task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We design an efficient greedy algorithm to generate summaries. The greedy algorithm is partially submodular and has a provable constant approximation factor to the optimal solution up to a small constant.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We achieve state-of-the-art results using the extractive counterpart of our compressive summarization framework. Performance in terms of ROUGE metrics can be significantly improved when simultaneously performing extraction and compression.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Document summarization can be treated as a special kind of translation process: translating from a bunch of related source documents to a short target summary. This analogy also holds for crosslanguage document summarization, with the only difference that the languages of source documents and the target summary are different. 
Our design of the sentence scoring function for cross-language document summarization is inspired by phrase-based machine translation models. Here we briefly describe the general idea of phrase-based translation. One may refer to Koehn (2009) for a more detailed description.", "cite_spans": [ { "start": 562, "end": 574, "text": "Koehn (2009)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "Phrase-based machine translation models currently give state-of-the-art translations for many language pairs and dominate modern statistical machine translation. Classical word-based IBM models cannot capture local contextual information and local reordering very well. Phrase-based translation models operate on lexical entries with more than one word on both the source and the target side. Allowing multi-word expressions is believed to be the main reason for the improvements that phrase-based models give. Note that these multi-word expressions, typically referred to as phrases in the machine translation literature, are essentially continuous n-grams and do not need to be linguistically well-formed or meaningful constituents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase-based Machine Translation", "sec_num": "2.1" }, { "text": "Define y as a phrase-based derivation, or more precisely a finite sequence of phrases p_1, p_2, ..., p_L. For any derivation y we use e(y) to refer to the target-side translation text defined by y. This translation is derived by concatenating the strings e(p_1), e(p_2), ..., e(p_L). The scoring scheme for a phrase-based derivation y from the source sentence to the target sentence e(y) is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase-based Machine Translation", "sec_num": "2.1" }, { "text": "f(y) = \sum_{k=1}^{L} g(p_k) + LM(e(y)) + \sum_{k=1}^{L-1} \eta \, |start(p_{k+1}) - 1 - end(p_k)|", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase-based Machine Translation", "sec_num": "2.1" }, { "text": "where LM(\u2022) is the target-side language model score, g(\u2022) is the score function of phrases, and \u03b7 < 0 is the distortion parameter for penalizing the distance between neighboring phrases in the derivation. Note that the phrases addressed here are typically continuous n-grams and need not be grammatical linguistic phrasal units. Later we will directly use phrases provided by modern machine translation systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase-based Machine Translation", "sec_num": "2.1" }, { "text": "Searching for the best translation under this score definition is difficult in general. Thus approximate decoding algorithms such as beam search should be applied. Meanwhile, several constraints should be satisfied during the decoding process. 
The most important one is to set a constant limit on the distortion term, $|start(p_{k+1}) - 1 - end(p_k)| \le \delta$, to rule out derivations with distant phrase translations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase-based Machine Translation", "sec_num": "2.1" }, { "text": "Inspired by the general idea of phrase-based machine translation, we describe our proposed phrase-based model for cross-language summarization in this section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase-based Cross-Language Summarization", "sec_num": "3" }, { "text": "In the context of cross-language summarization, we assume that we also have phrases in both the source and target languages, along with phrase alignments between the two sides. For summarization purposes, we may wish to select sentences containing more important phrases. It is then plausible to measure the scores of these aligned phrases via importance weighting. Inspired by phrase-based translation models, we can assign phrase-based scores to sentences from the translated documents for summarization purposes. We define our scoring function for each sentence s as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase-based Sentence Scoring", "sec_num": "3.1" }, { "text": "F(s) = \sum_{p \in s} d_0 \, g(p) + bg(s) + \eta \, dist(y(s))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase-based Sentence Scoring", "sec_num": "3.1" }, { "text": "Here, in the first term, g(\u2022) is the score of phrase p, which can simply be set to its document frequency. The phrase score is penalized with a constant damping factor d_0 to decay scores for repeated phrases. The second term bg(s) is the bigram score of sentence s. It is used here to simulate the effect of language models in phrase-based translation models. Denoting y(s) as the phrase-based derivation (as mentioned in the previous section) of sentence s, the last distortion term", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase-based Sentence Scoring", "sec_num": "3.1" }, { "text": "dist(y(s)) = \sum_{k=1}^{L-1} |start(p_{k+1}) - 1 - end(p_k)|", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase-based Sentence Scoring", "sec_num": "3.1" }, { "text": "is exactly the same as the distortion penalty term in phrase-based translation models. This term can be used as a reflection of the complexity of the translation. All the above terms can be derived from bilingual sentence pairs with phrase alignments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase-based Sentence Scoring", "sec_num": "3.1" },
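To make the sentence scoring concrete, the following is a minimal Python sketch of how F(s) could be computed from a phrase-aligned translated sentence. The data layout (phrases as (text, start, end) tuples), the phrase-score table g and the bigram scorer bg are illustrative assumptions for exposition, not the implementation used in our experiments.

```python
# A minimal sketch, assuming each sentence is a phrase-based derivation given
# as a list of (phrase_text, start, end) tuples over the source side, a dict g
# mapping phrase text to its score (e.g. document frequency), and a bigram
# scoring function bg; all of these are hypothetical names for illustration.

def distortion(derivation):
    """dist(y(s)): sum of |start(p_{k+1}) - 1 - end(p_k)| over neighboring phrases."""
    return sum(abs(nxt_start - 1 - cur_end)
               for (_, _, cur_end), (_, nxt_start, _) in zip(derivation, derivation[1:]))

def sentence_score(derivation, g, bg, d0=0.5, eta=-0.5):
    """F(s) = sum_{p in s} d0 * g(p) + bg(s) + eta * dist(y(s))."""
    phrase_part = sum(d0 * g.get(phrase, 0.0) for phrase, _, _ in derivation)
    return phrase_part + bg(derivation) + eta * distortion(derivation)
```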
{ "text": "Meanwhile, we may also wish to exclude unimportant phrases and badly translated phrases. Our definition can also be used to guide sentence compression by trying to remove redundant phrases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase-based Sentence Scoring", "sec_num": "3.1" }, { "text": "Based on the definition over sentences, we define our summary scoring measure over a summary S:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase-based Sentence Scoring", "sec_num": "3.1" }, { "text": "F(S) = \sum_{p \in S} \sum_{i=1}^{count(p,S)} d^{i-1} g(p) + \sum_{s \in S} bg(s) + \eta \sum_{s \in S} dist(y(s))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase-based Sentence Scoring", "sec_num": "3.1" }, { "text": "where d is a predefined constant damping factor to penalize repeated occurrences of the same phrases, and count(p, S) is the number of occurrences of phrase p in the summary S. All other terms are inherited from the sentence score definition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase-based Sentence Scoring", "sec_num": "3.1" }, { "text": "In the next section we describe our framework to efficiently utilize this scoring function for cross-language summarization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase-based Sentence Scoring", "sec_num": "3.1" }, { "text": "Utilizing the phrase-based score definition of sentences, we can use greedy algorithms to simultaneously perform sentence selection and sentence compression. Assume that we have a predefined budget B (e.g. the total number of Chinese characters allowed) that restricts the total length of a generated summary. We use C(S) to denote the cost of a summary S, measured by the total number of Chinese characters it contains. The greedy algorithm we use for our compressive summarization is listed in Algorithm 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Greedy Algorithm for Compressed Sentence Selection", "sec_num": "3.2" }, { "text": "Algorithm 1: A greedy algorithm for phrase-based summarization. 1: S_0 \u2190 \u2205; 2: i \u2190 1; 3: single_best = argmax_{s \u2208 U, C({s}) \u2264 B} F({s}); 4: while U \u2260 \u2205 do; 5: s_i = argmax_{s \u2208 U} [F(S_{i-1} \u222a {s}) \u2212 F(S_{i-1})] / C({s})^r; 6: if C(S_{i-1} \u222a {s_i}) \u2264 B then; 7: S_i \u2190 S_{i-1} \u222a {s_i}; 8: i \u2190 i + 1; 9: end if; 10: U \u2190 U \\ {s_i}; 11: end while; 12: return S* = argmax_{S \u2208 {single_best, S_i}} F(S)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Greedy Algorithm for Compressed Sentence Selection", "sec_num": "3.2" }, { "text": "The space U denotes the set of all possible compressed sentences. In each iteration, the algorithm tries to find the compressed sentence with the maximum gain-cost ratio (Line 5, where we follow previous work and set r = 1), and merges it into the summary set at the current iteration (denoted as S_i). How to find the compression with the maximum gain-cost ratio will be discussed in the next section. Note that the algorithm is also naturally applicable to extractive summarization. For extractive summarization, Line 5 corresponds to direct calculation of sentence scores based on our proposed phrase-based function, and U denotes the set of all full sentences from the original translated documents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Greedy Algorithm for Compressed Sentence Selection", "sec_num": "3.2" },
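As an illustration, the following is a hedged Python sketch of Algorithm 1. The helpers summary_gain (the marginal gain F(S \u222a {s}) \u2212 F(S)) and cost (measured in Chinese characters), as well as the candidate list, are hypothetical names introduced for exposition; only the control flow mirrors the pseudocode above.

```python
# A sketch of Algorithm 1, assuming summary_gain(S, s) returns the marginal
# gain F(S + [s]) - F(S) and cost(S) returns the number of Chinese characters;
# both helpers and the candidate list are hypothetical, for illustration only.

def greedy_summarize(candidates, summary_gain, cost, budget, r=1.0):
    def total_score(S):
        # F(S) recovered by telescoping marginal gains, with F(empty set) = 0.
        score, acc = 0.0, []
        for s in S:
            score += summary_gain(acc, s)
            acc.append(s)
        return score

    # Best single candidate that fits the budget, kept as a fallback (Line 3).
    feasible = [s for s in candidates if cost([s]) <= budget]
    single_best = max(feasible, key=lambda s: summary_gain([], s), default=None)

    summary, remaining = [], list(candidates)
    while remaining:
        # Candidate with the largest gain-cost ratio (Line 5).
        best = max(remaining,
                   key=lambda s: summary_gain(summary, s) / (cost([s]) ** r))
        if cost(summary + [best]) <= budget:
            summary.append(best)
        remaining.remove(best)

    # Return the better of the greedy set and the single best sentence (Line 12).
    if single_best is not None and total_score([single_best]) > total_score(summary):
        return [single_best]
    return summary
```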
{ "text": "The outline of this algorithm is very similar to the greedy algorithm used by Morita et al. (2013) for subtree extraction, except that in our context the increase of the cost function when adding a sentence is exactly the cost of that sentence.", "cite_spans": [ { "start": 78, "end": 98, "text": "Morita et al. (2013)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "A Greedy Algorithm for Compressed Sentence Selection", "sec_num": "3.2" }, { "text": "When the distortion term is ignored (\u03b7 = 0), the scoring function is clearly submodular 1 (Lin and Bilmes, 2010) in terms of the set of compressed sentences, since the score then only consists of functional gains of phrases along with bigrams of a compressed sentence. Morita et al. (2013) have proved that when r = 1, this greedy algorithm achieves a constant approximation factor of $\frac{1}{2}(1 - e^{-1})$ to the optimal solution. Note that this only gives us a worst-case guarantee. What we can achieve in practice is usually far better.", "cite_spans": [ { "start": 268, "end": 288, "text": "Morita et al. (2013)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "A Greedy Algorithm for Compressed Sentence Selection", "sec_num": "3.2" }, { "text": "On the other hand, setting \u03b7 < 0 will not affect the performance guarantee too much. Intuitively, this is because in most phrase-based translation models a distortion limit constraint $|start(p_{k+1}) - 1 - end(p_k)| \le \delta$ is applied to the distortion terms, while performing sentence compression can never increase distortion. The main conclusion is formulated as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Greedy Algorithm for Compressed Sentence Selection", "sec_num": "3.2" }, { "text": "Theorem 1. If Algorithm 1 outputs S_{greedy} while the optimal solution is OPT, we have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Greedy Algorithm for Compressed Sentence Selection", "sec_num": "3.2" }, { "text": "F(S_{greedy}) \ge \frac{1}{2}(1 - e^{-1}) F(OPT) + \frac{1}{2} \eta \gamma.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Greedy Algorithm for Compressed Sentence Selection", "sec_num": "3.2" }, { "text": "Here \u03b3 > 0 is a constant controlled by the distortion difference between sentences, which is relatively small in practice compared with phrase scores, and \u03b7 < 0 is the distortion parameter. Note that when \u03b7 is set to 0, the scoring function is submodular and we recover the $\frac{1}{2}(1 - e^{-1})$ approximation factor as studied by Morita et al. (2013) . We leave the proof of Theorem 1 to the supplementary materials due to the space limit. The submodularity term in the score plays an important role in the proof.", "cite_spans": [ { "start": 325, "end": 345, "text": "Morita et al. (2013)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "A Greedy Algorithm for Compressed Sentence Selection", "sec_num": "3.2" }, { "text": "In Algorithm 1, the most important part is the greedy selection process (Line 5). The greedy selection criterion here is to maximize the gain-cost ratio. 
For compressive summarization, we are trying to compress each unselected sentence s to a compressed version $\tilde{s}$, aiming at maximizing the gain-cost ratio, where the gain corresponds to", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding the Maximum Density Compression", "sec_num": "3.3" }, { "text": "F(S_{i-1} \cup \{\tilde{s}\}) - F(S_{i-1}) = \sum_{p \in \tilde{s}} \sum_{i=1}^{count(p,S)} d^{i-1} g(p) + bg(\tilde{s}) + \eta \, dist(\tilde{s}),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding the Maximum Density Compression", "sec_num": "3.3" }, { "text": "and then add the compressed sentence with the maximum gain-cost ratio to the summary. We will also refer to the compression process for each sentence as finding the maximum density compression. The whole framework forms a joint selection and compression process. In our phrase-based scoring for sentences, although there exists no apparent optimal substructure for exact dynamic programming due to the nonlocal distortion penalty, we can have a tractable approximate procedure since the search space is only defined by local decisions on whether a phrase should be kept or dropped.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding the Maximum Density Compression", "sec_num": "3.3" }, { "text": "Our compression process for each sentence s is displayed in Algorithm 2. It gradually expands the set of phrases to be kept in the final compression: starting from the initial set of high-density phrases (Line 4, assuming that phrases with large scores and small costs will always be kept), we can recover the compression with maximum density. The function dist(\u2022, \u2022) is the unit distortion penalty defined as dist(a, b) = |start(b) \u2212 1 \u2212 end(a)|. We define p.score to be the sum of damped phrase scores for phrase p, i.e. $p.score = \sum_{i=1}^{count(p, S_{i-1})} d^{i-1} g(p)$, when the current partial summary is S_{i-1}. Therefore, during each iteration of the greedy selection process, the compression procedure will also be affected by sentences that have already been included. 
Define p.cost as the number of words p contains.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding the Maximum Density Compression", "sec_num": "3.3" }, { "text": "Algorithm 2 A growing algorithm for finding the maximum density compressed sentence", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding the Maximum Density Compression", "sec_num": "3.3" }, { "text": "1: function GET MAX DENSITY COMPRESSION(s, Si\u22121) 2:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding the Maximum Density Compression", "sec_num": "3.3" }, { "text": "queue Q \u2190 \u2205, kept \u2190 \u2205 3:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding the Maximum Density Compression", "sec_num": "3.3" }, { "text": "for each phrase p in s.phrases do 4:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding the Maximum Density Compression", "sec_num": "3.3" }, { "text": "if p.score/p.cost > 1 then 5:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding the Maximum Density Compression", "sec_num": "3.3" }, { "text": "kept \u2190 kept \u222a{p} 6:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding the Maximum Density Compression", "sec_num": "3.3" }, { "text": "Q.enqueue(p) 7:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding the Maximum Density Compression", "sec_num": "3.3" }, { "text": "end if 8: end for 9:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding the Maximum Density Compression", "sec_num": "3.3" }, { "text": "while Q = \u2205 do 10: p \u2190 Q.deque() 11:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding the Maximum Density Compression", "sec_num": "3.3" }, { "text": "ppv \u2190 p.previous phrase, pnx \u2190 p.next phrase 12:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding the Maximum Density Compression", "sec_num": "3.3" }, { "text": "if ppv.score+bg(ppv,p)+\u03b7dist (ppv,p) ppv.cost+p.cost > 1 then 13:", "cite_spans": [ { "start": 29, "end": 36, "text": "(ppv,p)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Finding the Maximum Density Compression", "sec_num": "3.3" }, { "text": "Q.enqueue(ppv), kept \u2190 kept \u222a{ppv} 14:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding the Maximum Density Compression", "sec_num": "3.3" }, { "text": "end if 15:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding the Maximum Density Compression", "sec_num": "3.3" }, { "text": "if pnx.score+bg(pnx,p)+\u03b7dist (p,pnx) p.cost+pnx.cost > 1 then 16:", "cite_spans": [ { "start": 29, "end": 36, "text": "(p,pnx)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Finding the Maximum Density Compression", "sec_num": "3.3" }, { "text": "Q.enqueue(pnx), kept \u2190 kept \u222a{pnx} 17:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding the Maximum Density Compression", "sec_num": "3.3" }, { "text": "end if 18: end while 19: returns = kept, ratio =", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding the Maximum Density Compression", "sec_num": "3.3" }, { "text": "F (S i\u22121 \u222a{s})\u2212F (S i\u22121 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding the Maximum Density Compression", "sec_num": "3.3" }, { "text": "s.cost", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding the Maximum Density Compression", 
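As a complement to the fragmented listing above, here is a hedged Python sketch of the growing procedure in Algorithm 2. The Phrase class, its fields and the bigram_score helper are assumptions made for illustration; the sketch follows the seed-and-grow logic of the pseudocode rather than reproducing the exact implementation.

```python
# A sketch of the maximum-density compression (Algorithm 2), under assumed
# data structures: each Phrase carries its source-side span, a damped score
# and a word-count cost; bigram_score is a hypothetical local smoothness score.
from collections import deque
from dataclasses import dataclass

@dataclass(frozen=True)
class Phrase:
    text: str
    start: int    # source-side start position
    end: int      # source-side end position
    score: float  # damped phrase score, sum_i d^{i-1} g(p)
    cost: int     # number of words in the phrase

def unit_dist(a, b):
    """dist(a, b) = |start(b) - 1 - end(a)|."""
    return abs(b.start - 1 - a.end)

def max_density_compression(phrases, bigram_score, eta=-0.5):
    kept, queue = set(), deque()
    # Seed with phrases whose own density already exceeds 1 (Line 4).
    for i, p in enumerate(phrases):
        if p.cost > 0 and p.score / p.cost > 1:
            kept.add(i)
            queue.append(i)
    # Grow towards neighbours whose joint density with a kept phrase exceeds 1.
    while queue:
        i = queue.popleft()
        for j in (i - 1, i + 1):
            if j < 0 or j >= len(phrases) or j in kept:
                continue
            left, right = phrases[min(i, j)], phrases[max(i, j)]
            density = ((phrases[j].score + bigram_score(left, right)
                        + eta * unit_dist(left, right))
                       / (left.cost + right.cost))
            if density > 1:
                kept.add(j)
                queue.append(j)
    # The compressed sentence keeps the surviving phrases in their original order.
    return [phrases[i] for i in sorted(kept)]
```

In the full framework, the resulting compression is then scored with the same gain-cost ratio used in Line 5 of Algorithm 1 to decide whether it enters the summary.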
"sec_num": "3.3" }, { "text": "Empirically we find this procedure gives almost the same results with exhaustive search while maintaining efficiency. Assuming that sentence length is no more than L, then the asymptotic complexity of Algorithm 2 will be O(L) since the algorithm requires two passes of all phrases. Therefore the whole framework requires O(kN L) time for a document cluster containing N sentences in total to generate a summary with k sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "20: end function", "sec_num": null }, { "text": "In the final compressed sentence we just leave the selected phrases continuously as they are, relying on bigram scores to ensure local smoothness. The task is after all a summarization task, where bigram scores play a role of not only controlling grammaticality but keeping main information of the original documents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "20: end function", "sec_num": null }, { "text": "Later we will see that this compression process will not hurt grammatical fluency of translated sentences in general. In many cases it may even improve fluency by deleting redundant parentheses or removing incorrectly reordered (unimportant) phrases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "20: end function", "sec_num": null }, { "text": "Currently there are not so many available datasets for our particular setting of the cross-language summarization task. Hence we only evaluate our method on the same dataset used by Wan (2011) . The dataset is created by manually translating the reference summaries into Chinese from the original DUC 2001 dataset in English. We will refer to this dataset as the DUC 2001 dataset in this paper. There are 30 English document sets in the DUC 2001 dataset for multi-document summarization. Each set contains several documents related to the same topic. Three generic reference English summaries are provided by NIST annotators for each document set. All these English summaries have been translated to Chinese by native Chinese annotators.", "cite_spans": [ { "start": 182, "end": 192, "text": "Wan (2011)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "4.1" }, { "text": "All the English sentences in the original documents have been automatically translated into Chinese using Google Translate. We also collect the phrase alignment information from the responses of Google Translate (stored in JSON format) along with the translated texts. We use the Stanford Chinese Word Segmenter 2 for Chinese word segmentation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "4.1" }, { "text": "The parameters in the algorithms are simply set to be r = 1, d = 0.5, \u03b7 = \u22120.5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "4.1" }, { "text": "We will report the performance of our compressive solution, denoted as PBCS (for Phrase-Based Compressive Summarization), with comparisons of the following systems:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4.2" }, { "text": "\u2022 PBES: The acronym comes from Phrase-Based Extractive Summarization. 
It is the extractive counterpart of our solution without calling Algorithm 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4.2" }, { "text": "\u2022 Baseline (EN): This baseline relies on merely the English-side information for En-glish sentence ranking in the original documents. The scoring function is designed to be document frequencies of English bigrams, which is similar to the second term in our proposed sentence scoring function in Section 3.1 and is submodular. 3 The extracted English summary is finally automatically translated into the corresponding Chinese summary. This is also known as the summary translation scheme.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4.2" }, { "text": "\u2022 Baseline (CN): This baseline relies on merely the Chinese-side information for Chinese sentence ranking. The scoring function is similarly defined by document frequency of Chinese bigrams. The Chinese summary sentences are then directly extracted from the translated Chinese documents. This is also known as the document translation scheme.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4.2" }, { "text": "\u2022 CoRank: We reimplement the graph-based CoRank algorithm, which gives the state-ofthe-art performance on the same DUC 2001 dataset for comparison.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4.2" }, { "text": "\u2022 Baseline (ENcomp): This is a compressive baseline where the extracted English sentences in Baseline (EN) will be compressed before being translated to Chinese. The compression process follows from an integer linear program as described by Clarke and Lapata (2008) . This baseline gives strong performance as we have found on English DUC 2001 dataset as well as other monolingual datasets.", "cite_spans": [ { "start": 241, "end": 265, "text": "Clarke and Lapata (2008)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4.2" }, { "text": "We experiment with two kinds of summary budgets for comparative study. The first one is limiting the summary length to be no more than five sentences. The second one is limiting the total number of Chinese characters of each produced summary to be no more than 300. They will be addressed as Sentence Budgeting and Character Budgeting in the experimental results respectively. Similar to traditional summarization tasks, we use the ROUGE metrics for automatic evaluation of all systems in comparison. The ROUGE metrics measure summary quality by counting overlapping word units (e.g. n-grams) between the candidate summary and the reference summary. Following previous work in the same task, we report the following ROUGE F-measure scores: ROUGE-1 (unigrams), ROUGE-2 (bigrams), ROUGE-W (weighted longest common subsequence; weight=1.2), ROUGE-L (longest common subsequences), and ROUGE-SU4 (skip bigrams with a maximum distance of 4). Here we investigate two kinds of ROUGE metrics for Chinese: ROUGE metrics based on words (after Chinese word segmentation) and ROUGE metrics based on singleton Chinese characters. 
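To make the character-based variant concrete, the following is a simplified sketch of a character-level ROUGE-1 F-measure; it is a toy re-implementation for exposition only, not the official ROUGE toolkit used to produce the reported scores.

```python
# A simplified character-based ROUGE-1 F-measure: candidate and reference
# summaries are compared as bags of Chinese characters, so no word
# segmentation is required. Illustrative only.
from collections import Counter

def rouge1_char_f(candidate, reference):
    cand = Counter(ch for ch in candidate if not ch.isspace())
    ref = Counter(ch for ch in reference if not ch.isspace())
    if not cand or not ref:
        return 0.0
    overlap = sum((cand & ref).values())  # clipped character overlap
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```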
The latter metrics will not suffer from the problem of word segmentation inconsistency.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4.2" }, { "text": "To compare our method with extractive baselines in terms of information loss and grammatical quality, we also ask three native Chinese students as annotators to carry out manual evaluation. The aspects considered during evaluation include Grammaticality (GR), Non-Redundancy (NR), Referential Clarity (RC), Topical Focus (TF) and Structural Coherence (SC). Each aspect is rated with scores from 1 (poor) to 5 (good) 4 . This evaluation is performed on the same random sample of 10 document sets from the DUC 2001 dataset. One group of the gold-standard summaries is left out for evaluation of human-level performance. The other two groups are shown to the annotators, giving them a sense of topics talked about in the document sets. Table 1 and Table 2 display the ROUGE results for our proposed methods and the baseline methods, including both word-based and character-based evaluation. We also conduct pairwise t-test and find that almost all the differences between PBCS and other systems are statistically significant with p 0.01 5 except for the ROUGE-W metric. We have the same observations with previous work on the inferiority of using information from only one-side, while using Chinese-side information only is more beneficial than English-side only. The CoRank algorithm utilizes both sides of information together and achieves significantly better performance over Baseline (EN) and Baseline(CN). Our compressive system outperforms the CoRank algorithm 6 in all metrics.", "cite_spans": [ { "start": 1386, "end": 1390, "text": "(EN)", "ref_id": null } ], "ref_spans": [ { "start": 733, "end": 752, "text": "Table 1 and Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Evaluation", "sec_num": "4.2" }, { "text": "Also our system overperforms the compressive pipelining system (Baseline(ENcomp)) as well. Note that the latter only considers information from the source language side. Meanwhile sentence compression may sometimes causes worse translations compared with translating the full original sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4.3" }, { "text": "For manual evaluation, the average score and standard deviation for each metric is displayed in Table 3 . From the comparison between compressive summarization and the extractive version, there exist slight improvements of nonredundancy. This exactly matches what we can expect from sentence compression that keeps only important part and drop redundancy. We also observe certain amount of improvements on referential clarity. This may be a result of deletions of some phrases containing pronouns, such as he said. Most of such phrases are semantically unimportant and will be dropped during the process of finding the maximum density compression.", "cite_spans": [], "ref_spans": [ { "start": 96, "end": 103, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4.3" }, { "text": "Despite not directly using syntactic information, our compressive summaries do not suffer too much loss of grammaticality. This suggest that bigrams can be treated as good indicators of local grammatical smoothness. 
We reckon that sentences describing the same events may partially share descriptive bigram patterns, thus sentences selected by the algorithm will consist of mostly important patterns that appear repeatedly in the original document cluster. Only those words that are neither semantically important nor syntactically pivotal will be deleted. Figure 1 lists the summaries for the first document set D04 in the DUC 2001 dataset produced by the proposed compressive system. The Chinese side sentences have been split with spaces according to phrase alignment results. Phrases that have been compressed are grayed out. We also include original English sentences for reference, with deletions according to word alignments from the Chinese sentences. We can observe that our compressive system tries to compress sentences by removing relatively unimportant phrases. The effect of translation errors (e.g. the word watch in on storm watch has been incorrectly translated in the example) can also be reduced since those incorrectly translated words will be dropped for having low information gains. In some cases the gram- Wan (2011) . We believe that this comes from different machine translation results output by Google Translate. Table 3 : Manual evaluation results matical fluency can even be improved from sentence compression, as redundant parentheses may sometimes be removed. We leave the output summaries from all systems for the same document set to supplementary materials.", "cite_spans": [ { "start": 1330, "end": 1340, "text": "Wan (2011)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 557, "end": 565, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 1441, "end": 1448, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4.3" }, { "text": "In our experiments, we also study the influence of relevant parameter settings. Figure 2a depicts the variation of ROUGE-2 F-measure when changing the damping factor d from different values in {1, 2 \u22121 , 3 \u22121 , 4 \u22121 , 5 \u22121 }, while \u03b7 = \u22120.5 being fixed. We can see that under proper range the value of d does not effect the result for too much. No damping or too much damping will severely decrease the performance. Figure 2b shows the performance change under different settings of the distortion parameter \u03b7 taking values from {0, \u22120.2, \u22120.5, \u22121, \u22123}, while fixing d = 0.5. The results suggest that, for our purposes of summarization, the difference of considering distortion penalty or not is obvious. At certain level, the effect brought by different values distortion parameter becomes stable.", "cite_spans": [], "ref_spans": [ { "start": 80, "end": 89, "text": "Figure 2a", "ref_id": null }, { "start": 416, "end": 425, "text": "Figure 2b", "ref_id": null } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4.3" }, { "text": "We also empirically study the effect of approximation. The compressive summarization framework proposed in this paper can be trivially cast into an integer linear program (ILP), with the number of variables being too large to make the problem tractable 7 . In this experiment, we use Figure 2c , we depict the objective value achieved by ILP as exact solution, comparing with results from sentences which are gradually selected and compressed by our greedy algorithm. 
We can see that the approximation is close.", "cite_spans": [], "ref_spans": [ { "start": 284, "end": 293, "text": "Figure 2c", "ref_id": null } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4.3" }, { "text": "The task focused in this paper is cross-language document summarization. Several pilot studies have investigated this task. Before Wan (2011)'s work that explicitly utilizes bilingual information in a graph-based framework, earlier methods often use information only from one language (de Chalendar et al., 2005; Pingali et al., 2007; Orasan and Chiorean, 2008; Litvak et al., 2010) .", "cite_spans": [ { "start": 289, "end": 312, "text": "Chalendar et al., 2005;", "ref_id": "BIBREF5" }, { "start": 313, "end": 334, "text": "Pingali et al., 2007;", "ref_id": "BIBREF16" }, { "start": 335, "end": 361, "text": "Orasan and Chiorean, 2008;", "ref_id": "BIBREF15" }, { "start": 362, "end": 382, "text": "Litvak et al., 2010)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "This work is closely related to greedy algorithms for budgeted submodular maximization. Many studies have formalized text summarization tasks as submodular maximization problems (Lin and Bilmes, 2010; Lin and Bilmes, 2011; Morita et al., 2013) . A more recent work (Dasgupta et al., 2013) discussed the problem of maximizing a function with a submodular part and a nonsubmodular dispersion term, which may appear to be closer to our scoring functions.", "cite_spans": [ { "start": 178, "end": 200, "text": "(Lin and Bilmes, 2010;", "ref_id": "BIBREF11" }, { "start": 201, "end": 222, "text": "Lin and Bilmes, 2011;", "ref_id": "BIBREF12" }, { "start": 223, "end": 243, "text": "Morita et al., 2013)", "ref_id": "BIBREF14" }, { "start": 265, "end": 288, "text": "(Dasgupta et al., 2013)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "In recent years, some research has made progress beyond extractive summarization, espethe original maximization problem with pruned brute-force enumeration and therefore exactly optimal but too costly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "8 http://lpsolve.sourceforge.net/ cially in the context of compressive summarization. Zajic et al. (2006) tries a pipeline strategy with heuristics to generate multiple candidate compressions and extract from this compressed sentences. Berg-Kirkpatrick et al. (2011) create linear models of weights learned by structural SVMs for different components and tried to jointly formulate sentence selection and syntax tree trimming in integer linear programs. Woodsend and Lapata (2012) propose quasi tree substitution grammars for multiple rewriting operations. All these methods involve integer linear programming solvers to generate compressed summaries, which is time-consuming for multidocument summarization tasks. Almeida and Martins (2013) form the compressive summarization problem in a more efficient dual decomposition framework. Models for sentence compression and extractive summarization are trained by multitask learning techniques. Wang et al. (2013) explore different types of compression on constituent parse trees for query-focused summarization. Li et al. (2013) propose a guided sentence compression model with ILP-based summary sentence selection. 
Their following work (Li et al., 2014) incorporate various constraints on constituent parse trees to improve the linguistic quality of the compressed sentences. In these studies, the bestperforming systems require supervised learning for different subtasks. More recent work tries to formulate document summarization tasks as optimization problems and use their solutions to guide sentence compression (Li et al., 2015; Yao et al., 2015) . employ integer linear programming for conducting phrase selection and merging simultaneously to form compressed sentences after phrase extraction.", "cite_spans": [ { "start": 86, "end": 105, "text": "Zajic et al. (2006)", "ref_id": "BIBREF21" }, { "start": 715, "end": 741, "text": "Almeida and Martins (2013)", "ref_id": "BIBREF0" }, { "start": 942, "end": 960, "text": "Wang et al. (2013)", "ref_id": "BIBREF18" }, { "start": 1060, "end": 1076, "text": "Li et al. (2013)", "ref_id": "BIBREF8" }, { "start": 1185, "end": 1202, "text": "(Li et al., 2014)", "ref_id": "BIBREF9" }, { "start": 1566, "end": 1583, "text": "(Li et al., 2015;", "ref_id": "BIBREF2" }, { "start": 1584, "end": 1601, "text": "Yao et al., 2015)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "In this paper we propose a phrase-based framework for the task of cross-language document summarization. The proposed scoring scheme can be naturally operated on compressive summarization. We use efficient greedy procedure to approximately optimize the scoring function. Experimental results show improvements of our compressive solution over state-of-the-art systems. Even though we do not explicitly use any syntactic information, the generated summaries of our system do not lose much grammaticality and fluency. The scoring function in our framework is in- spired by earlier phrase-based machine translation models. Our next step is to try more fine-grained scoring schemes using similar techniques from modern approaches of statistical machine translation. To further improve grammaticality of generated summaries, we may try to sacrifice the time efficiency for a little bit and use syntactic information provided by syntactic parsers. Our framework currently uses only the single best translation. It will be more powerful to integrate machine translation and summarization, utilizing multiple possible translations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "Currently many successful statistical machine translation systems are phrase-based with alignment information provided and we utilize this fact in this work. 
It is interesting to explore how will the performance be affected if we are only provided with parallel sentences and then alignments can only be derived using an independent aligner.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "A set function F : 2 U \u2192 R defined over subsets of a universe set U is said to be submodular iff it satisfies the diminishing returns property:\u2200S \u2286 T \u2286 U \\ u, we have F (S \u222a {u}) \u2212 F (S) \u2265 F (T \u222a {u}) \u2212 F (T ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://nlp.stanford.edu/software/ segmenter.shtml", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In our experiments this method gives similar performance compared with graph-based pipelining baselines implemented in previous work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Fractional numbers are allowed for cases where the annotators feel uncertain about.5 The significance level holds after Bonferroni adjustment, for the purpose of multiple testing.6 There exists ignorable difference between the results of our reimplemented version of CoRank and those reported by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "By casting decisions on whether to select a certain phrase or bigram as binary variables, with additional linear constraints on phrase/bigram selection consistency, we get an ILP with essentially the same objective function and a linear budget constraint. This is conceptually equivalent to solving", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank all the anonymous reviewers for helpful comments and suggestions. This work was supported by National Hi-Tech Research and Development Program (863 Program) of China (2015AA015403, 2014AA015102) and National Natural Science Foundation of China (61170166, 61331011). The contact author of this paper, according to the meaning given to this role by Peking University, is Xiaojun Wan.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Fast and robust compressive summarization with dual decomposition and multi-task learning", "authors": [ { "first": "Miguel", "middle": [], "last": "Almeida", "suffix": "" }, { "first": "Andre", "middle": [], "last": "Martins", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "196--206", "other_ids": {}, "num": null, "urls": [], "raw_text": "Miguel Almeida and Andre Martins. 2013. Fast and robust compressive summarization with dual de- composition and multi-task learning. In Proceed- ings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 196-206, Sofia, Bulgaria, August. 
As- sociation for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Jointly learning to extract and compress", "authors": [ { "first": "Taylor", "middle": [], "last": "Berg-Kirkpatrick", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Gillick", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "481--490", "other_ids": {}, "num": null, "urls": [], "raw_text": "Taylor Berg-Kirkpatrick, Dan Gillick, and Dan Klein. 2011. Jointly learning to extract and compress. In Proceedings of the 49th Annual Meeting of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, pages 481-490, Portland, Ore- gon, USA, June. Association for Computational Lin- guistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Abstractive multidocument summarization via phrase selection and merging", "authors": [ { "first": "Lidong", "middle": [], "last": "Bing", "suffix": "" }, { "first": "Piji", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Liao", "suffix": "" }, { "first": "Wai", "middle": [], "last": "Lam", "suffix": "" }, { "first": "Weiwei", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Rebecca", "middle": [], "last": "Passonneau", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "1587--1597", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lidong Bing, Piji Li, Yi Liao, Wai Lam, Weiwei Guo, and Rebecca Passonneau. 2015. Abstractive multi- document summarization via phrase selection and merging. In Proceedings of the 53rd Annual Meet- ing of the Association for Computational Linguistics and the 7th International Joint Conference on Natu- ral Language Processing (Volume 1: Long Papers), pages 1587-1597, Beijing, China, July. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Global inference for sentence compression: An integer linear programming approach", "authors": [ { "first": "James", "middle": [], "last": "Clarke", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2008, "venue": "Journal of Artificial Intelligence Research", "volume": "31", "issue": "", "pages": "273--381", "other_ids": {}, "num": null, "urls": [], "raw_text": "James Clarke and Mirella Lapata. 2008. Global in- ference for sentence compression: An integer linear programming approach. Journal of Artificial Intelli- gence Research, 31:273-381.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Summarization through submodularity and dispersion", "authors": [ { "first": "Anirban", "middle": [], "last": "Dasgupta", "suffix": "" }, { "first": "Ravi", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Sujith", "middle": [], "last": "Ravi", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1014--1022", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anirban Dasgupta, Ravi Kumar, and Sujith Ravi. 2013. Summarization through submodularity and disper- sion. 
In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 1014-1022, Sofia, Bul- garia, August. Association for Computational Lin- guistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Crosslingual summarization with thematic extraction, syntactic sentence simplification, and bilingual generation", "authors": [ { "first": "Romaric", "middle": [], "last": "Ga\u00ebl De Chalendar", "suffix": "" }, { "first": "Olivier", "middle": [], "last": "Besan\u00e7on", "suffix": "" }, { "first": "Gregory", "middle": [], "last": "Ferret", "suffix": "" }, { "first": "Olivier", "middle": [], "last": "Grefenstette", "suffix": "" }, { "first": "", "middle": [], "last": "Mesnard", "suffix": "" } ], "year": 2005, "venue": "Workshop on Crossing Barriers in Text Summarization Research, 5th International Conference on Recent Advances in Natural Language Processing (RANLP2005)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ga\u00ebl de Chalendar, Romaric Besan\u00e7on, Olivier Ferret, Gregory Grefenstette, and Olivier Mesnard. 2005. Crosslingual summarization with thematic extrac- tion, syntactic sentence simplification, and bilin- gual generation. In Workshop on Crossing Barri- ers in Text Summarization Research, 5th Interna- tional Conference on Recent Advances in Natural Language Processing (RANLP2005).", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Statistical phrase-based translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Franz", "middle": [ "Josef" ], "last": "Och", "suffix": "" }, { "first": "Marcu", "middle": [], "last": "Daniel", "suffix": "" } ], "year": 2003, "venue": "Human Language Technologies: The 2003 Annual Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "48--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn, Franz Josef Och, and Marcu Daniel. 2003. Statistical phrase-based translation. In Hu- man Language Technologies: The 2003 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 48-54, Edmonton, May-June. Association for Com- putational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Statistical Machine Translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn. 2009. Statistical Machine Translation. Cambridge University Press.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Document summarization via guided sentence compression", "authors": [ { "first": "Chen", "middle": [], "last": "Li", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Fuliang", "middle": [], "last": "Weng", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "490--500", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen Li, Fei Liu, Fuliang Weng, and Yang Liu. 2013. Document summarization via guided sentence com- pression. 
In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Process- ing, pages 490-500, Seattle, Washington, USA, Oc- tober. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Improving multi-documents summarization by sentence compression based on expanded constituent parse trees", "authors": [ { "first": "Chen", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Lin", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Fuliang", "middle": [], "last": "Weng", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "691--701", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen Li, Yang Liu, Fei Liu, Lin Zhao, and Fuliang Weng. 2014. Improving multi-documents summa- rization by sentence compression based on expanded constituent parse trees. In Proceedings of the 2014 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 691-701, Doha, Qatar, October. Association for Computational Lin- guistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Reader-aware multi-document summarization via sparse coding", "authors": [ { "first": "Piji", "middle": [], "last": "Li", "suffix": "" }, { "first": "Lidong", "middle": [], "last": "Bing", "suffix": "" }, { "first": "Wai", "middle": [], "last": "Lam", "suffix": "" }, { "first": "Hang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Liao", "suffix": "" } ], "year": 2015, "venue": "IJCAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Piji Li, Lidong Bing, Wai Lam, Hang Li, and Yi Liao. 2015. Reader-aware multi-document summariza- tion via sparse coding. In IJCAI.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Multi-document summarization via budgeted maximization of submodular functions", "authors": [ { "first": "Hui", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Bilmes", "suffix": "" } ], "year": 2010, "venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "912--920", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hui Lin and Jeff Bilmes. 2010. Multi-document sum- marization via budgeted maximization of submod- ular functions. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Lin- guistics, pages 912-920, Los Angeles, California, June. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A class of submodular functions for document summarization", "authors": [ { "first": "Hui", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Bilmes", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "510--520", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hui Lin and Jeff Bilmes. 2011. A class of submodu- lar functions for document summarization. 
In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 510-520, Portland, Oregon, USA, June. Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A new approach to improving multilingual summarization using a genetic algorithm", "authors": [ { "first": "Marina", "middle": [], "last": "Litvak", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Last", "suffix": "" }, { "first": "Menahem", "middle": [], "last": "Friedman", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "927--936", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marina Litvak, Mark Last, and Menahem Friedman. 2010. A new approach to improving multilingual summarization using a genetic algorithm. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 927-936, Uppsala, Sweden, July. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Subtree extractive summarization via submodular maximization", "authors": [ { "first": "Hajime", "middle": [], "last": "Morita", "suffix": "" }, { "first": "Ryohei", "middle": [], "last": "Sasano", "suffix": "" }, { "first": "Hiroya", "middle": [], "last": "Takamura", "suffix": "" }, { "first": "Manabu", "middle": [], "last": "Okumura", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1023--1032", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hajime Morita, Ryohei Sasano, Hiroya Takamura, and Manabu Okumura. 2013. Subtree extractive summarization via submodular maximization. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1023-1032, Sofia, Bulgaria, August. Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Evaluation of a cross-lingual Romanian-English multi-document summariser", "authors": [ { "first": "Constantin", "middle": [], "last": "Orasan", "suffix": "" }, { "first": "Oana Andreea", "middle": [], "last": "Chiorean", "suffix": "" } ], "year": 2008, "venue": "LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Constantin Orasan and Oana Andreea Chiorean. 2008. Evaluation of a cross-lingual Romanian-English multi-document summariser. In LREC.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Experiments in cross language query focused multi-document summarization", "authors": [ { "first": "Prasad", "middle": [], "last": "Pingali", "suffix": "" }, { "first": "Jagadeesh", "middle": [], "last": "Jagarlamudi", "suffix": "" }, { "first": "Vasudeva", "middle": [], "last": "Varma", "suffix": "" } ], "year": 2007, "venue": "Workshop on Cross Lingual Information Access Addressing the Information Need of Multilingual Societies in IJCAI2007. Citeseer", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Prasad Pingali, Jagadeesh Jagarlamudi, and Vasudeva Varma. 2007. Experiments in cross language query focused multi-document summarization. In Workshop on Cross Lingual Information Access Addressing the Information Need of Multilingual Societies in IJCAI2007.
Citeseer.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Using bilingual information for cross-language document summarization", "authors": [ { "first": "Xiaojun", "middle": [], "last": "Wan", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1546--1555", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaojun Wan. 2011. Using bilingual information for cross-language document summarization. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1546-1555, Portland, Oregon, USA, June. Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A sentence compression based framework to query-focused multi-document summarization", "authors": [ { "first": "Lu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Hema", "middle": [], "last": "Raghavan", "suffix": "" }, { "first": "Vittorio", "middle": [], "last": "Castelli", "suffix": "" }, { "first": "Radu", "middle": [], "last": "Florian", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1384--1394", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lu Wang, Hema Raghavan, Vittorio Castelli, Radu Florian, and Claire Cardie. 2013. A sentence compression based framework to query-focused multi-document summarization. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1384-1394, Sofia, Bulgaria, August. Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Multiple aspect summarization using integer linear programming", "authors": [ { "first": "Kristian", "middle": [], "last": "Woodsend", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", "volume": "", "issue": "", "pages": "233--243", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kristian Woodsend and Mirella Lapata. 2012. Multiple aspect summarization using integer linear programming. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 233-243. Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Compressive document summarization via sparse optimization", "authors": [ { "first": "Jin-Ge", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Xiaojun", "middle": [], "last": "Wan", "suffix": "" }, { "first": "Jianguo", "middle": [], "last": "Xiao", "suffix": "" } ], "year": 2015, "venue": "IJCAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jin-ge Yao, Xiaojun Wan, and Jianguo Xiao. 2015. Compressive document summarization via sparse optimization.
In IJCAI.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Sentence compression as a component of a multi-document summarization system", "authors": [ { "first": "David", "middle": [ "M" ], "last": "Zajic", "suffix": "" }, { "first": "Bonnie", "middle": [], "last": "Dorr", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Schwartz", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 2006 Document Understanding Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David M Zajic, Bonnie Dorr, Jimmy Lin, and Richard Schwartz. 2006. Sentence compression as a component of a multi-document summarization system. In Proceedings of the 2006 Document Understanding Workshop, New York.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "uris": null, "text": "Figure 1: Example compressive summary" }, "FIGREF1": { "type_str": "figure", "num": null, "uris": null, "text": "Figure 2: Experimental analysis" }, "TABREF1": { "type_str": "table", "num": null, "content": "
Sentence BudgetingROUGE-1 ROUGE-2 ROUGE-W ROUGE-L ROUGE-SU4
Baseline(EN)0.348420.118230.055050.156650.12320
Baseline(CN)0.349010.120150.056640.159420.12625
PBES0.366180.122810.059130.160180.11317
CoRank (reimplemented)0.376010.125700.060880.173500.13352
Baseline(ENcomp)0.369820.130010.069060.162330.13543
PBCS0.378900.135490.071020.176320.14098
Character BudgetingROUGE-1 ROUGE-2 ROUGE-W ROUGE-L ROUGE-SU4
Baseline(EN)0.336020.105460.052630.154370.12161
Baseline(CN)0.340750.120120.056780.157360.11981
PBES0.354830.119020.056420.158990.11205
CoRank (reimplemented)0.361470.123050.058470.169620.13364
Baseline(ENcomp)0.366540.129600.065030.159870.13421
PBCS0.378420.134410.070050.169280.13985
", "text": "Results of word-based ROUGE evaluation", "html": null }, "TABREF2": { "type_str": "table", "num": null, "content": "
SystemGRNRRCTFSC
CoRank 3.00\u00b10.75 3.35\u00b10.57 3.55\u00b10.82 3.90\u00b10.79 3.55\u00b10.74
PBES2.90\u00b10.89 3.25\u00b10.70 3.50\u00b10.87 3.96\u00b10.80 3.45\u00b10.50
PBCS2.90\u00b10.83 3.60\u00b10.49 3.75\u00b10.82 3.93\u00b10.68 3.40\u00b10.58
Human4.60\u00b10.49 4.15\u00b10.73 4.35\u00b10.73 4.93\u00b10.25 3.90\u00b10.94
", "text": "Results of character-based ROUGE evaluation", "html": null }, "TABREF3": { "type_str": "table", "num": null, "content": "", "text": "\u51ef\u7279 \u5973\u58eb \u786c\u6717 \uff0c \u7d27\u6025\u670d\u52a1 \u5728\u4f5b\u7f57\u91cc\u8fbe\u5dde \u7684 \u6234\u5fb7 \u53bf\uff0c \u627f\u62c5\u4e86 \u98ce\u66b4 \u7684\u51b2\u51fb \u4e3b \u4efb \u4f30\u8ba1\uff0c \u5b89\u5fb7\u9c81 \u5df2\u7ecf \u9020\u6210 150\u4ebf \u7f8e\u5143 \u5230 200\u4ebf \u7f8e\u5143 \u7684\u635f\u5bb3 ( 75\u4ebf \u82f1\u9551 \uff0c 100\u4ebf \u82f1\u9551 ) \u3002 Ms Kate Hale, director of emergency services in Florida's Dade County, which bore the brunt of the storm, estimated that Andrew had already caused Dollars 15bn to Dollars 20bn (Pounds 7.5bn-Pounds 10bn) of damage.\u96e8\u679c\u98d3\u98ce \uff0c \u88ad\u51fb \u4e1c\u6d77\u5cb8 \u5728 1989\u5e749\u6708 \uff0c \u82b1\u8d39\u4e86 \u4fdd\u9669\u4e1a \u7ea6 42\u4ebf \u7f8e\u5143 \u3002 Hurricane Hugo, which hit the east coast in September 1989, cost the insurance industry about Dollars 4.2bn.\u7f8e\u56fd\u57ce\u5e02 \u6cbf \u58a8\u897f\u54e5\u6e7e\u7684 \u963f\u62c9\u5df4\u9a6c\u5dde \u5230\u5f97\u514b\u8428\u65af\u5dde \u4e1c\u90e8 \u662f \u5728 \u98ce\u66b4 \u624b\u8868 \u6628\u665a \u5b89 \u5fb7\u9c81 \u98d3\u98ce \u5411\u897f \u6a2a\u8de8 \u4f5b\u7f57\u91cc\u8fbe\u5dde\u5357\u90e8 \u5e2d\u5377 \u540e \uff0c\u9020\u6210 \u81f3\u5c11 \u516b\u4eba\u6b7b\u4ea1 \u548c\u4e25\u91cd\u7684 \u8d22 \u4ea7\u635f\u5931 \u3002 US CITIES along the Gulf of Mexico from Alabama to eastern Texas were on storm watch last night as Hurricane Andrew headed west after sweeping across southern Florida, causing at least eight deaths and severe property damage.\u8fc7\u53bb\u7684 \u4e25\u91cd \u98d3\u98ce \u7f8e\u56fd \uff0c\u96e8\u679c \uff0c \u88ad\u51fb \u5357\u5361\u7f57\u6765\u7eb3\u5dde \u4e8e1989\u5e74 \uff0c \u8017\u8d44 \u4ece \u4fdd\u9669 \u635f\u5931 \u884c\u4e1a 42\u4ebf \u7f8e\u5143 \uff0c\u4f46 \u9020\u6210\u7684 \u603b\u4f24\u5bb3 \u7684 \u4f30\u8ba1 60\u4ebf \u7f8e\u5143 \u548c 100\u4ebf \u7f8e\u5143 \u4e4b\u95f4 \u4e0d\u7b49 \u3002 The last serious US hurricane, Hugo, which struck South Carolina in 1989, cost the industry Dollars 4.2bn from insured losses, though estimates of the total damage caused ranged between Dollars 6bn and Dollars 10bn.\u6700\u521d\u7684 \u62a5\u9053\u79f0\uff0c \u81f3\u5c11\u6709\u4e00\u4eba \u5df2\u7ecf \u6b7b\u4ea1 \uff0c 75 \u4eba\u53d7\u4f24 \uff0c\u6570\u5343 \u53d6\u5f97 \u6cbf\u7740 \u8def\u6613\u65af\u5b89 \u90a3\u5dde\u6d77\u5cb8 \u65e0\u5bb6\u53ef\u5f52 \uff0c 14 \u8bc1\u5b9e \u5728\u4f5b\u7f57\u91cc\u8fbe\u5dde\u548c \u6b7b\u4ea1 \u4e09 \u5df4\u54c8\u9a6c\u7fa4\u5c9b \u540e \u3002 Initial reports said at least one person had died, 75 been injured and thousands made homeless along the Louisiana coast, after 14 confirmed deaths in Florida and three in the Bahamas.", "html": null } } } }