{ "paper_id": "J02-4006", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:44:03.081986Z" }, "title": "Using Hidden Markov Modeling to Decompose Human-Written Summaries", "authors": [ { "first": "Hongyan", "middle": [], "last": "Jing", "suffix": "", "affiliation": { "laboratory": "", "institution": "Lucent Technologies", "location": { "settlement": "Bell Laboratories" } }, "email": "hjing@research.bell" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Professional summarizers often reuse original documents to generate summaries. The task of summary sentence decomposition is to deduce whether a summary sentence is constructed by reusing the original text and to identify reused phrases. Specifically, the decomposition program needs to answer three questions for a given summary sentence: (1) Is this summary sentence constructed by reusing the text in the original document? (2) If so, what phrases in the sentence come from the original document? and (3) From where in the document do the phrases come? Solving the decomposition problem can lead to better text generation techniques for summarization. Decomposition can also provide large training and testing corpora for extraction-based summarizers. We propose a hidden Markov model solution to the decomposition problem. Evaluations show that the proposed algorithm performs well.", "pdf_parse": { "paper_id": "J02-4006", "_pdf_hash": "", "abstract": [ { "text": "Professional summarizers often reuse original documents to generate summaries. The task of summary sentence decomposition is to deduce whether a summary sentence is constructed by reusing the original text and to identify reused phrases. Specifically, the decomposition program needs to answer three questions for a given summary sentence: (1) Is this summary sentence constructed by reusing the text in the original document? (2) If so, what phrases in the sentence come from the original document? and (3) From where in the document do the phrases come? Solving the decomposition problem can lead to better text generation techniques for summarization. Decomposition can also provide large training and testing corpora for extraction-based summarizers. We propose a hidden Markov model solution to the decomposition problem. Evaluations show that the proposed algorithm performs well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "We define a problem referred to as summary sentence decomposition. The goal of a decomposition program is to determine the relations between phrases in a summary and phrases in the corresponding original document. Our analysis of a set of humanwritten summaries has indicated that professional summarizers often rely on cutting and pasting text from the original document to produce summaries. Unlike most current automatic summarizers, however, which extract sentences or paragraphs without any modification, professional summarizers edit the extracted text using a number of revision operations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Decomposition of human-written summaries involves analyzing a summary sentence to determine how it is constructed by humans. 
Specifically, we define the summary sentence decomposition problem as follows: Given a human-written summary sentence, a decomposition program needs to answer three questions: (1) Is this summary sentence constructed by reusing the text in the original document? (2) If so, what phrases in the sentence come from the original document? and (3) From where in the document do the phrases come? Here, the term phrase refers to any sentence component that is cut from the original document and reused in the summary. A phrase can be at any granularity, from a single word to a complicated verb phrase to a complete sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "There are two primary benefits of solving the summary sentence decomposition problem. First, decomposition can lead to better text generation techniques in summarization. Most domain-independent summarizers rely on simple extraction to produce summaries, even though extracted sentences can be incoherent, redundant, or misleading. By decomposing human-written sentences, we can deduce how summary sentences are constructed by humans. By learning how humans use revision operations to edit extracted sentences, we can develop automatic programs to simulate these revision operations and build a better text generation system for summarization. Second, the decomposition result also provides large corpora for extraction-based summarizers. By aligning summary sentences with original-document sentences, we can automatically annotate the most important sentences in an input document. By doing this automatically, we can afford to mark content importance for a large set of documents, thereby providing valuable training and testing data sets for extraction-based summarizers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "We propose a hidden Markov model solution to the summary sentence decomposition problem. In the next section, we show by example the revision operations used by professional summarizers. In Section 3, we present our solution to the decomposition problem by first mathematically formulating the decomposition problem and then presenting the hidden Markov model. In Section 4, we present three evaluation experiments and their results. Section 5 describes applications, and Section 6 discusses related work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "We analyzed a set of articles to observe how they were summarized by human abstractors. This set included 15 news articles on telecommunications, 5 articles on medical issues, and 10 articles in the legal domain. Although the individual articles related to specific domains, they covered a broad range of topics and differed in writing style and structure even within the same domain. The telecommunications articles were collected using the free daily news service Communications-Related Headlines, provided by the Benton Foundation (http://www.benton.org). The abstracts of these articles from various newspapers were written by staff writers at Benton. The medical news articles were collected from HIV/STD/TB Prevention News Update, provided by the Centers for Disease Control and Prevention (CDC) (http://www.cdcnpin.org/news/prevnews.htm). As a public service, CDC provides daily staff-written synopses of key scientific articles and lay media reports on HIV/AIDS.
The legal articles, which describe court decisions on lawsuits, come from the New York Law Journal and were summarized by the journal's editors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Revision Operations", "sec_num": "2." }, { "text": "From the corpus studied, we found that human abstractors almost universally reuse text in the original document to produce a summary of that document. This finding is consistent with Endres-Niggemeyer et al. (1998), which stated that professional abstractors often rely on cutting and pasting the original text to produce summaries.", "cite_spans": [ { "start": 186, "end": 217, "text": "Endres-Niggemeyer et al. (1998)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Revision Operations", "sec_num": "2." }, { "text": "Based on careful analysis of human-written summaries, we have defined six revision operations that can be used to transform a sentence in an article into a summary sentence in a human-written abstract: sentence reduction, sentence combination, syntactic transformation, lexical paraphrasing, generalization or specification, and reordering. The following sections examine each of these operations in turn.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Revision Operations", "sec_num": "2." }, { "text": "Sentence reduction. In sentence reduction, nonessential phrases are removed from a sentence, as in the following example (italics in the source sentence mark material that is removed): 1 Document sentence: When it arrives sometime next year in new TV sets, the V-chip will give parents a new and potentially revolutionary device to block out programs they don't want their children to see. Summary sentence: The V-chip will give parents a device to block out programs they don't want their children to see.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1.", "sec_num": null }, { "text": "The deleted material can be at any granularity: a word, a phrase, or a clause. Multiple components can be removed from a single sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1.", "sec_num": null }, { "text": "Sentence combination. In sentence combination, material from a few sentences is merged into a single sentence. This operation is typically used together with sentence reduction, as illustrated in the following example, which also employs paraphrasing (italics in the source sentences mark material that is removed; italics in the summary sentence mark material that is added): Document sentence 1: But it also raises serious questions about the privacy of such highly personal information wafting about the digital world. Document sentence 2: The issue thus fits squarely into the broader debate about privacy and security on the Internet, whether it involves protecting credit card numbers or keeping children from offensive information. Summary sentence: But it also raises the issue of privacy of such personal information and this issue hits the nail on the head in the broader debate about privacy and security on the Internet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "Syntactic transformation. Syntactic transformation involves changing the syntactic structure of a sentence. In both sentence reduction and sentence combination, syntactic transformations may also be involved.
In the following example, the sentence structure was changed from the causative clause structure in the original to a conjunctive structure in the summary. The subject of the causative clause and the subject of the main clause were combined during this operation. Document sentence: Since annoy.com enables visitors to send unvarnished opinions to political and other figures in the news, the company was concerned that its activities would be banned by the statute. Summary sentence: Annoy.com enables visitors to send unvarnished opinions to political and other figures in the news and feared the law could put them out of business.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "Lexical paraphrasing. In lexical paraphrasing, phrases are replaced with their paraphrases. For instance, in the example in item (2), the summary sentence replaced fit squarely into with a more picturesque description, hits the nail on the head.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4.", "sec_num": null }, { "text": "Generalization or specification. In generalization (specification), phrases or clauses are replaced with more general (specific) descriptions. Not all revision operations are listed here, because some operations are used infrequently. Note that multiple revision operations are often combined to produce a single summary sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5.", "sec_num": null }, { "text": "In human-written abstracts, some sentences are not based on cut and paste but are written from scratch. The main criterion we used to distinguish a cut-and-paste sentence from a sentence written from scratch was whether more than half of the words in the summary sentence belonged to phrases borrowed from the original document: if so, the sentence was considered to have been constructed by cut and paste; otherwise, it was considered to have been written from scratch. 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5.", "sec_num": null }, { "text": "Answering the three questions of the decomposition problem is difficult. Because the phrases that are borrowed from the original document can be at any granularity, determining phrase boundaries is not easy. Determining the origin of a phrase is also difficult, since the phrase may occur multiple times in the document in slightly different forms. Moreover, multiple revision operations may have been performed on the reused text. The resulting summary sentence can therefore differ significantly from the source document sentences from which it has been developed. All these factors complicate the decomposition problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using a Hidden Markov Model for Decomposition", "sec_num": "3." }, { "text": "We propose a hidden Markov model (HMM) (Baum 1972) solution to the decomposition problem. Our solution involves three steps. First, we formulate the decomposition problem as an equivalent problem; that is, for each word in a summary sentence, we identify a document position as its likely source. This step is important, since only after this transformation can we apply the HMM to solve the problem. Second, we build the HMM on a set of general heuristic rules observed from the text-reusing practice of humans. Although this is unconventional in applications that use HMMs, we believe it is appropriate in our particular application. Evaluations show that this unconventional HMM is effective for decomposition. In the last step, a dynamic programming technique, the Viterbi algorithm (Viterbi 1967), is used to find the most likely document position for each word in a summary sentence and the best decomposition for the sentence.", "cite_spans": [ { "start": 39, "end": 49, "text": "(Baum 1972", "ref_id": "BIBREF0" }, { "start": 780, "end": 794, "text": "(Viterbi 1967)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Using a Hidden Markov Model for Decomposition", "sec_num": "3." },
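To make the input/output contract of a decomposition program concrete before the formal development, the following minimal Python sketch shows one possible representation. It is illustrative only: the names Position and Decomposition are ours, not the paper's.

```python
from typing import List, Optional, Tuple

# A document position (SNUM, WNUM): sentence number and word number within
# the sentence, following the notation introduced in Section 3.1 below.
Position = Tuple[int, int]

# A decomposition assigns each summary word a document position, or None for
# words judged not to come from the original document.
Decomposition = List[Optional[Position]]

# Example: the summary fragment "the communications subcommittee of" aligned
# with words 39-42 of document sentence 2 (the Figure 1 example).
example: Decomposition = [(2, 39), (2, 40), (2, 41), (2, 42)]
```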
{ "text": "We first mathematically formulate the summary sentence decomposition problem. An input summary sentence can be represented as a word sequence $(I_1, ..., I_N)$, where $I_1$ is the first word of the sentence and $I_N$ is the last word. The position of a word in a document can be uniquely represented by the sentence position and the word position within the sentence: (SNUM, WNUM). For example, (4, 8) uniquely refers to the eighth word in the fourth sentence. Multiple occurrences of a word in the document can be represented by a set of word positions: ${(SNUM_1, WNUM_1), ..., (SNUM_M, WNUM_M)}$.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Formulating the Problem", "sec_num": "3.1" }, { "text": "Using the above notation, we formulate the decomposition problem as follows: Given a word sequence $(I_1, ..., I_N)$ and the set of positions ${(SNUM_1, WNUM_1), ..., (SNUM_M, WNUM_M)}$ for each word in the sequence, determine the most likely document position for each word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Formulating the Problem", "sec_num": "3.1" }, { "text": "Through this formulation, we transform the difficult tasks of identifying phrase boundaries and determining phrase origins into the problem of finding the most likely document position for each word. As shown in Figure 1, when a position has been chosen for each word in the summary sequence, we obtain a sequence of positions. For example, ((0,21), (2,40), (2,41), (0,31)) is our position sequence when the first occurrence of the same word in the document has been chosen for every summary word; ((0,26), (2,40), (2,41), (0,31)) is another position sequence. Every time a different position is chosen for a summary word, we obtain a different position sequence. The word the in the sequence occurs 44 times in the document, communications occurs once, subcommittee occurs twice, and of occurs 22 times. This four-word sequence therefore has a total of 1,936 (44 \u00d7 1 \u00d7 2 \u00d7 22) possible position sequences. 3 Morphological analysis or stemming can be performed to associate morphologically related words, but it is optional. In our experiments, applying stemming improved system performance when the human-written summaries included many words that were morphological variants of original-document words. Many human-written summaries in our experiments, however, contained few cases of morphological transformation of words and phrases borrowed from original documents, so stemming did not improve the performance for these summaries.", "cite_spans": [], "ref_spans": [ { "start": 210, "end": 218, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Formulating the Problem", "sec_num": "3.1" },
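As a concrete illustration of this formulation, the following sketch builds the word-to-positions table for a tokenized document and counts the possible position sequences, reproducing the 44 × 1 × 2 × 22 arithmetic above. The code is ours, with hypothetical helper names (build_position_table, count_position_sequences), not code from the paper:

```python
from collections import defaultdict
from math import prod
from typing import Dict, List, Tuple

Position = Tuple[int, int]  # (SNUM, WNUM)

def build_position_table(document: List[List[str]]) -> Dict[str, List[Position]]:
    """Map each word to every document position (SNUM, WNUM) where it occurs."""
    table: Dict[str, List[Position]] = defaultdict(list)
    for snum, sentence in enumerate(document):
        for wnum, word in enumerate(sentence):
            table[word.lower()].append((snum, wnum))
    return table

def count_position_sequences(summary_words: List[str],
                             table: Dict[str, List[Position]]) -> int:
    """The number of position sequences is the product of occurrence counts."""
    return prod(len(table[w.lower()]) for w in summary_words)

# For "the communications subcommittee of", with 44, 1, 2, and 22 occurrences
# respectively, this yields 44 * 1 * 2 * 22 = 1,936 possible sequences.
```

If stemming is used, it would simply normalize words before they are entered into, and looked up from, this table.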
{ "text": "Finding the most likely document position for each word is equivalent to finding the most likely position sequence among all possible position sequences. For the example in Figure 1, the most likely position sequence should be ((2,39), (2,40), (2,41), (2,42)); that is, the fragment comes from document sentence 2, and its position within the sentence runs from word number 39 to word number 42. How can we automatically find this sequence, however, among the 1,936 possible sequences?", "cite_spans": [], "ref_spans": [ { "start": 173, "end": 181, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Formulating the Problem", "sec_num": "3.1" }, { "text": "The exact document position from which a word in a summary comes depends on the word positions surrounding it. Using the bigram model, we assume that the probability of a word's coming from a certain position in the document depends only on the word directly before it in the sequence. Suppose $I_i$ and $I_{i+1}$ are two adjacent words in a summary sentence and $I_i$ is before $I_{i+1}$. We use $PROB(I_{i+1} = (S_2, W_2) | I_i = (S_1, W_1))$ to represent the probability that $I_{i+1}$ comes from sentence number $S_2$ and word number $W_2$ of the document when $I_i$ comes from sentence number $S_1$ and word number $W_1$.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Hidden Markov Model", "sec_num": "3.2" }, { "text": "To decompose a summary sentence, we must consider how humans are likely to generate it; we draw here on the revision operations discussed in Section 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Hidden Markov Model", "sec_num": "3.2" }, { "text": "[Figure 1: The sequences of positions in summary sentence decomposition.]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Hidden Markov Model", "sec_num": "3.2" }, { "text": "Two general heuristic rules can be safely assumed: First, humans are more likely to cut phrases than single, isolated words; second, humans are more likely to combine nearby sentences into a single sentence than sentences far apart. These two rules guide us in the decomposition process. We translate the heuristic rules into the bigram probability $PROB(I_{i+1} = (S_2, W_2) | I_i = (S_1, W_1))$, where $I_i$ and $I_{i+1}$ represent two adjacent words in the input summary sentence (abbreviated henceforth as $PROB(I_{i+1} | I_i)$). The values of $PROB(I_{i+1} | I_i)$ are assigned as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Hidden Markov Model", "sec_num": "3.2" }, { "text": "\u2022 If $S_1 = S_2$ and $W_1 = W_2 - 1$ (i.e., words in two adjacent positions in the document), then $PROB(I_{i+1} | I_i)$ is assigned the maximal value P1. For example, $PROB(subcommittee = (2, 41) | communications = (2, 40))$ in Figure 1 will be assigned the maximal value.
(Rule: Two adjacent words in a summary are most likely to come from two adjacent words in the document.)", "cite_spans": [], "ref_spans": [ { "start": 228, "end": 236, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "The Hidden Markov Model", "sec_num": "3.2" }, { "text": "\u2022 If $S_1 = S_2$ and $W_1 < W_2 - 1$, then $PROB(I_{i+1} | I_i)$ is assigned the second-highest value P2. For example, $PROB(of = (4, 16) | subcommittee = (4, 1))$ will be assigned a high probability. (Rule: Adjacent words in a summary are highly likely to come from the same sentence in the document, retaining their relative order, as in the case of sentence reduction. This rule can be further refined by adding restrictions on the distance between words.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Hidden Markov Model", "sec_num": "3.2" }, { "text": "\u2022 If $S_1 = S_2$ and $W_1 > W_2$, then $PROB(I_{i+1} | I_i)$ is assigned the third-highest value P3. For example, $PROB(of = (2, 30) | subcommittee = (2, 41))$. (Rule: Adjacent words in a summary can come from the same sentence in the document but change their relative order. For example, a subject can be moved from the end of the sentence to the front, as in syntactic transformation.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Hidden Markov Model", "sec_num": "3.2" }, { "text": "\u2022 If $S_2 - CONST < S_1 < S_2$, then $PROB(I_{i+1} | I_i)$ is assigned the fourth-highest value P4. For example, $PROB(of = (3, 5) | subcommittee = (2, 41))$. (Rule: Adjacent words in a summary can come from nearby sentences in the document and retain their relative order, as in sentence combination. CONST is a small constant such as 3 or 5.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Hidden Markov Model", "sec_num": "3.2" }, { "text": "\u2022 If $S_2 < S_1 < S_2 + CONST$, then $PROB(I_{i+1} | I_i)$ is assigned the fifth-highest value P5. For example, $PROB(of = (1, 10) | subcommittee = (2, 41))$. (Rule: Adjacent words in a summary can come from nearby sentences in the document but reverse their relative order.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Hidden Markov Model", "sec_num": "3.2" }, { "text": "\u2022 If $|S_2 - S_1| >= CONST$, then $PROB(I_{i+1} | I_i)$ is assigned the smallest value P6. For example, $PROB(of = (23, 43) | subcommittee = (2, 41))$. (Rule: Adjacent words in a summary are not very likely to come from sentences far apart.) Figure 2 shows a graphical representation of the above rules for assigning bigram probabilities; a sketch of the corresponding computation follows below. The nodes in the figure represent possible positions in the document, and the edges carry the probability of moving from one node to another. These bigram probabilities are used to find the most likely position sequence in the next step. The values of P1-P6 are set experimentally. In our experiments, the maximal value is set to 1, and the others are usually assigned evenly decreasing values: 0.9, 0.8, and so on. These values, however, can be adjusted for different corpora: we choose approximately optimal values for P1-P6 by testing different settings and keeping those that give the best performance.", "cite_spans": [], "ref_spans": [ { "start": 184, "end": 192, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "The Hidden Markov Model", "sec_num": "3.2" },
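As forecast above, the six rules translate almost directly into a lookup function. The sketch below is our own illustrative rendering, not code from the paper; it assumes CONST = 3 and the evenly decreasing values 1.0, 0.9, ..., 0.5 for P1-P6:

```python
from typing import Tuple

Position = Tuple[int, int]  # (SNUM, WNUM)

CONST = 3  # small constant, e.g., 3 or 5
# Illustrative values: maximal value 1, the rest evenly decreasing.
P1, P2, P3, P4, P5, P6 = 1.0, 0.9, 0.8, 0.7, 0.6, 0.5

def transition_prob(prev: Position, nxt: Position) -> float:
    """PROB(I_{i+1} = nxt | I_i = prev) under the six heuristic rules."""
    (s1, w1), (s2, w2) = prev, nxt
    if s1 == s2 and w1 == w2 - 1:
        return P1  # adjacent words in the document
    if s1 == s2 and w1 < w2 - 1:
        return P2  # same sentence, order kept (sentence reduction)
    if s1 == s2 and w1 > w2:
        return P3  # same sentence, order reversed (syntactic transformation)
    if s2 - CONST < s1 < s2:
        return P4  # nearby sentences, order kept (sentence combination)
    if s2 < s1 < s2 + CONST:
        return P5  # nearby sentences, order reversed
    return P6      # sentences far apart

# Example from Figure 1: transition_prob((2, 40), (2, 41)) returns P1.
```

Because the six values are fixed by hand rather than estimated from data, they need not sum to one; this is the unconventional aspect of the model noted in Section 3.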
{ "text": "Figure 2 can be viewed as a very abstract representation of our HMM for decomposition. Each word position in the figure represents a state in the HMM. For example, (S, W) is a state, and (S, W + 1) is another state. Note that (S, W) and (S, W + 1) are relative values; the S and W in the state (S, W) take different values depending on the actual word positions in the document.", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 8, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "The Hidden Markov Model", "sec_num": "3.2" }, { "text": "[Figure 2: Graphical representation of the rules for assigning bigram probabilities. Nodes are document word positions such as (S, W), (S, W+1), (S, W+n), and (S+i, W+j); edges between nodes carry the values P1-P6, and sentences (S-CONST) and (S+CONST) bound the nearby-sentence rules.]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Hidden Markov Model", "sec_num": "3.2" }
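To complete the picture, the following sketch shows how the Viterbi step described in Section 3 might be implemented on top of the candidate-position table and the transition function sketched earlier. Again, this is our own illustrative code, not the paper's; it treats every candidate position of the first word as equally likely a priori:

```python
from typing import Callable, Dict, List, Tuple

Position = Tuple[int, int]  # (SNUM, WNUM)

def viterbi_decompose(
    summary_words: List[str],
    candidates: Dict[str, List[Position]],
    transition_prob: Callable[[Position, Position], float],
) -> List[Position]:
    """Return the most likely document position for each summary word."""
    # delta[pos]: best score of any position sequence ending at pos for the
    # current word; back[i][pos]: predecessor position achieving that score.
    delta: Dict[Position, float] = {p: 1.0 for p in candidates[summary_words[0]]}
    back: List[Dict[Position, Position]] = []
    for word in summary_words[1:]:
        new_delta: Dict[Position, float] = {}
        pointers: Dict[Position, Position] = {}
        for nxt in candidates[word]:
            best_prev = max(delta, key=lambda p: delta[p] * transition_prob(p, nxt))
            new_delta[nxt] = delta[best_prev] * transition_prob(best_prev, nxt)
            pointers[nxt] = best_prev
        delta = new_delta
        back.append(pointers)
    # Trace back from the best final position.
    pos = max(delta, key=lambda p: delta[p])
    path = [pos]
    for pointers in reversed(back):
        pos = pointers[pos]
        path.append(pos)
    return list(reversed(path))

# On the Figure 1 fragment "the communications subcommittee of", this search
# prefers ((2,39), (2,40), (2,41), (2,42)): every adjacent pair then scores P1.
```

For long sentences, multiplying many values below 1 can underflow; summing logarithms of the values is the standard remedy.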