{ "paper_id": "N03-1024", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:07:16.218037Z" }, "title": "Syntax-based Alignment of Multiple Translations: Extracting Paraphrases and Generating New Sentences", "authors": [ { "first": "Bo", "middle": [], "last": "Pang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Cornell University Ithaca", "location": { "postCode": "14853", "region": "NY", "country": "USA" } }, "email": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Southern", "location": { "addrLine": "California Marina Del Rey", "postCode": "90292", "region": "CA", "country": "USA" } }, "email": "knight@isi.edu" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Southern", "location": { "addrLine": "California Marina Del Rey", "postCode": "90292", "region": "CA", "country": "USA" } }, "email": "marcu@isi.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We describe a syntax-based algorithm that automatically builds Finite State Automata (word lattices) from semantically equivalent translation sets. These FSAs are good representations of paraphrases. They can be used to extract lexical and syntactic paraphrase pairs and to generate new, unseen sentences that express the same meaning as the sentences in the input sets. Our FSAs can also predict the correctness of alternative semantic renderings, which may be used to evaluate the quality of translations.", "pdf_parse": { "paper_id": "N03-1024", "_pdf_hash": "", "abstract": [ { "text": "We describe a syntax-based algorithm that automatically builds Finite State Automata (word lattices) from semantically equivalent translation sets. These FSAs are good representations of paraphrases. They can be used to extract lexical and syntactic paraphrase pairs and to generate new, unseen sentences that express the same meaning as the sentences in the input sets. Our FSAs can also predict the correctness of alternative semantic renderings, which may be used to evaluate the quality of translations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In the past, paraphrases have come under the scrutiny of many research communities. Information retrieval researchers have used paraphrasing techniques for query reformulation in order to increase the recall of information retrieval engines (Sparck Jones and Tait, 1984) . Natural language generation researchers have used paraphrasing to increase the expressive power of generation systems (Iordanskaja et al., 1991; Lenke, 1994; Stede, 1999) . 
And researchers in multi-document text summarization (Barzilay et al., 1999), information extraction (Shinyama et al., 2002), and question answering (Lin and Pantel, 2001; Hermjakob et al., 2002) have focused on identifying and exploiting paraphrases in order to recognize redundancies, detect alternative formulations of the same meaning, and improve the performance of question answering systems.", "cite_spans": [ { "start": 259, "end": 270, "text": "Tait, 1984)", "ref_id": "BIBREF14" }, { "start": 391, "end": 417, "text": "(Iordanskaja et al., 1991;", "ref_id": "BIBREF8" }, { "start": 418, "end": 430, "text": "Lenke, 1994;", "ref_id": "BIBREF10" }, { "start": 431, "end": 443, "text": "Stede, 1999)", "ref_id": "BIBREF15" }, { "start": 499, "end": 522, "text": "(Barzilay et al., 1999)", "ref_id": "BIBREF4" }, { "start": 548, "end": 571, "text": "(Shinyama et al., 2002)", "ref_id": "BIBREF13" }, { "start": 597, "end": 619, "text": "(Lin and Pantel, 2001;", "ref_id": "BIBREF11" }, { "start": 620, "end": 643, "text": "Hermjakob et al., 2002)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In previous work (Barzilay and McKeown, 2001; Lin and Pantel, 2001; Shinyama et al., 2002), paraphrases are represented as sets or pairs of semantically equivalent words, phrases, and patterns. Although this is adequate in the context of some applications, it is clearly too weak from a generative perspective. Assume, for example, that we know that the text pairs (stock market rose, stock market gained) and (stock market rose, stock prices rose) have the same meaning. If we memorized only these two pairs, it would be impossible to infer that, in fact, consistent with our intuition, any of the following sets of phrases are also semantically equivalent: {stock market rose, stock market gained, stock prices rose, stock prices gained} and {stock market, stock prices} in the context of rose or gained; {market rose}, {market gained}, {prices rose}, and {prices gained} in the context of stock; and so on.", "cite_spans": [ { "start": 17, "end": 45, "text": "(Barzilay and McKeown, 2001;", "ref_id": "BIBREF3" }, { "start": 46, "end": 67, "text": "Lin and Pantel, 2001;", "ref_id": "BIBREF11" }, { "start": 68, "end": 90, "text": "Shinyama et al., 2002)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we propose solutions for two problems: the problem of paraphrase representation and the problem of paraphrase induction. We propose a new, finite-state-based representation of paraphrases that enables one to compactly encode large numbers of paraphrases. We also propose algorithms that automatically derive such representations from inputs that are now routinely released in conjunction with large-scale machine translation evaluations (DARPA, 2002): multiple English translations of many foreign language texts. For instance, when given as input the 11 semantically equivalent English translations in Figure 1, our algorithm automatically induces the FSA in Figure 2, which compactly represents 49 distinct renderings of the same meaning. Our FSAs capture both lexical paraphrases, such as {fighting, battle} and {died, were killed}, and structural paraphrases, such as {last week's fighting, the battle of last week}. 
The contexts in which these are correct paraphrases are also conveniently captured in the representation.", "cite_spans": [ { "start": 451, "end": 464, "text": "(DARPA, 2002)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 618, "end": 626, "text": "Figure 1", "ref_id": null }, { "start": 676, "end": 684, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In previous work, Langkilde and Knight (1998) used word lattices for language generation, but their method involved hand-crafted rules. Bangalore et al. (2001) and Barzilay and Lee (2002) both applied the technique of multi-sequence alignment (MSA) to align parallel corpora and produced similar FSAs. For their purposes, they mainly need to ensure the correctness of the consensus among different translations, so that different constituent orderings in the input sentences do not pose a serious problem. In contrast, we want to ensure the correctness of all paths represented by the FSAs, and a direct application of MSA in the presence of different constituent orderings can be problematic. For example, when given as input the same sentences in Figure 1, one instantiation of the MSA algorithm produces the FSA in Figure 3, which contains many \"bad\" paths such as the battle of last week's fighting took at least 12 people lost their people died in the fighting last week's fighting (this issue is taken up by Barzilay and Lee (2003)). But we chose to approach this problem from another direction. As a result, we propose a new syntax-based algorithm to produce FSAs.", "cite_spans": [ { "start": 18, "end": 45, "text": "Langkilde and Knight (1998)", "ref_id": "BIBREF9" }, { "start": 136, "end": 159, "text": "Bangalore et al. (2001)", "ref_id": "BIBREF0" }, { "start": 164, "end": 187, "text": "Barzilay and Lee (2002)", "ref_id": "BIBREF1" }, { "start": 866, "end": 890, "text": "(Barzilay and Lee (2003)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 741, "end": 749, "text": "Figure 1", "ref_id": null }, { "start": 811, "end": 819, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we first introduce the multiple translation corpus that we use in our experiments (see Section 2). We then present the algorithms that we developed to induce finite-state paraphrase representations from such data (see Section 3). An important part of the paper is dedicated to evaluating the quality of the finite-state representations that we derive (see Section 4). Since our representations encode thousands and sometimes millions of equivalent verbalizations of the same meaning, we use both manual and automatic evaluation techniques. Some of the automatic evaluations we perform are novel as well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The data we use in this work is the LDC-available Multiple-Translation Chinese (MTC) Corpus developed for machine translation evaluation, which contains 105 news stories (993 sentences) from three sources of journalistic Mandarin Chinese text. These stories were independently translated into English by 11 translation agencies. Each sentence group, which consists of 11 semantically equivalent translations, is a rich source for learning lexical and structural paraphrases. 
In our experiments, we use 899 of the sentence groups; the sentence groups with sentences longer than 45 words were dropped.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "2" }, { "text": "Our syntax-based alignment algorithm, whose pseudocode is shown in Figure 4, works in three steps. In the first step (lines 1-5 in Figure 4), we parse every sentence in a sentence group and merge all resulting parse trees into a parse forest. In the second step (line 6), we extract

1. ParseForest = ∅
2. foreach s ∈ SentenceGroup
3.    t = parseTree(s);
4.    ParseForest = Merge(ParseForest, t);
5. endfor
6. Extract FSA from ParseForest;
7. Squeeze FSA;

Figure 4: The Syntax-Based Alignment Algorithm.", "cite_spans": [], "ref_spans": [ { "start": 67, "end": 75, "text": "Figure 4", "ref_id": null }, { "start": 132, "end": 140, "text": "Figure 4", "ref_id": null }, { "start": 451, "end": 459, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "A Syntax-Based Alignment Algorithm", "sec_num": "3" }, { "text": "an FSA from the parse forest, and then we compact it further using a limited form of bottom-up alignment, which we call squeezing (line 7). In what follows, we describe each step in turn.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Syntax-Based Alignment Algorithm", "sec_num": "3" }, { "text": "Top-down merging. Given a sentence group, we pass each of the 11 sentences to Charniak's (2000) parser to get 11 parse trees. The first step in the algorithm is to merge these parse trees into one parse-forest-like structure using a top-down process.", "cite_spans": [ { "start": 78, "end": 95, "text": "Charniak's (2000)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "A Syntax-Based Alignment Algorithm", "sec_num": "3" }, { "text": "Let's consider a simple case in which the parse forest contains a single tree, Tree 1 in Figure 5, and we are adding Tree 2 to it. Since the two trees correspond to sentences that have the same meaning, and since both trees expand an S node into an NP and a VP, it is reasonable to assume that NP1 is a paraphrase of NP2 and VP1 is a paraphrase of VP2. We merge NP1 with NP2 and VP1 with VP2 and continue the merging process on each of the subtrees recursively, until we either reach the leaves of the trees or the two nodes that we examine are expanded using different syntactic rules.", "cite_spans": [], "ref_spans": [ { "start": 91, "end": 99, "text": "Figure 5", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "A Syntax-Based Alignment Algorithm", "sec_num": "3" }, { "text": "When we apply this process to the trees in Figure 5, the NP nodes are merged all the way down to the leaves, and we get \"12\" as a paraphrase of \"twelve\" and \"people\" as a paraphrase of \"persons\"; in contrast, the two VPs are expanded in different ways, so no merging is done beyond this level, and we are left with the information that \"were killed\" is a paraphrase of \"died\".", "cite_spans": [], "ref_spans": [ { "start": 43, "end": 51, "text": "Figure 5", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "A Syntax-Based Alignment Algorithm", "sec_num": "3" }, { "text": "We repeat this top-down merging procedure with each of the 11 parse trees in a sentence group. So far, only constituents with the same syntactic type are treated as paraphrases. 
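To make the recursion concrete, here is a minimal sketch of the merging step (an illustration under simplifying assumptions, not our actual implementation: Node, rule, and merge are hypothetical names, and the keyword check introduced below is omitted):

```python
class Node:
    """One node of the parse forest (illustrative representation)."""
    def __init__(self, label, children=None):
        self.label = label             # syntactic category (e.g. "NP") or a word
        self.children = children or []
        self.alternatives = []         # paraphrase subtrees merged into this node

def rule(node):
    """The expansion rule at a node: its label plus its child labels."""
    return (node.label, tuple(c.label for c in node.children))

def merge(a, b):
    """Top-down merge of tree b into forest a: descend while the two
    nodes are expanded by the same syntactic rule, and record b as a
    paraphrase alternative of a as soon as the expansions differ."""
    if a.children and b.children and rule(a) == rule(b):
        for ca, cb in zip(a.children, b.children):
            merge(ca, cb)
    else:
        a.alternatives.append(b)
```

With Tree 1 and Tree 2 from Figure 5, the recursion bottoms out as described in the text: the NP subtrees are merged down to the leaves, where "twelve" is recorded as an alternative of "12", while the differing VP expansions stop the recursion and leave "were killed" and "died" as alternatives of each other.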
However, later we shall see that we can match word spans whose syntactic types differ.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Syntax-Based Alignment Algorithm", "sec_num": "3" }, { "text": "Keyword checking. The matching process described above appears quite strict: the expansions must match exactly for two nodes to be merged. But consider the following parse trees:

1. (S (NP1 people) (VP1 were killed in this battle))
2. (S (NP2 this battle) (VP2 killed people))

If we applied the algorithm described above, we would mistakenly align NP1 with NP2 and VP1 with VP2; the algorithm described so far makes no use of lexical information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Syntax-Based Alignment Algorithm", "sec_num": "3" }, { "text": "To prevent such erroneous alignments, we also implement a simple keyword checking procedure. We note that since the word \"battle\" appears in both VP1 and NP2, this can serve as evidence against the merging of (NP1, NP2) and (VP1, VP2). A similar argument can be constructed for the word \"people\". So in this example we actually have double evidence against merging; in general, one such clue suffices to stop the merging.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Syntax-Based Alignment Algorithm", "sec_num": "3" }, { "text": "Our keyword checking procedure acts as a filter. A list of keywords is maintained for each node in a syntactic tree. This list contains all the nouns, verbs, and adjectives that are spanned by the node. Before merging two nodes, we check whether the keyword lists associated with them share words with the lists of non-corresponding nodes. That is, suppose we just merged nodes A and B, and they are expanded with the same syntactic rule into A1 A2 ... An and B1 B2 ... Bn, respectively; before we merge each Ai with Bi, we check for each Bi whether its keyword list shares words with any Aj (j ≠ i). If it does not, we continue the top-down merging process; otherwise, we stop.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Syntax-Based Alignment Algorithm", "sec_num": "3" }, { "text": "The process of mapping Parse Forests into Finite State Automata is simple: we traverse the parse forest top-down and create alternative paths for every merged node. For example, the parse forest in Figure 5 is mapped into the FSA shown at the bottom of the same figure. In the FSA, there is a word associated with each edge. Different paths between any two nodes are assumed to be paraphrases of each other. Each path that starts from the BEGIN node and ends at the END node corresponds to either an original input sentence or a paraphrase sentence.", "cite_spans": [], "ref_spans": [ { "start": 205, "end": 213, "text": "Figure 5", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Mapping Parse Forests into Finite State Automata.", "sec_num": null }, { "text": "Squeezing. Since we adopted a very strict matching criterion in top-down merging, a small difference in the syntactic structure of two trees prevents some legitimate mergings from taking place. This behavior is also exacerbated by errors in syntactic parsing. Hence, for instance, the three edges labeled detroit at the left end of the top FSA in Figure 6 were kept apart. To compensate for this effect, our algorithm implements an additional step, which we call squeezing. If two different edges that go into (or out of) the same node in an FSA are labeled with the same word, the nodes on the other end of the edges are merged. 
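A minimal sketch of this operation, assuming the FSA is represented as a set of (source, word, target) edge triples (a hypothetical representation chosen for illustration; for simplicity the sketch also assumes that merging never introduces a cycle):

```python
def merge_nodes(edges, keep, drop):
    """Redirect every edge touching `drop` onto `keep`."""
    r = lambda n: keep if n == drop else n
    return {(r(s), w, r(d)) for s, w, d in edges}

def squeeze_once(edges):
    """Find two edges with the same word into (or out of) the same node
    and merge the nodes at their other ends; report whether we changed."""
    incoming, outgoing = {}, {}
    for s, w, d in edges:
        if (w, d) in incoming and incoming[(w, d)] != s:
            return merge_nodes(edges, incoming[(w, d)], s), True
        incoming[(w, d)] = s
        if (s, w) in outgoing and outgoing[(s, w)] != d:
            return merge_nodes(edges, outgoing[(s, w)], d), True
        outgoing[(s, w)] = d
    return edges, False

def squeeze(edges):
    """Apply the operation exhaustively (line 7 of Figure 4)."""
    changed = True
    while changed:
        edges, changed = squeeze_once(edges)
    return edges
```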
We apply this operation exhaustively over the FSAs produced by the top-down merging procedure. Figure 6 illustrates the effect of this operation: the FSA at the top of the figure is compressed into the more compact FSA shown at the bottom. Note that in addition to removing redundant edges, this also gives us paraphrases that were not available in the FSA before squeezing (e.g. {reduced to rubble, blasted to ground}). Therefore, the squeezing operation, which implements a limited form of lexically driven alignment similar to that exploited by MSA algorithms, leads to FSAs that have a larger number of paths and paraphrases.", "cite_spans": [], "ref_spans": [ { "start": 343, "end": 351, "text": "Figure 6", "ref_id": null }, { "start": 721, "end": 729, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Mapping Parse Forests into Finite State Automata.", "sec_num": null }, { "text": "The evaluation of our finite-state representations and algorithm requires careful consideration. Obviously, what counts as a good result largely depends on the application one has in mind. If we are extracting paraphrases for question reformulation, it does not really matter if we output a few syntactically incorrect paraphrases, as long as we produce a large number of semantically correct ones. If we want to use the FSA for MT evaluation (for example, comparing a sentence to be evaluated with the possible paths in the FSA), we would want all paths to be relatively good (the case we focus on in this paper), while in some other applications we may only care about the quality of the best path (a case not addressed in this paper). Section 4.1 concentrates on evaluating the paraphrase pairs that can be extracted from the FSAs built by our system, while Section 4.2 is dedicated to evaluating the FSAs directly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4" }, { "text": "By construction, different paths between any two nodes in the FSA representations that we derive are paraphrases (in the context in which the nodes occur). To evaluate our algorithm, we extract paraphrases from our FSAs and ask human judges to evaluate their correctness. We compare the paraphrases we collect with paraphrases that are derivable from the same corpus using a co-training-based paraphrase extraction algorithm (Barzilay and McKeown, 2001). To the best of our knowledge, this is the most relevant work to compare against, since it aims at extracting paraphrase pairs from a parallel corpus. Unlike our syntax-based algorithm, which treats a sentence as a tree structure and uses this hierarchical structural information to guide the merging process, their algorithm treats a sentence as a sequence of phrases with surrounding contexts (no hierarchical structure involved) and co-trains classifiers to detect paraphrases and contexts for paraphrases. It is interesting to compare the results of two such different algorithms.", "cite_spans": [ { "start": 424, "end": 451, "text": "(Barzilay and McKeown, 2001", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Human-based evaluation of paraphrases", "sec_num": "4.1.1" }, { "text": "For the purpose of this experiment, we randomly selected 300 paraphrase pairs (S_syn) from the FSAs produced by our system. 
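Reading such pairs off an automaton is straightforward; the following sketch (reusing the edge-triple representation from the earlier sketches, with a path cap of our own to keep the enumeration bounded) yields the candidate pairs from which a sample can be drawn:

```python
from itertools import combinations

def paths_between(edges, src, dst, cap=1000):
    """Word sequences along paths from src to dst (DFS; the FSA is
    acyclic, so the search terminates)."""
    out = {}
    for s, w, d in edges:
        out.setdefault(s, []).append((w, d))
    found, stack = [], [(src, ())]
    while stack and len(found) < cap:
        node, words = stack.pop()
        if node == dst and words:
            found.append(words)
            continue
        for w, d in out.get(node, ()):
            stack.append((d, words + (w,)))
    return found

def paraphrase_pairs(edges, nodes):
    """Different paths between the same two nodes are paraphrases
    (in the context in which the two nodes occur)."""
    for a, b in combinations(nodes, 2):
        for src, dst in ((a, b), (b, a)):
            for p1, p2 in combinations(set(paths_between(edges, src, dst)), 2):
                yield p1, p2
```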
Since the co-training-based algorithm of Barzilay and McKeown (2001) takes a parallel corpus as input, we created out of the MTC corpus 55 × 993 sentence pairs (each equivalent translation set of cardinality 11 was mapped into (11 choose 2) = 55 equivalent translation pairs). Regina Barzilay kindly provided us with the list of paraphrases extracted by their algorithm from this parallel corpus, from which we randomly selected another set of 300 paraphrases (S_cotr).

Table 1: A comparison of the correctness of the paraphrases produced by the syntax-based alignment (S_syn) and co-training-based (S_cotr) algorithms.", "cite_spans": [ { "start": 166, "end": 193, "text": "Barzilay and McKeown (2001)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 576, "end": 583, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Human-based evaluation of paraphrases", "sec_num": "4.1.1" }, { "text": "The resulting 600 paraphrase pairs were mixed and presented in random order to four human judges. Each judge was asked to assess the correctness of 150 paraphrase pairs (75 pairs from each system) based on the context, i.e., the sentence group, from which the paraphrase pair was extracted. Judges were given three choices: \"Correct\", for perfect paraphrases; \"Partially correct\", for paraphrases in which there is only a partial overlap between the meanings of the two phrases (e.g. while {saving set, aid package} is a correct paraphrase pair in the given context, {set, aid package} is considered only partially correct); and \"Incorrect\". The results of the evaluation are presented in Table 1.", "cite_spans": [], "ref_spans": [ { "start": 684, "end": 691, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Human-based evaluation of paraphrases", "sec_num": "4.1.1" }, { "text": "Although the four evaluators were judging four different sets, each clearly rated a higher percentage of the outputs produced by the syntax-based alignment algorithm as \"Correct\". We should note that there are parameters specific to the co-training algorithm that we did not tune to work for this particular corpus. In addition, the co-training algorithm recovered more paraphrase pairs: the syntax-based algorithm extracted 8666 pairs in total, with 1051 of them extracted at least twice (i.e., more or less reliably), while the corresponding numbers for the co-training algorithm are 2934 out of a total of 16993 pairs. This means we are not comparing accuracy at the same recall level.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human-based evaluation of paraphrases", "sec_num": "4.1.1" }, { "text": "Aside from evaluating the correctness of the paraphrases, we are also interested in the degree of overlap between the paraphrase pairs discovered by the two algorithms. We find that out of the 1051 paraphrase pairs that were extracted from more than one sentence group by the syntax-based algorithm, 62.3% were also extracted by the co-training algorithm; and out of the 2934 paraphrase pairs from the results of the co-training algorithm, 33.4% were also extracted by the syntax-based algorithm. 
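These overlap figures correspond to a simple set computation, sketched below under the assumption that a paraphrase pair is unordered:

```python
def overlap(pairs_a, pairs_b):
    """Fraction of the pairs in pairs_a that also occur in pairs_b,
    treating each paraphrase pair as unordered."""
    a = {frozenset(p) for p in pairs_a}
    b = {frozenset(p) for p in pairs_b}
    return len(a & b) / len(a) if a else 0.0
```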
This shows that in spite of the very different cues the two algorithms rely on, they do discover a lot of common pairs.", "cite_spans": [], "ref_spans": [ { "start": 677, "end": 684, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Human-based evaluation of paraphrases", "sec_num": "4.1.1" }, { "text": "range of ASL    1-10    10-20   20-30   30-45
recall          30.7%   16.3%   7.8%    3.8%
Table 2: Recall of WordNet-consistent synonyms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human-based evaluation of paraphrases", "sec_num": "4.1.1" }, { "text": "In order to (roughly) estimate the recall (of lexical synonyms) of our algorithm, we use the synonymy relation in WordNet to extract all the synonym pairs present in our corpus. This extraction process yields the list of all WordNet-consistent synonym pairs that are present in our data. (Note that some of the pairs identified as synonyms by WordNet, like \"follow/be\", are not really synonyms in the contexts defined in our data set, which may lead to an artificial deflation of our recall estimate.) Once we have the list of WordNet-consistent paraphrases, we can check how many of them are recovered by our method. Table 2 gives the percentage of pairs recovered for each range of average sentence length (ASL) in the group.", "cite_spans": [], "ref_spans": [ { "start": 615, "end": 622, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "WordNet-based analysis of paraphrases", "sec_num": "4.1.2" }, { "text": "Not surprisingly, we get higher recall with shorter sentences, since long sentences tend to differ in their syntactic structures fairly high up in the parse trees, which leads to fewer mergings at the lexical level. The recall on the task of extracting lexical synonyms, as defined by WordNet, is not high. But then, this is not what our algorithm was designed for. It is worth noting that the syntax-based algorithm also picks up many paraphrases that are not identified as synonyms in WordNet. Out of the 3217 lexical paraphrases that are learned by our system, only 493 (15.3%) are WordNet synonyms, which suggests that paraphrasing is a much richer and looser relation than synonymy. However, the WordNet-based recall figures suggest that WordNet can be used as an additional source of information to be exploited by our algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WordNet-based analysis of paraphrases", "sec_num": "4.1.2" }, { "text": "We noted before that apart from being a natural representation of paraphrases, the FSAs that we build have their own merit and deserve to be evaluated directly. Since our FSAs contain large numbers of paths, we design automatic evaluation metrics to assess their quality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluating the FSA directly", "sec_num": "4.2" }, { "text": "If we take our claims seriously, each path in our FSAs that connects the start and end nodes should correspond to a well-formed sentence. We are interested in both quantity (how many sentences our automata are able to produce) and quality (how good these sentences are). To answer the first question, we simply count the number of paths produced by our FSAs. Table 3 gives the statistics on the number of paths produced by our FSAs, reported by the average length of sentences in the input sentence groups. 
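Since the automata are acyclic, these path counts can be obtained without enumeration, by dynamic programming over a topological order (a sketch, again over the hypothetical (source, word, target) edge triples):

```python
from collections import defaultdict

def count_paths(edges, begin, end):
    """Number of distinct BEGIN-to-END paths in an acyclic FSA."""
    out, indeg, nodes = defaultdict(list), defaultdict(int), set()
    for s, _, d in edges:
        out[s].append(d)
        indeg[d] += 1
        nodes.update((s, d))
    order = [n for n in nodes if indeg[n] == 0]   # Kahn's algorithm
    for n in order:
        for d in out[n]:
            indeg[d] -= 1
            if indeg[d] == 0:
                order.append(d)
    paths = defaultdict(int)
    paths[begin] = 1
    for n in order:               # propagate counts in topological order
        for d in out[n]:
            paths[d] += paths[n]
    return paths[end]
```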
For example, the sentence groups whose average sentence length is between 10 and 20 words produce, on average, automata that can yield 4468 alternative, semantically equivalent formulations.", "cite_spans": [], "ref_spans": [ { "start": 375, "end": 382, "text": "Table 4", "ref_id": null }, { "start": 406, "end": 413, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Language Model-based evaluation", "sec_num": "4.2.1" }, { "text": "Note that if we always got the same degree of merging per word across all sentence groups, the number of paths would tend to increase with sentence length. This is not the case here: apparently, we are getting less merging with longer sentences. But still, given 11 sentences, we are capable of generating hundreds, thousands, and in some cases even millions of sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Model-based evaluation", "sec_num": "4.2.1" }, { "text": "Obviously, we should not get too happy with our ability to boost the number of equivalent renderings if those renderings are incorrect. To assess the quality of the FSAs generated by our algorithm, we use a language model-based metric.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Model-based evaluation", "sec_num": "4.2.1" }, { "text": "We train a 4-gram model over one year of the Wall Street Journal using the CMU-Cambridge Statistical Language Modeling toolkit (v2). For each sentence group SG, we use this language model to estimate the average entropy of the 11 original sentences in that group (ent(SG)). We also compute the average entropy of all the sentences in the corresponding FSA built by our syntax-based algorithm (ent(FSA)). As the statistics in Table 4 show, there is little difference between the average entropy of the original sentences and the average entropy of the paraphrase sentences we produce. To better calibrate this result, we compare it with the average entropy of 6 corresponding machine translation outputs (ent(MTS)), which were also made available by LDC in conjunction with the same corpus. As one can see, the difference between the average entropy of the machine-produced outputs and the average entropy of the original 11 sentences is much higher than the difference between the average entropy of the FSA-produced outputs and the average entropy of the original 11 sentences. Obviously, this does not mean that our FSAs produce only well-formed sentences. But it does mean that, according to a language model, our FSAs produce sentences that look more like human-produced sentences than machine-produced ones.", "cite_spans": [], "ref_spans": [ { "start": 426, "end": 433, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Language Model-based evaluation", "sec_num": "4.2.1" }, { "text": "Not surprisingly, the language model we used in Section 4.2.1 is far from being a perfect judge of sentence quality. Recall the example of a \"bad\" path we gave in Section 1: the battle of last week's fighting took at least 12 people lost their people died in the fighting last week's fighting. Our 4-gram language model will not find any fault with this sentence. Notice, however, that some words (such as \"fighting\" and \"people\") appear at least twice in this path, although they are not repeated in any of the source sentences. These erroneous repetitions indicate mis-alignment. 
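Detecting such repetitions is easy to mechanize; a small sketch (whitespace tokenization is a simplifying assumption of ours):

```python
from collections import Counter

def unrepeated_words(group):
    """Words that never occur more than once in any sentence of the group."""
    vocab = {w for s in group for w in s.split()}
    return {w for w in vocab
            if all(Counter(s.split())[w] <= 1 for s in group)}

def is_bad_path(path_words, group):
    """True if the path repeats a word that no source sentence repeats."""
    counts = Counter(path_words)
    return any(counts[w] >= 2 for w in unrepeated_words(group))
```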
By measuring the frequency of words that are mistakenly repeated, we can now examine quantitatively whether a direct application of the MSA algorithm suffers from different constituent orderings, as we expected.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word repetition analysis", "sec_num": "4.2.2" }, { "text": "For each sentence group, we get a list of words that never appear more than once in any sentence in this group. Given a word from this list and the FSA built from this group, we count the total number of paths that contain this word (C) and the number of paths in which this word appears at least twice (C_r, i.e., the number of erroneous repetitions). We define the repetition ratio to be C_r/C, which is the proportion of \"bad\" paths in this FSA according to this word. If we compute this ratio for all the words in the lists of the first 499 groups [2] and the corresponding FSAs produced by an instantiation of the MSA algorithm [3], the average repetition ratio is 0.0304992 (14.76% of the words have a non-zero repetition ratio, and the average ratio for these words is 0.206671). In comparison, the average repetition ratio for our algorithm is 0.0035074 (2.16% of the words have a non-zero repetition ratio [4], and the average ratio for these words is 0.162309). The presence of different constituent orderings does pose a more serious problem for the MSA algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word repetition analysis", "sec_num": "4.2.2" }, { "text": "Recently, Papineni et al. (2002) have proposed an automatic MT system evaluation technique (the BLEU score). Given an MT system output and a set of reference translations, one can estimate the \"goodness\" of the MT output by measuring the n-gram overlap between the output and the reference set. The higher the overlap, i.e., the closer an output string is to a set of reference translations, the better a translation it is.

range   0-1   1-2   2-3   3-4   4-5
count   546   256   80    15    2
Table 5: Statistics for ed_gain

We hypothesize that our FSAs provide a better representation against which the outputs of MT systems can be evaluated, because they encode not just a few but thousands of equivalent semantic formulations of the desired meaning. Ideally, if the FSAs we build accepted all and only the correct renderings of a given meaning, we could just feed a test sentence to the reference FSA and see whether it is accepted. Since this is not a realistic expectation, we measure the edit distance between a string and an FSA instead: the smaller this distance is, the closer the string is to the meaning represented by the FSA.", "cite_spans": [ { "start": 10, "end": 32, "text": "Papineni et al. (2002)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 202, "end": 209, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "MT-based evaluation", "sec_num": "4.2.3" }, { "text": "To assess whether our FSAs are more appropriate representations for evaluating the output of MT systems, we perform the following experiment. For each sentence group, we hold out one sentence as a test sentence, and try to evaluate how much of it can be predicted from the other 10 sentences. We compare two different ways of estimating this predictive power. (a) We compute the edit distance between the test sentence and each of the other 10 sentences in the set. The minimum of these distances is ed(input). 
(b) We use dynamic programming to efficiently compute the minimum distance (ed(FSA)) between the test sentence and all the paths in the FSA built from the other 10 sentences. The smaller the edit distance is, the better we are predicting the test sentence. Mathematically, the difference between these two measures, ed(input) − ed(FSA), characterizes how much is gained in predictive power by building the FSA.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MT-based evaluation", "sec_num": "4.2.3" }, { "text": "We carry out the experiment described above in a \"leave-one-out\" fashion (i.e., each sentence serves as a test sentence once). Now let ed_gain be the average of ed(input) − ed(FSA) over the 11 runs for a given group. We compute this for all 899 groups and find the mean of ed_gain to be 0.91 (std. dev. = 0.78). Table 5 gives the count of groups whose ed_gain falls into each range. We can see that the majority of the ed_gain values fall under 2.", "cite_spans": [], "ref_spans": [ { "start": 312, "end": 319, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "MT-based evaluation", "sec_num": "4.2.3" }, { "text": "We are also interested in the relation between the predictive power of the FSAs and the number of reference translations they are derived from. For a given group, we randomly order the sentences in it, set the last one as the test sentence, and try to predict it with the first 1, 2, 3, ..., 10 sentences. We investigate whether more sentences yield an increase in the predictive power. Let ed(FSA_n) be the edit distance from the test sentence to the FSA built on the first n sentences; similarly, let ed(input_n) be the minimum edit distance from the test sentence to an input set that consists of only the first n sentences.

n | ed(FSA_n) − ed(FSA_10): mean, std. dev | ed(input_n) − ed(FSA_n): mean, std. dev
Table 6: Effect of monotonically increasing the number of reference sentences

Table 6 reports the effect of using different numbers of reference translations. The first column shows that each translation contributes to the predictive power of our FSA: even when we add the tenth translation to our FSA, we still improve its predictive power. The second column shows that the more sentences we add to the FSA, the larger the difference between its predictive power and that of a simple sentence set. The results in Table 6 suggest that our FSA may be used to refine the BLEU metric (Papineni et al., 2002).", "cite_spans": [ { "start": 1296, "end": 1319, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 425, "end": 432, "text": "Table 6", "ref_id": null }, { "start": 791, "end": 798, "text": "Table 6", "ref_id": null }, { "start": 1220, "end": 1227, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "MT-based evaluation", "sec_num": "4.2.3" }, { "text": "In this paper, we presented a new syntax-based algorithm that learns paraphrases from a newly available dataset. The multiple translation corpus that we use in this paper is the first in a series of similar corpora that are built and made publicly available by LDC in the context of DARPA-sponsored MT evaluations. The algorithm we proposed constructs finite-state representations of paraphrases that are useful in many contexts: to induce large lists of lexical and structural paraphrases; to generate semantically equivalent renderings of a given meaning; and to estimate the quality of machine translation systems. 
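As an illustration of the last of these uses, the edit distance ed(FSA) from Section 4.2.3 can be computed with a dynamic program over the lattice; a sketch, with the same hypothetical edge-triple representation as the earlier sketches and unit costs as a simplifying assumption:

```python
from collections import defaultdict

def ed_fsa(edges, begin, end, tokens):
    """Minimum word-level edit distance between `tokens` and any
    BEGIN-to-END path of an acyclic FSA."""
    INF = float("inf")
    out, indeg, nodes = defaultdict(list), defaultdict(int), {begin, end}
    for s, w, d in edges:
        out[s].append((w, d))
        indeg[d] += 1
        nodes.update((s, d))
    order = [n for n in nodes if indeg[n] == 0]   # topological order (Kahn)
    for n in order:
        for _, d in out[n]:
            indeg[d] -= 1
            if indeg[d] == 0:
                order.append(d)
    m = len(tokens)
    dist = {n: [INF] * (m + 1) for n in nodes}
    dist[begin][0] = 0
    for n in order:
        row = dist[n]
        for j in range(1, m + 1):                 # skip a sentence token
            row[j] = min(row[j], row[j - 1] + 1)
        for w, d in out[n]:
            drow = dist[d]
            for j in range(m + 1):                # skip the FSA word
                drow[j] = min(drow[j], row[j] + 1)
            for j in range(1, m + 1):             # match / substitute
                drow[j] = min(drow[j], row[j - 1] + (w != tokens[j - 1]))
    return dist[end][m]
```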
More experiments need to be carried out in order to assess extrinsically whether the FSAs we produce can be used to yield higher agreement scores between human and automatic assessments of translation quality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion & Future Work", "sec_num": "5" }, { "text": "In our future work, we wish to experiment with more flexible merging algorithms and to better integrate the top-down and bottom-up processes that are used to induce FSAs. We also wish to extract more abstract paraphrase patterns from the current representation. Such patterns are more likely to get reused, which would help us get reliable statistics for them in the extraction phase, and they also have a better chance of being applicable to unseen data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion & Future Work", "sec_num": "5" }, { "text": "[2] MSA runs very slowly for longer sentences, and we believe using the first 499 groups should be enough to make our point. [3] We thank Regina Barzilay for providing us with this set of results. [4] Note that FSAs produced right after keyword checking will not yield any non-zero repetition ratios. However, if an FSA contains mis-alignments that were not prevented by keyword checking, it may contain paths with erroneous repetitions of words after squeezing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank Hal Daumé III, Ulrich Germann, and Ulf Hermjakob for help and discussions; Eric Breck, Hubert Chen, Stephen Chong, Dan Kifer, and Kevin O'Neill for participating in the human evaluation; and the Cornell NLP group and the reviewers for their comments on this paper. We especially want to thank Regina Barzilay and Lillian Lee for many valuable suggestions and help at various stages of this work. Portions of this work were done while the first author was visiting the Information Sciences Institute. This work was supported by the Advanced Research and Development Activity (ARDA)'s Advanced Question Answering for Intelligence (AQUAINT) Program under contract number MDA908-02-C-0007, the National Science Foundation under ITR/IM grant IIS-0081334, and a Sloan Research Fellowship to Lillian Lee. Any opinions, findings, and conclusions or recommendations expressed above are those of the authors and do not necessarily reflect the views of the National Science Foundation or the Sloan Foundation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Computing consensus translation from multiple machine translation systems", "authors": [ { "first": "Srinivas", "middle": [], "last": "Bangalore", "suffix": "" }, { "first": "German", "middle": [], "last": "Bordel", "suffix": "" }, { "first": "Giuseppe", "middle": [], "last": "Riccardi", "suffix": "" } ], "year": 2001, "venue": "Workshop on Automatic Speech Recognition and Understanding", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Srinivas Bangalore, German Bordel, and Giuseppe Riccardi. 2001. Computing consensus translation from multiple machine translation systems. 
In Workshop on Automatic Speech Recognition and Understanding.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Bootstrapping lexical choice via multiple-sequence alignment", "authors": [ { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "164--171", "other_ids": {}, "num": null, "urls": [], "raw_text": "Regina Barzilay and Lillian Lee. 2002. Bootstrapping lexical choice via multiple-sequence alignment. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 164-171.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Learning to paraphrase: An unsupervised approach using multiple-sequence alignment", "authors": [ { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2003, "venue": "Proceedings of HLT/NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Regina Barzilay and Lillian Lee. 2003. Learning to paraphrase: An unsupervised approach using multiple-sequence alignment. In Proceedings of HLT/NAACL.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Extracting paraphrases from a parallel corpus", "authors": [ { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Kathleen", "middle": [], "last": "McKeown", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the ACL/EACL", "volume": "", "issue": "", "pages": "50--57", "other_ids": {}, "num": null, "urls": [], "raw_text": "Regina Barzilay and Kathleen McKeown. 2001. Extracting paraphrases from a parallel corpus. In Proceedings of the ACL/EACL, pages 50-57.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Information fusion in the context of multi-document summarization", "authors": [ { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Kathleen", "middle": [], "last": "McKeown", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Elhadad", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the ACL", "volume": "", "issue": "", "pages": "550--557", "other_ids": {}, "num": null, "urls": [], "raw_text": "Regina Barzilay, Kathleen McKeown, and Michael Elhadad. 1999. Information fusion in the context of multi-document summarization. In Proceedings of the ACL, pages 550-557.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A maximum-entropy-inspired parser", "authors": [ { "first": "Eugene", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eugene Charniak. 2000. A maximum-entropy-inspired parser. In Proceedings of the NAACL.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "DARPA IAO Machine Translation Workshop", "authors": [ { "first": "", "middle": [], "last": "DARPA", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "DARPA. 2002. 
In DARPA IAO Machine Translation Workshop, Santa Monica, CA, July 22-23.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Natural language based reformulation resource and web exploitation for question answering", "authors": [ { "first": "Ulf", "middle": [], "last": "Hermjakob", "suffix": "" }, { "first": "Abdessamad", "middle": [], "last": "Echihabi", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the Text Retrieval Conference (TREC-2002)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ulf Hermjakob, Abdessamad Echihabi, and Daniel Marcu. 2002. Natural language based reformulation resource and web exploitation for question answering. In Proceedings of the Text Retrieval Conference (TREC-2002). November.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Lexical selection and paraphrase in a meaning-text generation model", "authors": [ { "first": "Lidija", "middle": [], "last": "Iordanskaja", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Kittredge", "suffix": "" }, { "first": "Alain", "middle": [], "last": "Polgére", "suffix": "" } ], "year": 1991, "venue": "Natural Language Generation in Artificial Intelligence and Computational Linguistics", "volume": "", "issue": "", "pages": "293--312", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lidija Iordanskaja, Richard Kittredge, and Alain Polgére. 1991. Lexical selection and paraphrase in a meaning-text generation model. In Cécile L. Paris, William R. Swartout, and William C. Mann, editors, Natural Language Generation in Artificial Intelligence and Computational Linguistics, pages 293-312. Kluwer Academic Publisher.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Generation that exploits corpus-based statistical knowledge", "authors": [ { "first": "Irene", "middle": [], "last": "Langkilde", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 1998, "venue": "Proceedings of ACL/COLING", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Irene Langkilde and Kevin Knight. 1998. Generation that exploits corpus-based statistical knowledge. In Proceedings of ACL/COLING.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Anticipating the reader's problems and the automatic generation of paraphrases", "authors": [ { "first": "Nils", "middle": [], "last": "Lenke", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the 15th International Conference on Computational Linguistics", "volume": "1", "issue": "", "pages": "319--323", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nils Lenke. 1994. Anticipating the reader's problems and the automatic generation of paraphrases. In Proceedings of the 15th International Conference on Computational Linguistics, volume 1, pages 319-323, Kyoto, Japan, August 5-9.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Discovery of inference rules for question answering", "authors": [ { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Pantel", "suffix": "" } ], "year": 2001, "venue": "Proceedings of ACM SIGKDD Conference on Knowledge Discovery and Data Mining", "volume": "", "issue": "", "pages": "323--328", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dekang Lin and Patrick Pantel. 2001. 
Discovery of inference rules for question answering. In Proceedings of ACM SIGKDD Conference on Knowledge Discovery and Data Mining 2001, pages 323-328.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Corpus-based comprehensive and diagnostic MT evaluation: Initial Arabic, Chinese, French, and Spanish results", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "John", "middle": [], "last": "Henderson", "suffix": "" }, { "first": "Florence", "middle": [], "last": "Reeder", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the Human Language Technology Conference", "volume": "", "issue": "", "pages": "124--127", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, John Henderson, and Florence Reeder. 2002. Corpus-based comprehensive and diagnostic MT evaluation: Initial Arabic, Chinese, French, and Spanish results. In Proceedings of the Human Language Technology Conference, pages 124-127, San Diego, CA, March 24-27.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Automatic paraphrase acquisition from news articles", "authors": [ { "first": "Yusuke", "middle": [], "last": "Shinyama", "suffix": "" }, { "first": "Satoshi", "middle": [], "last": "Sekine", "suffix": "" }, { "first": "Kiyoshi", "middle": [], "last": "Sudo", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the Human Language Technology Conference (HLT-02)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yusuke Shinyama, Satoshi Sekine, Kiyoshi Sudo, and Ralph Grishman. 2002. Automatic paraphrase acquisition from news articles. In Proceedings of the Human Language Technology Conference (HLT-02), San Diego, CA, March 24-27. Poster presentation.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Automatic search term variant generation", "authors": [ { "first": "Karen Sparck Jones", "middle": [], "last": "", "suffix": "" }, { "first": "John", "middle": [ "I" ], "last": "Tait", "suffix": "" } ], "year": 1984, "venue": "Journal of Documentation", "volume": "40", "issue": "1", "pages": "50--66", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karen Sparck Jones and John I. Tait. 1984. Automatic search term variant generation. Journal of Documentation, 40(1):50-66.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Lexical Semantics and Knowledge Representation in Multilingual Text Generation", "authors": [ { "first": "Manfred", "middle": [], "last": "Stede", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manfred Stede. 1999. Lexical Semantics and Knowledge Representation in Multilingual Text Generation. Kluwer Academic Publishers, Boston/Dordrecht/London.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "type_str": "figure", "text": "Figure 1: Sample Sentence Group from the Chinese-English DARPA Evaluation Corpus: 11 English translations of the same Chinese sentence. Figure 2: FSA produced by our syntax-based alignment algorithm from the input in Figure 1. Figure 3: FSA produced by a Multi-Sequence Alignment algorithm from the input in Figure 1." 
}, "FIGREF1": { "uris": null, "num": null, "type_str": "figure", "text": "Top-down merging of parse trees and FSA extraction." }, "FIGREF2": { "uris": null, "num": null, "type_str": "figure", "text": "effectIn our current implementation, a pair of synonyms can not stop an otherwise legitimate merging, but it's possible to extend our keyword checking process with the help of lexical resources such as WordNet in future work." }, "TABREF0": { "html": null, "num": null, "content": "", "type_str": "table", "text": "1. At least 12 people were killed in the battle last week. 2. At least 12 people lost their lives in last week's fighting. 3. Last week's fight took at least 12 lives. 4. The fighting last week killed at least 12. 5. The battle of last week killed at least 12 persons. 6. At least 12 persons died in the fighting last week. 7. At least 12 died in the battle last week. 8. At least 12 people were killed in the fighting last week. 9. During last week's fighting, at least 12 people died. 10. Last week at least twelve people died in the fighting. 11. Last week's fighting took the lives of twelve people." }, "TABREF5": { "html": null, "num": null, "content": "
Table 3: Statistics on Number of Paths in FSAs

Table 4: Quality judged by LM
random variable        mean       std. dev
ent(FSA) − ent(SG)    −0.11586    1.25162
ent(MTS) − ent(SG)     1.74259    1.05749
", "type_str": "table", "text": "" } } } }