{ "paper_id": "P03-1012", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:13:44.521749Z" }, "title": "A Probability Model to Improve Word Alignment", "authors": [ { "first": "Colin", "middle": [], "last": "Cherry", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Alberta Edmonton", "location": { "postCode": "T6G 2E8", "settlement": "Alberta", "country": "Canada" } }, "email": "colinc@cs.ualberta.ca" }, { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Alberta Edmonton", "location": { "postCode": "T6G 2E8", "settlement": "Alberta", "country": "Canada" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Word alignment plays a crucial role in statistical machine translation. Word-aligned corpora have been found to be an excellent source of translation-related knowledge. We present a statistical model for computing the probability of an alignment given a sentence pair. This model allows easy integration of context-specific features. Our experiments show that this model can be an effective tool for improving an existing word alignment.", "pdf_parse": { "paper_id": "P03-1012", "_pdf_hash": "", "abstract": [ { "text": "Word alignment plays a crucial role in statistical machine translation. Word-aligned corpora have been found to be an excellent source of translation-related knowledge. We present a statistical model for computing the probability of an alignment given a sentence pair. This model allows easy integration of context-specific features. Our experiments show that this model can be an effective tool for improving an existing word alignment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Word alignments were first introduced as an intermediate result of statistical machine translation systems (Brown et al., 1993) . Since their introduction, many researchers have become interested in word alignments as a knowledge source. For example, alignments can be used to learn translation lexicons (Melamed, 1996) , transfer rules (Carbonell et al., 2002; Menezes and Richardson, 2001) , and classifiers to find safe sentence segmentation points (Berger et al., 1996) .", "cite_spans": [ { "start": 107, "end": 127, "text": "(Brown et al., 1993)", "ref_id": "BIBREF2" }, { "start": 304, "end": 319, "text": "(Melamed, 1996)", "ref_id": "BIBREF10" }, { "start": 337, "end": 361, "text": "(Carbonell et al., 2002;", "ref_id": "BIBREF3" }, { "start": 362, "end": 391, "text": "Menezes and Richardson, 2001)", "ref_id": "BIBREF13" }, { "start": 452, "end": 473, "text": "(Berger et al., 1996)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In addition to the IBM models, researchers have proposed a number of alternative alignment methods. These methods often involve using a statistic such as \u03c6 2 (Gale and Church, 1991) or the log likelihood ratio (Dunning, 1993) to create a score to measure the strength of correlation between source and target words. 
Such measures can then be used to guide a constrained search to produce word alignments (Melamed, 2000) .", "cite_spans": [ { "start": 158, "end": 181, "text": "(Gale and Church, 1991)", "ref_id": "BIBREF6" }, { "start": 210, "end": 225, "text": "(Dunning, 1993)", "ref_id": "BIBREF4" }, { "start": 404, "end": 419, "text": "(Melamed, 2000)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "It has been shown that once a baseline alignment has been created, one can improve results by using a refined scoring metric that is based on the alignment. For example Melamed uses competitive linking along with an explicit noise model in (Melamed, 2000) to produce a new scoring metric, which in turn creates better alignments.", "cite_spans": [ { "start": 240, "end": 255, "text": "(Melamed, 2000)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we present a simple, flexible, statistical model that is designed to capture the information present in a baseline alignment. This model allows us to compute the probability of an alignment for a given sentence pair. It also allows for the easy incorporation of context-specific knowledge into alignment probabilities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A critical reader may pose the question, \"Why invent a new statistical model for this purpose, when existing, proven models are available to train on a given word alignment?\" We will demonstrate experimentally that, for the purposes of refinement, our model achieves better results than a comparable existing alternative.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We will first present this model in its most general form. Next, we describe an alignment algorithm that integrates this model with linguistic constraints in order to produce high quality word alignments. We will follow with our experimental results and discussion. We will close with a look at how our work relates to other similar systems and a discussion of possible future directions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this section we describe our probability model. To do so, we will first introduce some necessary notation. Let E be an English sentence e 1 , e 2 , . . . , e m and let F be a French sentence f 1 , f 2 , . . . , f n . We define a link l(e i , f j ) to exist if e i and f j are a translation (or part of a translation) of one another. We define the null link l(e i , f 0 ) to exist if e i does not correspond to a translation for any French word in F . The null link l(e 0 , f j ) is defined similarly. An alignment A for two sentences E and F is a set of links such that every word in E and F participates in at least one link, and a word linked to e 0 or f 0 participates in no other links. If e occurs in E x times and f occurs in F y times, we say that e and f co-occur xy times in this sentence pair.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probability Model", "sec_num": "2" }, { "text": "We define the alignment problem as finding the alignment A that maximizes P (A|E, F ). This corresponds to finding the Viterbi alignment in the IBM translation systems. Those systems model P (F, A|E), which when maximized is equivalent to maximizing P (A|E, F ). 
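To make the equivalence explicit (our addition): for a fixed sentence pair, P (F |E) does not depend on the alignment, so maximizing either quantity selects the same A.

```latex
\arg\max_{A} P(A \mid E, F)
  \;=\; \arg\max_{A} \frac{P(F, A \mid E)}{P(F \mid E)}
  \;=\; \arg\max_{A} P(F, A \mid E)
```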
We propose here a system which models P (A|E, F ) directly, using a different decomposition of terms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probability Model", "sec_num": "2" }, { "text": "In the IBM models of translation, alignments exist as artifacts of which English words generated which French words. Our model does not state that one sentence generates the other. Instead it takes both sentences as given, and uses the sentences to determine an alignment. An alignment A consists of t links {l 1 , l 2 , . . . , l t }, where each l k = l(e i k , f j k ) for some i k and j k . We will refer to consecutive subsets of A as l j i = {l i , l i+1 , . . . , l j }. Given this notation, P (A|E, F ) can be decomposed as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probability Model", "sec_num": "2" }, { "text": "P (A|E, F ) = P (l t 1 |E, F ) = t k=1 P (l k |E, F, l k\u22121 1 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probability Model", "sec_num": "2" }, { "text": "At this point, we must factor P (l k |E, F, l k\u22121 1 ) to make computation feasible. Let C k = {E, F, l k\u22121 1 } represent the context of l k . Note that both the context C k and the link l k imply the occurrence of e i k and f j k . We can rewrite P (l k |C k ) as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probability Model", "sec_num": "2" }, { "text": "P (l k |C k ) = P (l k , C k ) P (C k ) = P (C k |l k )P (l k ) P (C k , e i k , f j k ) = P (C k |l k ) P (C k |e i k , f j k ) \u00d7 P (l k , e i k , f j k ) P (e i k , f j k ) = P (l k |e i k , f j k ) \u00d7 P (C k |l k ) P (C k |e i k , f j k ) Here P (l k |e i k , f j k )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probability Model", "sec_num": "2" }, { "text": "is link probability given a cooccurrence of the two words, which is similar in spirit to Melamed's explicit noise model (Melamed, 2000) . This term depends only on the words involved directly in the link. The ratio P (C k |l k )", "cite_spans": [ { "start": 120, "end": 135, "text": "(Melamed, 2000)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Probability Model", "sec_num": "2" }, { "text": "P (C k |e i k ,f j k )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probability Model", "sec_num": "2" }, { "text": "modifies the link probability, providing contextsensitive information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probability Model", "sec_num": "2" }, { "text": "Up until this point, we have made no simplifying assumptions in our derivation. Unfortunately,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probability Model", "sec_num": "2" }, { "text": "C k = {E, F, l k\u22121", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probability Model", "sec_num": "2" }, { "text": "1 } is too complex to estimate context probabilities directly. Suppose F T k is a set of context-related features such that P (l k |C k ) can be approximated by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probability Model", "sec_num": "2" }, { "text": "P (l k |e i k , f j k , F T k ). Let C k = {e i k , f j k }\u222aF T k . 
P (l k |C k )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probability Model", "sec_num": "2" }, { "text": "can then be decomposed using the same derivation as above.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probability Model", "sec_num": "2" }, { "text": "P (l k |C k ) = P (l k |e i k , f j k ) \u00d7 P (C k |l k ) P (C k |e i k , f j k ) = P (l k |e i k , f j k ) \u00d7 P (F T k |l k ) P (F T k |e i k , f j k )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probability Model", "sec_num": "2" }, { "text": "In the second line of this derivation, we can drop e i k and f j k from C k , leaving only F T k , because they are implied by the events which the probabilities are conditionalized on. Now, we are left with the task of approximating P (F T k |l k ) and P (F T k |e i k , f j k ). To do so, we will assume that for all f t \u2208 F T k , f t is conditionally independent given either l k or (e i k , f j k ). This allows us to approximate alignment probability P (A|E, F ) as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probability Model", "sec_num": "2" }, { "text": "t k=1 \uf8eb \uf8ed P (l k |e i k , f j k ) \u00d7 f t\u2208F T k P (f t|l k ) P (f t|e i k , f j k ) \uf8f6 \uf8f8", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probability Model", "sec_num": "2" }, { "text": "In any context, only a few features will be active. The inner product is understood to be only over those features f t that are present in the current context. This approximation will cause P (A|E, F ) to no longer be a well-behaved probability distribution, though as in Naive Bayes, it can be an excellent estimator for the purpose of ranking alignments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probability Model", "sec_num": "2" }, { "text": "If we have an aligned training corpus, the probabilities needed for the above equation are quite easy to obtain. Link probabilities can be determined directly from |l k | (link counts) and |e i k , f j,k | (co-occurrence counts). For any co-occurring pair of words (e i k , f j k ), we check whether it has the feature f t. If it does, we increment the count of |f t, e i k , f j k |. If this pair is also linked, then we increment the count of |f t, l k |. Note that our definition of F T k allows for features that depend on previous links. For this reason, when determining whether or not a feature is present in a given context, one must impose an ordering on the links. This ordering can be arbitrary as long as the same ordering is used in training 1 and probability evaluation. A simple solution would be to order links according their French words. We choose to order links according to the link probability P (l k |e i k , f j k ) as it has an intuitive appeal of allowing more certain links to provide context for others.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probability Model", "sec_num": "2" }, { "text": "We store probabilities in two tables. The first table stores link probabilities P (l k |e i k , f j k ). It has an entry for every word pair that was linked at least once in the training corpus. Its size is the same as the translation table in the IBM models. The second table stores feature probabilities, P (f t|l k ) and P (f t|e i k , f j k ). For every linked word pair, this table has two entries for each active feature. In the worst case this table will be of size 2\u00d7|F T |\u00d7|E|\u00d7|F |. 
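A minimal sketch (ours, with assumed data structures and names) of how the two tables could be stored and combined to score an alignment as the product derived above; in training, the entries would be filled by relative frequency from the link, co-occurrence and feature counts just described.

```python
from collections import defaultdict

class AlignmentModel:
    """Toy container for the two probability tables (layout assumed for illustration)."""

    def __init__(self):
        # Table 1: link probabilities P(l | e, f) for word pairs linked in training.
        self.link_prob = {}
        # Table 2: feature probabilities, two entries per linked pair and active feature.
        self.ft_given_link = defaultdict(float)   # (ft, e, f) -> P(ft | l)
        self.ft_given_cooc = defaultdict(float)   # (ft, e, f) -> P(ft | e, f)

    def score(self, alignment, features_of):
        """Approximate P(A | E, F) as a product over links, Naive-Bayes style.

        alignment   -- list of (e, f) word pairs, assumed ordered by P(l | e, f)
        features_of -- callable returning the active features of a link in context
        """
        prob = 1.0
        for e, f in alignment:
            prob *= self.link_prob.get((e, f), 0.0)
            for ft in features_of(e, f):
                denom = self.ft_given_cooc[(ft, e, f)]
                if denom > 0.0:
                    prob *= self.ft_given_link[(ft, e, f)] / denom
        return prob
```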
In practice, it is much smaller as most contexts activate only a small number of features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probability Model", "sec_num": "2" }, { "text": "In the next subsection we will walk through a simple example of this probability model in action. We will describe the features used in our implementation of this model in Section 3.2. Figure 1 shows an aligned corpus consisting of one sentence pair. Suppose that we are concerned with only one feature f t that is active 2 for e i k and f j k if an adjacent pair is an alignment, i.e.,", "cite_spans": [], "ref_spans": [ { "start": 185, "end": 193, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Probability Model", "sec_num": "2" }, { "text": "l(e i k \u22121 , f j k \u22121 ) \u2208 l k\u22121 1 or l(e i k +1 , f j k +1 ) \u2208 l k\u22121", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Illustrative Example", "sec_num": "2.1" }, { "text": "1 . This example would produce the probability tables shown in Table 1 .", "cite_spans": [], "ref_spans": [ { "start": 63, "end": 70, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "An Illustrative Example", "sec_num": "2.1" }, { "text": "Note how f t is active for the (a, v) link, and is not active for the (b, u) link. This is due to our selected ordering. Table 1 allows ", "cite_spans": [], "ref_spans": [ { "start": 121, "end": 135, "text": "Table 1 allows", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "An Illustrative Example", "sec_num": "2.1" }, { "text": "e i k f j k |l k | |e i k , f j k | P (l k |e i k , f j k ) b u 1 1 1 a f 0 1 2 1 2 e 0 v 1 2 1 2 a v 1 4 1 4 (b) Feature Counts e i k f j k |f t, l k | |f t, e i k , f j k | a v 1 1 (c) Feature Probabilities e i k f j k P (f t|l k ) P (f t|e i k , f j k ) a v 1 1 4 P (A|E, F ) = P (l(b, u)|b, u)\u00d7 P (l(a, f 0 )|a, f 0 )\u00d7 P (l(e 0 , v)|e 0 , v)\u00d7 P (l(a, v)|a, v) P (f t|l(a,v)) P (f t|a,v) = 1 \u00d7 1 2 \u00d7 1 2 \u00d7 1 4 \u00d7 1 1 4 = 1 4 3 Word-Alignment Algorithm", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Illustrative Example", "sec_num": "2.1" }, { "text": "In this section, we describe a world-alignment algorithm guided by the alignment probability model derived above. In designing this algorithm we have selected constraints, features and a search method in order to achieve high performance. The model, however, is general, and could be used with any instantiation of the above three factors. This section will describe and motivate the selection of our constraints, features and search method. The input to our word-alignment algorithm consists of a pair of sentences E and F , and the dependency tree T E for E. T E allows us to make use of features and constraints that are based on linguistic intuitions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Illustrative Example", "sec_num": "2.1" }, { "text": "The reader will note that our alignment model as described above has very few factors to prevent undesirable alignments, such as having all French words align to the same English word. To guide the model to correct alignments, we employ two constraints to limit our search for the most probable alignment. 
The first constraint is the one-to-one constraint (Melamed, 2000) : every word (except the null words e 0 and f 0 ) participates in exactly one link.", "cite_spans": [ { "start": 356, "end": 371, "text": "(Melamed, 2000)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Constraints", "sec_num": "3.1" }, { "text": "The second constraint, known as the cohesion constraint (Fox, 2002) , uses the dependency tree (Mel'\u010duk, 1987) of the English sentence to restrict possible link combinations. Given the dependency tree T E , the alignment can induce a dependency tree for F . The cohesion constraint requires that this induced dependency tree does not have any crossing dependencies. The details about how the cohesion constraint is implemented are outside the scope of this paper. 3 Here we will use a simple example to illustrate the effect of the constraint. Consider the partial alignment in Figure 2 . When the system attempts to link of and de, the new link will induce the dotted dependency, which crosses a previously induced dependency between service and donn\u00e9es. Therefore, of and de will not be linked. ", "cite_spans": [ { "start": 56, "end": 67, "text": "(Fox, 2002)", "ref_id": "BIBREF5" }, { "start": 95, "end": 110, "text": "(Mel'\u010duk, 1987)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 578, "end": 586, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Constraints", "sec_num": "3.1" }, { "text": "In this section we introduce two types of features that we use in our implementation of the probability model described in Section 2. The first feature 3 The algorithm for checking the cohesion constraint is presented in a separate paper which is currently under review. Figure 3 : Feature Extraction Example type f t a concerns surrounding links. It has been observed that words close to each other in the source language tend to remain close to each other in the translation (Vogel et al., 1996; Ker and Change, 1997) . To capture this notion, for any word pair (e i , f j ), if a link l(e i , f j ) exists where i \u2212 2 \u2264 i \u2264 i + 2 and j \u2212 2 \u2264 j \u2264 j + 2, then we say that the feature f t a (i\u2212i , j \u2212j , e i ) is active for this context. We refer to these as adjacency features.", "cite_spans": [ { "start": 152, "end": 153, "text": "3", "ref_id": null }, { "start": 477, "end": 497, "text": "(Vogel et al., 1996;", "ref_id": "BIBREF15" }, { "start": 498, "end": 519, "text": "Ker and Change, 1997)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 271, "end": 279, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Features", "sec_num": "3.2" }, { "text": "The second feature type f t d uses the English parse tree to capture regularities among grammatical relations between languages. For example, when dealing with French and English, the location of the determiner with respect to its governor 4 is never swapped during translation, while the location of adjectives is swapped frequently. For any word pair (e i , f j ), let e i be the governor of e i , and let rel be the relationship between them. If a link l(e i , f j ) exists, then we say that the feature f t d (j \u2212 j , rel) is active for this context. We refer to these as dependency features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "3.2" }, { "text": "Take for example Figure 3 which shows a partial alignment with all links completed except for those involving 'the'. 
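Before continuing with that example, the following toy sketch (ours; position indexing and sign conventions are assumptions, not the paper's) shows how the two feature types could be extracted for a candidate pair (e i , f j ) given the links placed so far.

```python
def adjacency_features(i, j, e_words, links):
    """ft_a: active when a pair within a two-word window is already linked.

    links is a set of (i2, j2) English/French positions linked so far; the
    offsets and the linked English word identify the feature.
    """
    feats = []
    for i2, j2 in links:
        if (i2, j2) != (i, j) and abs(i - i2) <= 2 and abs(j - j2) <= 2:
            feats.append(("ft_a", i - i2, j - j2, e_words[i2]))
    return feats

def dependency_feature(i, j, governor, relation, links):
    """ft_d: relative French position of the link made by e_i's governor.

    governor[i] is the position of e_i's head in the dependency tree and
    relation[i] the grammatical relation between e_i and its head.
    """
    head = governor.get(i)
    if head is None:
        return []
    return [("ft_d", j - j2, relation[i]) for i2, j2 in links if i2 == head]
```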
Given this sentence pair and English parse tree, we can extract features of both types to assist in the alignment of the 1 . The word pair (the 1 , l ) will have an active adjacency feature f t a (+1, +1, host) as well as a dependency feature f t d (\u22121, det). These two features will work together to increase the probability of this correct link. In contrast, the incorrect link (the 1 , les) will have only f t d (+3, det), which will work to lower the link probability, since most determiners are located be-fore their governors.", "cite_spans": [], "ref_spans": [ { "start": 17, "end": 25, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Features", "sec_num": "3.2" }, { "text": "Due to our use of constraints, when seeking the highest probability alignment, we cannot rely on a method such as dynamic programming to (implicitly) search the entire alignment space. Instead, we use a best-first search algorithm (with constant beam and agenda size) to search our constrained space of possible alignments. A state in this space is a partial alignment. A transition is defined as the addition of a single link to the current state. Any link which would create a state that does not violate any constraint is considered to be a valid transition. Our start state is the empty alignment, where all words in E and F are linked to null. A terminal state is a state in which no more links can be added without violating a constraint. Our goal is to find the terminal state with highest probability.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Search", "sec_num": "3.3" }, { "text": "For the purposes of our best-first search, nonterminal states are evaluated according to a greedy completion of the partial alignment. We build this completion by adding valid links in the order of their unmodified link probabilities P (l|e, f ) until no more links can be added. The score the state receives is the probability of its greedy completion. These completions are saved for later use (see Section 4.2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Search", "sec_num": "3.3" }, { "text": "As was stated in Section 2, our probability model needs an initial alignment in order to create its probability tables. Furthermore, to avoid having our model learn mistakes and noise, it helps to train on a set of possible alignments for each sentence, rather than one Viterbi alignment. In the following subsections we describe the creation of the initial alignments used for our experiments, as well as our sampling method used in training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4" }, { "text": "We produce an initial alignment using the same algorithm described in Section 3, except we maximize summed \u03c6 2 link scores (Gale and Church, 1991) , rather than alignment probability. This produces a reasonable one-to-one word alignment that we can refine using our probability model.", "cite_spans": [ { "start": 123, "end": 146, "text": "(Gale and Church, 1991)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Initial Alignment", "sec_num": "4.1" }, { "text": "Our use of the one-to-one constraint and the cohesion constraint precludes sampling directly from all possible alignments. These constraints tie words in such a way that the space of alignments cannot be enumerated as in IBM models 1 and 2 (Brown et al., 1993) . 
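The greedy completions used to score search states (and, below, to form the training sample) can be sketched as follows; this is our illustration, and the helper names are assumptions.

```python
def greedy_completion(partial, candidate_links, link_prob, violates):
    """Complete a partial alignment greedily, most probable links first.

    partial         -- list of links already in the state
    candidate_links -- links that could still be added
    link_prob       -- callable giving the unmodified P(l | e, f) of a link
    violates        -- predicate: would adding this link break a constraint?
    """
    completion = list(partial)
    for link in sorted(candidate_links, key=link_prob, reverse=True):
        if not violates(completion, link):
            completion.append(link)
    return completion
```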
Taking our lead from IBM models 3, 4 and 5, we will sample from the space of those highprobability alignments that do not violate our constraints, and then redistribute our probability mass among our sample.", "cite_spans": [ { "start": 240, "end": 260, "text": "(Brown et al., 1993)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Alignment Sampling", "sec_num": "4.2" }, { "text": "At each search state in our alignment algorithm, we consider a number of potential links, and select between them using a heuristic completion of the resulting state. Our sample S of possible alignments will be the most probable alignment, plus the greedy completions of the states visited during search. It is important to note that any sampling method that concentrates on complete, valid and high probability alignments will accomplish the same task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment Sampling", "sec_num": "4.2" }, { "text": "When collecting the statistics needed to calculate P (A|E, F ) from our initial \u03c6 2 alignment, we give each s \u2208 S a uniform weight. This is reasonable, as we have no probability estimates at this point. When training from the alignments produced by our model, we normalize P (s|E, F ) so that s\u2208S P (s|E, F ) = 1. We then count links and features in S according to these normalized probabilities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment Sampling", "sec_num": "4.2" }, { "text": "We adopted the same evaluation methodology as in (Och and Ney, 2000) , which compared alignment outputs with manually aligned sentences. Och and Ney classify manual alignments into two categories: Sure (S) and Possible (P ) (S\u2286P ). They defined the following metrics to evaluate an alignment A:", "cite_spans": [ { "start": 49, "end": 68, "text": "(Och and Ney, 2000)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "5" }, { "text": "recall = |A\u2229S| |S| precision = |A\u2229P | |P | alignment error rate (AER) = |A\u2229S|+|A\u2229P | |S|+|P |", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "5" }, { "text": "We trained our alignment program with the same 50K pairs of sentences as (Och and Ney, 2000) and tested it on the same 500 manually aligned sentences. Both the training and testing sentences are from the Hansard corpus. We parsed the training and testing corpora with Minipar. 5 We then ran the training procedure in Section 4 for three iterations. We conducted three experiments using this methodology. The goal of the first experiment is to compare the algorithm in Section 3 to a state-of-theart alignment system. The second will determine the contributions of the features. The third experiment aims to keep all factors constant except for the model, in an attempt to determine its performance when compared to an obvious alternative. Table 2 compares the results of our algorithm with the results in (Och and Ney, 2000) , where an HMM model is used to bootstrap IBM Model 4. The rows IBM-4 F\u2192E and IBM-4 E\u2192F are the results obtained by IBM Model 4 when treating French as the source and English as the target or vice versa. The row IBM-4 Intersect shows the results obtained by taking the intersection of the alignments produced by IBM-4 E\u2192F and IBM-4 F\u2192E. 
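As an aside before the remaining rows: the metrics defined above are simple to compute; the helper below (ours) follows the usual formulation of precision, recall and AER in (Och and Ney, 2000), with A, S and P as sets of links.

```python
def alignment_scores(a, sure, possible):
    """Precision, recall and alignment error rate for a proposed alignment.

    a, sure, possible -- sets of (e_pos, f_pos) links, with sure a subset of possible.
    """
    a_and_s = len(a & sure)
    a_and_p = len(a & possible)
    precision = a_and_p / len(a)
    recall = a_and_s / len(sure)
    aer = 1.0 - (a_and_s + a_and_p) / (len(a) + len(sure))
    return precision, recall, aer
```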
The row IBM-4 Refined shows results obtained by refining the intersection of alignments in order to increase recall.", "cite_spans": [ { "start": 73, "end": 92, "text": "(Och and Ney, 2000)", "ref_id": "BIBREF14" }, { "start": 277, "end": 278, "text": "5", "ref_id": null }, { "start": 805, "end": 824, "text": "(Och and Ney, 2000)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 739, "end": 746, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Experimental Results", "sec_num": "5" }, { "text": "Our algorithm achieved over 44% relative error reduction when compared with IBM-4 used in either direction and a 25% relative error rate reduction when compared with IBM-4 Refined. It also achieved a slight relative error reduction when compared with IBM-4 Intersect. This demonstrates that we are competitive with the methods described in (Och and Ney, 2000) . In Table 2 , one can see that our algorithm is high precision, low recall. This was expected as our algorithm uses the one-to-one constraint, which rules out many of the possible alignments present in the evaluation data.", "cite_spans": [ { "start": 340, "end": 359, "text": "(Och and Ney, 2000)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 365, "end": 372, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Comparison to state-of-the-art", "sec_num": "5.1" }, { "text": "5 available at http://www.cs.ualberta.ca/\u02dclindek/minipar.htm ) 88.9 84.6 13.1 without features 93.7 84.8 10.5 with f t d only 95.6 85.4 9.3 with f t a only 95.9 85.8 9.0 with f t a and f t d 95.7 86.4 8.7 Table 3 shows the contributions of features to our algorithm's performance. The initial (\u03c6 2 ) row is the score for the algorithm (described in Section 4.1) that generates our initial alignment. The without features row shows the score after 3 iterations of refinement with an empty feature set. Here we can see that our model in its simplest form is capable of producing a significant improvement in alignment quality. The rows with f t d only and with f t a only describe the scores after 3 iterations of training using only dependency and adjacency features respectively. The two features provide significant contributions, with the adjacency feature being slightly more important. The final row shows that both features can work together to create a greater improvement, despite the independence assumptions made in Section 2.", "cite_spans": [], "ref_spans": [ { "start": 61, "end": 62, "text": ")", "ref_id": null }, { "start": 205, "end": 212, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Comparison to state-of-the-art", "sec_num": "5.1" }, { "text": "Even though we have compared our algorithm to alignments created using IBM statistical models, it is not clear if our model is essential to our performance. This experiment aims to determine if we could have achieved similar results using the same initial alignment and search algorithm with an alternative model. Without using any features, our model is similar to IBM's Model 1, in that they both take into account only the word types that participate in a given link. IBM Model 1 uses P (f |e), the probability of f being generated by e, while our model uses P (l|e, f ), the probability of a link existing between e and f . In this experiment, we set Model 1 translation probabilities according to our initial \u03c6 2 alignment, sampling as we described in Section 4.2. 
We then use the n j=1 P (f j |e a j ) to evaluate candidate alignments in a search that is otherwise identical to our algorithm. We ran Model 1 refinement for three iterations and Table 4 : P (l|e, f ) vs. P (f |e)", "cite_spans": [], "ref_spans": [ { "start": 950, "end": 957, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Model Evaluation", "sec_num": "5.3" }, { "text": "Prec Rec AER initial (\u03c6 2 ) 88.9 84.6 13.1 P (l|e, f ) model 93.7 84.8 10.5 P (f |e) model 89.2 83.0 13.7", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm", "sec_num": null }, { "text": "recorded the best results that it achieved. It is clear from Table 4 that refining our initial \u03c6 2 alignment using IBM's Model 1 is less effective than using our model in the same manner. In fact, the Model 1 refinement receives a lower score than our initial alignment.", "cite_spans": [], "ref_spans": [ { "start": 61, "end": 68, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Algorithm", "sec_num": null }, { "text": "When viewed with no features, our probability model is most similar to the explicit noise model defined in (Melamed, 2000) . In fact, Melamed defines a probability distribution P (links(u, v)|cooc(u, v), \u03bb + , \u03bb \u2212 ) which appears to make our work redundant. However, this distribution refers to the probability that two word types u and v are linked links(u, v) times in the entire corpus. Our distribution P (l|e, f ) refers to the probability of linking a specific co-occurrence of the word tokens e and f . In Melamed's work, these probabilities are used to compute a score based on a probability ratio. In our work, we use the probabilities directly.", "cite_spans": [ { "start": 107, "end": 122, "text": "(Melamed, 2000)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Probability models", "sec_num": "6.1" }, { "text": "By far the most prominent probability models in machine translation are the IBM models and their extensions. When trying to determine whether two words are aligned, the IBM models ask, \"What is the probability that this English word generated this French word?\" Our model asks instead, \"If we are given this English word and this French word, what is the probability that they are linked?\" The distinction is subtle, yet important, introducing many differences. For example, in our model, E and F are symmetrical. Furthermore, we model P (l|e, f ) and P (l|e, f ) as unrelated values, whereas the IBM model would associate them in the translation probabilities t(f |e) and t(f |e) through the constraint f t(f |e) = 1. Unfortunately, by conditionalizing on both words, we eliminate a large inductive bias. This prevents us from starting with uniform probabilities and estimating parameters with EM. This is why we must supply the model with a noisy initial alignment, while IBM can start from an unaligned corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probability models", "sec_num": "6.1" }, { "text": "In the IBM framework, when one needs the model to take new information into account, one must create an extended model which can base its parameters on the previous model. In our model, new information can be incorporated modularly by adding features. This makes our work similar to maximum entropy-based machine translation methods, which also employ modular features. 
Maximum entropy can be used to improve IBM-style translation probabilities by using features, such as improvements to P (f |e) in (Berger et al., 1996) . By the same token we can use maximum entropy to improve our estimates of P (l k |e i k , f j k , C k ). We are currently investigating maximum entropy as an alternative to our current feature model which assumes conditional independence among features.", "cite_spans": [ { "start": 500, "end": 521, "text": "(Berger et al., 1996)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Probability models", "sec_num": "6.1" }, { "text": "There have been many recent proposals to leverage syntactic data in word alignment. Methods such as (Wu, 1997) , (Alshawi et al., 2000) and (Lopez et al., 2002 ) employ a synchronous parsing procedure to constrain a statistical alignment. The work done in (Yamada and Knight, 2001 ) measures statistics on operations that transform a parse tree from one language into another.", "cite_spans": [ { "start": 100, "end": 110, "text": "(Wu, 1997)", "ref_id": "BIBREF16" }, { "start": 113, "end": 135, "text": "(Alshawi et al., 2000)", "ref_id": "BIBREF0" }, { "start": 140, "end": 159, "text": "(Lopez et al., 2002", "ref_id": "BIBREF9" }, { "start": 256, "end": 280, "text": "(Yamada and Knight, 2001", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Grammatical Constraints", "sec_num": "6.2" }, { "text": "The alignment algorithm described here is incapable of creating alignments that are not one-to-one. The model we describe, however is not limited in the same manner. The model is currently capable of creating many-to-one alignments so long as the null probabilities of the words added on the \"many\" side are less than the probabilities of the links that would be created. Under the current implementation, the training corpus is one-to-one, which gives our model no opportunity to learn many-to-one alignments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Work", "sec_num": "7" }, { "text": "We are pursuing methods to create an extended algorithm that can handle many-to-one alignments. This would involve training from an initial alignment that allows for many-to-one links, such as one of the IBM models. Features that are related to multiple links should be added to our set of feature types, to guide intelligent placement of such links.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Work", "sec_num": "7" }, { "text": "We have presented a simple, flexible, statistical model for computing the probability of an alignment given a sentence pair. This model allows easy integration of context-specific features. 
Our experiments show that this model can be an effective tool for improving an existing word alignment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "In our experiments, the ordering is not necessary during training to achieve good performance.2 Throughout this paper we will assume that null alignments are special cases, and do not activate or participate in features unless otherwise stated in the feature description.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The parent node in the dependency tree.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Learning dependency translation models as collections of finite state head transducers. Computational Linguistics", "authors": [ { "first": "Hiyan", "middle": [], "last": "Alshawi", "suffix": "" }, { "first": "Srinivas", "middle": [], "last": "Bangalore", "suffix": "" }, { "first": "Shona", "middle": [], "last": "Douglas", "suffix": "" } ], "year": 2000, "venue": "", "volume": "26", "issue": "", "pages": "45--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hiyan Alshawi, Srinivas Bangalore, and Shona Douglas. 2000. Learning dependency translation models as col- lections of finite state head transducers. Computa- tional Linguistics, 26(1):45-60.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A maximum entropy approach to natural language processing", "authors": [ { "first": "Adam", "middle": [ "L" ], "last": "Berger", "suffix": "" }, { "first": "Stephen", "middle": [ "A" ], "last": "Della Pietra", "suffix": "" }, { "first": "Vincent", "middle": [ "J Della" ], "last": "Pietra", "suffix": "" } ], "year": 1996, "venue": "Computational Linguistics", "volume": "22", "issue": "1", "pages": "39--71", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam L. Berger, Stephen A. Della Pietra, and Vincent J. Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational Linguis- tics, 22(1):39-71.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The mathematics of statistical machine translation: Parameter estimation", "authors": [ { "first": "P", "middle": [ "F" ], "last": "Brown", "suffix": "" }, { "first": "V", "middle": [ "S A" ], "last": "Della Pietra", "suffix": "" }, { "first": "V", "middle": [ "J" ], "last": "Della Pietra", "suffix": "" }, { "first": "R", "middle": [ "L" ], "last": "Mercer", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "263--312", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. F. Brown, V. S. A. Della Pietra, V. J. Della Pietra, and R. L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. 
Computa- tional Linguistics, 19(2):263-312.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Automatic rule learning for resource-limited mt", "authors": [ { "first": "Jaime", "middle": [], "last": "Carbonell", "suffix": "" }, { "first": "Katharina", "middle": [], "last": "Probst", "suffix": "" }, { "first": "Erik", "middle": [], "last": "Peterson", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Monson", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Lavie", "suffix": "" }, { "first": "Ralf", "middle": [], "last": "Brown", "suffix": "" }, { "first": "Lori", "middle": [], "last": "Levin", "suffix": "" } ], "year": 2002, "venue": "Proceedings of AMTA-02", "volume": "", "issue": "", "pages": "1--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jaime Carbonell, Katharina Probst, Erik Peterson, Chris- tian Monson, Alon Lavie, Ralf Brown, and Lori Levin. 2002. Automatic rule learning for resource-limited mt. In Proceedings of AMTA-02, pages 1-10.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Accurate methods for the statistics of surprise and coincidence", "authors": [ { "first": "Ted", "middle": [], "last": "Dunning", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "1", "pages": "61--74", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ted Dunning. 1993. Accurate methods for the statistics of surprise and coincidence. Computational Linguis- tics, 19(1):61-74, March.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Phrasal cohesion and statistical machine translation", "authors": [ { "first": "Heidi", "middle": [ "J" ], "last": "Fox", "suffix": "" } ], "year": 2002, "venue": "Proceedings of EMNLP-02", "volume": "", "issue": "", "pages": "304--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heidi J. Fox. 2002. Phrasal cohesion and statistical machine translation. In Proceedings of EMNLP-02, pages 304-311.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Identifying word correspondences in parallel texts", "authors": [ { "first": "W", "middle": [ "A" ], "last": "Gale", "suffix": "" }, { "first": "K", "middle": [ "W" ], "last": "Church", "suffix": "" }, { "first": ";", "middle": [], "last": "Darpa", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Kaufmann", "suffix": "" } ], "year": 1991, "venue": "Proceedings of the 4th Speech and Natural Language Workshop", "volume": "", "issue": "", "pages": "152--157", "other_ids": {}, "num": null, "urls": [], "raw_text": "W.A. Gale and K.W. Church. 1991. Identifying word correspondences in parallel texts. In Proceedings of the 4th Speech and Natural Language Workshop, pages 152-157. DARPA, Morgan Kaufmann.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Evaluating translational correspondence using annotation projection", "authors": [ { "first": "Rebecca", "middle": [], "last": "Hwa", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" }, { "first": "Amy", "middle": [], "last": "Weinberg", "suffix": "" }, { "first": "Okan", "middle": [], "last": "Kolak", "suffix": "" } ], "year": 2002, "venue": "Proceeding of ACL-02", "volume": "", "issue": "", "pages": "392--399", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rebecca Hwa, Philip Resnik, Amy Weinberg, and Okan Kolak. 2002. Evaluating translational correspondence using annotation projection. 
In Proceeding of ACL-02, pages 392-399.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Aligning more words with high precision for small bilingual corpora", "authors": [ { "first": "J", "middle": [], "last": "Sue", "suffix": "" }, { "first": "Jason", "middle": [ "S" ], "last": "Ker", "suffix": "" }, { "first": "", "middle": [], "last": "Change", "suffix": "" } ], "year": 1997, "venue": "Computational Linguistics and Chinese Language Processing", "volume": "2", "issue": "", "pages": "63--96", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sue J. Ker and Jason S. Change. 1997. Aligning more words with high precision for small bilingual cor- pora. Computational Linguistics and Chinese Lan- guage Processing, 2(2):63-96, August.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Word-level alignment for multilingual resource acquisition", "authors": [ { "first": "Adam", "middle": [], "last": "Lopez", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Nossal", "suffix": "" }, { "first": "Rebecca", "middle": [], "last": "Hwa", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the Workshop on Linguistic Knowledge Acquisition and Representation: Bootstrapping Annotated Language Data", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Lopez, Michael Nossal, Rebecca Hwa, and Philip Resnik. 2002. Word-level alignment for multilingual resource acquisition. In Proceedings of the Workshop on Linguistic Knowledge Acquisition and Representa- tion: Bootstrapping Annotated Language Data.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Automatic construction of clean broad-coverage translation lexicons", "authors": [ { "first": "I", "middle": [], "last": "", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Melamed", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the 2nd Conference of the Association for Machine Translation in the Americas", "volume": "", "issue": "", "pages": "125--134", "other_ids": {}, "num": null, "urls": [], "raw_text": "I. Dan Melamed. 1996. Automatic construction of clean broad-coverage translation lexicons. In Proceedings of the 2nd Conference of the Association for Machine Translation in the Americas, pages 125-134, Mon- treal.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Models of translational equivalence among words", "authors": [ { "first": "I", "middle": [], "last": "", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Melamed", "suffix": "" } ], "year": 2000, "venue": "Computational Linguistics", "volume": "26", "issue": "2", "pages": "221--249", "other_ids": {}, "num": null, "urls": [], "raw_text": "I. Dan Melamed. 2000. Models of translational equiv- alence among words. Computational Linguistics, 26(2):221-249, June.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Dependency syntax: theory and practice", "authors": [ { "first": "Igor", "middle": [ "A" ], "last": "", "suffix": "" } ], "year": 1987, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Igor A. Mel'\u010duk. 1987. Dependency syntax: theory and practice. 
State University of New York Press, Albany.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A bestfirst alignment algorithm for automatic extraction of transfer mappings from bilingual corpora", "authors": [ { "first": "Arul", "middle": [], "last": "Menezes", "suffix": "" }, { "first": "D", "middle": [], "last": "Stephen", "suffix": "" }, { "first": "", "middle": [], "last": "Richardson", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the Workshop on Data-Driven Machine Translation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arul Menezes and Stephen D. Richardson. 2001. A best- first alignment algorithm for automatic extraction of transfer mappings from bilingual corpora. In Proceed- ings of the Workshop on Data-Driven Machine Trans- lation.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Improved statistical alignment models", "authors": [ { "first": "J", "middle": [], "last": "Franz", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Och", "suffix": "" }, { "first": "", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "440--447", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz J. Och and Hermann Ney. 2000. Improved sta- tistical alignment models. In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics, pages 440-447, Hong Kong, China, Octo- ber.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Hmm-based word alignment in statistical translation", "authors": [ { "first": "S", "middle": [], "last": "Vogel", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" }, { "first": "C", "middle": [], "last": "Tillmann", "suffix": "" } ], "year": 1996, "venue": "Proceedings of COLING-96", "volume": "", "issue": "", "pages": "836--841", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Vogel, H. Ney, and C. Tillmann. 1996. Hmm-based word alignment in statistical translation. In Proceed- ings of COLING-96, pages 836-841, Copenhagen, Denmark, August.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Stochastic inversion transduction grammars and bilingual parsing of parallel corpora", "authors": [ { "first": "Dekai", "middle": [], "last": "Wu", "suffix": "" } ], "year": 1997, "venue": "Computational Linguistics", "volume": "23", "issue": "3", "pages": "374--403", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):374-403.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A syntax-based statistical translation model", "authors": [ { "first": "Kenji", "middle": [], "last": "Yamada", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2001, "venue": "Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "523--530", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenji Yamada and Kevin Knight. 2001. A syntax-based statistical translation model. 
In Meeting of the Associ- ation for Computational Linguistics, pages 523-530.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "us to calculate the probability of this alignment as:", "type_str": "figure", "uris": null, "num": null }, "FIGREF2": { "text": "An Example of Cohesion Constraint", "type_str": "figure", "uris": null, "num": null }, "TABREF0": { "text": "", "num": null, "type_str": "table", "content": "
Table 1: Example Probability Tables
(a) Link Counts and Probabilities
", "html": null }, "TABREF1": { "text": "the host discovers all the devices", "num": null, "type_str": "table", "content": "
Example sentence pair: the host discovers all the devices / l' h\u00f4te rep\u00e8re tous les p\u00e9riph\u00e9riques
English dependency labels: det, subj, pre, det, obj
English word positions: 1 2 3 4 5 6
French word positions: 1 2 3 4 5 6
Gloss of the French: the host locate all the peripherals
", "html": null }, "TABREF2": { "text": "Comparison with(Och and Ney, 2000)", "num": null, "type_str": "table", "content": "
Method Prec Rec AER
Ours 95.7 86.4 8.7
IBM-4 F\u2192E 80.5 91.2 15.6
IBM-4 E\u2192F 80.0 90.8 16.0
IBM-4 Intersect 95.7 85.6 9.0
IBM-4 Refined 85.9 92.3 11.7
", "html": null }, "TABREF3": { "text": "Evaluation of Features", "num": null, "type_str": "table", "content": "
Algorithm Prec Rec AER
initial (\u03c6 2 ) 88.9 84.6 13.1
without features 93.7 84.8 10.5
with f t d only 95.6 85.4 9.3
with f t a only 95.9 85.8 9.0
with f t a and f t d 95.7 86.4 8.7
", "html": null } } } }