{ "paper_id": "P11-1042", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:47:18.992466Z" }, "title": "Unsupervised Word Alignment with Arbitrary Features", "authors": [ { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "", "affiliation": { "laboratory": "", "institution": "Language Technologies Institute Carnegie Mellon University Pittsburgh", "location": { "postCode": "15213", "region": "PA", "country": "USA" } }, "email": "cdyer@cs.cmu.edu" }, { "first": "Jonathan", "middle": [], "last": "Clark", "suffix": "", "affiliation": { "laboratory": "", "institution": "Language Technologies Institute Carnegie Mellon University Pittsburgh", "location": { "postCode": "15213", "region": "PA", "country": "USA" } }, "email": "jhclark@cs.cmu.edu" }, { "first": "Alon", "middle": [], "last": "Lavie", "suffix": "", "affiliation": { "laboratory": "", "institution": "Language Technologies Institute Carnegie Mellon University Pittsburgh", "location": { "postCode": "15213", "region": "PA", "country": "USA" } }, "email": "alavie@cs.cmu.edu" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "", "affiliation": { "laboratory": "", "institution": "Language Technologies Institute Carnegie Mellon University Pittsburgh", "location": { "postCode": "15213", "region": "PA", "country": "USA" } }, "email": "nasmith@cs.cmu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We introduce a discriminatively trained, globally normalized, log-linear variant of the lexical translation models proposed by Brown et al. (1993). In our model, arbitrary, nonindependent features may be freely incorporated, thereby overcoming the inherent limitation of generative models, which require that features be sensitive to the conditional independencies of the generative process. However, unlike previous work on discriminative modeling of word alignment (which also permits the use of arbitrary features), the parameters in our models are learned from unannotated parallel sentences, rather than from supervised word alignments. Using a variety of intrinsic and extrinsic measures, including translation performance, we show our model yields better alignments than generative baselines in a number of language pairs.", "pdf_parse": { "paper_id": "P11-1042", "_pdf_hash": "", "abstract": [ { "text": "We introduce a discriminatively trained, globally normalized, log-linear variant of the lexical translation models proposed by Brown et al. (1993). In our model, arbitrary, nonindependent features may be freely incorporated, thereby overcoming the inherent limitation of generative models, which require that features be sensitive to the conditional independencies of the generative process. However, unlike previous work on discriminative modeling of word alignment (which also permits the use of arbitrary features), the parameters in our models are learned from unannotated parallel sentences, rather than from supervised word alignments. Using a variety of intrinsic and extrinsic measures, including translation performance, we show our model yields better alignments than generative baselines in a number of language pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Word alignment is an important subtask in statistical machine translation which is typically solved in one of two ways. 
The more common approach uses a generative translation model that relates bilingual string pairs using a latent alignment variable to designate which source words (or phrases) generate which target words. The parameters in these models can be learned straightforwardly from parallel sentences using EM, and standard inference techniques can recover most probable alignments (Brown et al., 1993) . This approach is attractive because it only requires parallel training data. An alternative to the generative approach uses a discriminatively trained alignment model to predict word alignments in the parallel corpus. Discriminative models are attractive because they can incorporate arbitrary, overlapping features, meaning that errors observed in the predictions made by the model can be addressed by engineering new and better features. Unfortunately, both approaches are problematic, but in different ways.", "cite_spans": [ { "start": 494, "end": 514, "text": "(Brown et al., 1993)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the case of discriminative alignment models, manual alignment data is required for training, which is problematic for at least three reasons. Manual alignments are notoriously difficult to create and are available only for a handful of language pairs. Second, manual alignments impose a commitment to a particular preprocessing regime; this can be problematic since the optimal segmentation for translation often depends on characteristics of the test set or size of the available training data (Habash and Sadat, 2006) or may be constrained by requirements of other processing components, such parsers. Third, the \"correct\" alignment annotation for different tasks may vary: for example, relatively denser or sparser alignments may be optimal for different approaches to (downstream) translation model induction (Lopez, 2008; Fraser, 2007) .", "cite_spans": [ { "start": 498, "end": 522, "text": "(Habash and Sadat, 2006)", "ref_id": "BIBREF16" }, { "start": 816, "end": 829, "text": "(Lopez, 2008;", "ref_id": "BIBREF26" }, { "start": 830, "end": 843, "text": "Fraser, 2007)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Generative models have a different limitation: the joint probability of a particular setting of the random variables must factorize according to steps in a process that successively \"generates\" the values of the variables. At each step, the probability of some value being generated may depend only on the generation history (or a subset thereof), and the possible values a variable will take must form a locally normalized conditional probability distribution (CPD). While these locally normalized CPDs may be pa-rameterized so as to make use of multiple, overlapping features (Berg-Kirkpatrick et al., 2010) , the requirement that models factorize according to a particular generative process imposes a considerable restriction on the kinds of features that can be incorporated. When Brown et al. (1993) wanted to incorporate a fertility model to create their Models 3 through 5, the generative process used in Models 1 and 2 (where target words were generated one by one from source words independently of each other) had to be abandoned in favor of one in which each source word had to first decide how many targets it would generate. 
1 In this paper, we introduce a discriminatively trained, globally normalized log-linear model of lexical translation that can incorporate arbitrary, overlapping features, and use it to infer word alignments. Our model enjoys the usual benefits of discriminative modeling (e.g., parameter regularization, wellunderstood learning algorithms), but is trained entirely from parallel sentences without gold-standard word alignments. Thus, it addresses the two limitations of current word alignment approaches.", "cite_spans": [ { "start": 578, "end": 609, "text": "(Berg-Kirkpatrick et al., 2010)", "ref_id": "BIBREF1" }, { "start": 786, "end": 805, "text": "Brown et al. (1993)", "ref_id": "BIBREF6" }, { "start": 1139, "end": 1140, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper is structured as follows. We begin by introducing our model ( \u00a72), and follow this with a discussion of tractability, parameter estimation, and inference using finite-state techniques ( \u00a73). We then describe the specific features we used ( \u00a74) and provide experimental evaluation of the model, showing substantial improvements in three diverse language pairs ( \u00a75). We conclude with an analysis of related prior work ( \u00a76) and a general discussion ( \u00a78).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this section, we develop a conditional model p(t | s) that, given a source language sentence s with length m = |s|, assigns probabilities to a target sentence t with length n, where each word t j is an element in the finite target vocabulary \u2126. We begin by using the chain rule to factor this probability into two components, a translation model and a length model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "2" }, { "text": "p(t | s) = p(t, n | s) = p(t | s, n) translation model \u00d7 p(n | s) length model", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "2" }, { "text": "In the translation model, we then assume that each word t j is a translation of one source word, or a special null token. We therefore introduce a latent alignment variable a = a 1 , a 2 , . . . , a n \u2208 [0, m] n , where a j = 0 represents a special null token.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "2" }, { "text": "p(t | s, n) = a p(t, a | s, n)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "2" }, { "text": "So far, our model is identical to that of (Brown et al., 1993) ; however, we part ways here. 
Rather than using the chain rule to further decompose this probability and motivate opportunities to make independence assumptions, we use a log-linear model with parameters \u03b8 \u2208 R k and feature vector function H that maps each tuple a, s, t, n into R k to model p(t, a | s, n) directly:", "cite_spans": [ { "start": 42, "end": 62, "text": "(Brown et al., 1993)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "2" }, { "text": "p_\theta(t, a | s, n) = \frac{\exp\left(\theta^{\top} H(t, a, s, n)\right)}{Z_\theta(s, n)}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "2" }, { "text": ", where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "2" }, { "text": "Z_\theta(s, n) = \sum_{t' \in \Omega^n} \sum_{a'} \exp\left(\theta^{\top} H(t', a', s, n)\right)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "2" }, { "text": "Under some reasonable assumptions (a finite target vocabulary \u2126 and that all \u03b8 k < \u221e), the partition function Z \u03b8 (s, n) will always take on finite values, guaranteeing that p(t, a | s, n) is a proper probability distribution. So far, we have said little about the length model. Since our intent here is to use the model for alignment, where both the target length and target string are observed, it will not be necessary to commit to any length model, even during training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "2" }, { "text": "The model introduced in the previous section is extremely general, and it can incorporate features sensitive to any imaginable aspect of a sentence pair and its alignment, from the linguistically inspired (e.g., an indicator feature for whether both the source and target sentences contain a verb), to the mundane (e.g., the probability of the sentence pair and alignment under Model 1), to the absurd (e.g., an indicator if s and t are palindromes of each other).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tractability, Learning, and Inference", "sec_num": "3" }, { "text": "However, while our model can make use of arbitrary, overlapping features, when designing feature functions it is necessary to balance expressiveness against the computational complexity of the inference algorithms used to reason under models that incorporate these features. 2 To understand this tradeoff, we assume that the random variables being modeled (t, a) are arranged into an undirected graph G such that the vertices represent the variables and the edges are specified so that the feature function H decomposes linearly over all the cliques C in G,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tractability, Learning, and Inference", "sec_num": "3" }, { "text": "H(t, a, s, n) = \sum_{C} h(t_C, a_C, s, n) ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tractability, Learning, and Inference", "sec_num": "3" }, { "text": "where t C and a C are the components associated with subgraph C and h(\u2022) is a local feature vector function. In general, exact inference is exponential in the width of the tree decomposition of G, but, for a fixed width, it can be carried out in polynomial time using dynamic programming. 
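To make the factorization concrete, here is a minimal sketch (our illustration, not the authors' implementation) of scoring one pair t, a with features that decompose over adjacent alignment links; the templates pair, jump, and diag are simplified stand-ins for the feature set of \u00a74.

from math import exp

def local_features(s, t, j, a_prev, a_j):
    # Local feature vector h for one clique: the alignment link at target
    # position j together with the preceding link (a first-order "path" clique).
    m, n = len(s), len(t)
    src = s[a_j - 1] if a_j > 0 else "<NULL>"
    return {
        "pair:%s|%s" % (src, t[j]): 1.0,                   # lexical pair indicator
        "jump:%d" % (a_j - a_prev): 1.0,                   # jump distance/direction
        "diag": abs(a_j / float(m) - (j + 1) / float(n)),  # closeness to the diagonal
    }

def unnormalized_score(theta, s, t, a):
    # exp(theta . H(t, a, s, n)), where H sums the local vectors over positions.
    # Dividing by Z_theta(s, n), the sum of this quantity over all pairs (t', a')
    # in the neighborhood, would give p_theta(t, a | s, n).
    dot, a_prev = 0.0, 0
    for j, a_j in enumerate(a):
        for name, value in local_features(s, t, j, a_prev, a_j).items():
            dot += theta.get(name, 0.0) * value
        a_prev = a_j
    return exp(dot)
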
For example, when the graph has a sequential structure, exact inference can be carried out using the familiar forward-backward algorithm (Lafferty et al., 2001) .", "cite_spans": [ { "start": 420, "end": 443, "text": "(Lafferty et al., 2001)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Tractability, Learning, and Inference", "sec_num": "3" }, { "text": "Although our features look at more structure than this, they are designed to keep treewidth low, meaning exact inference is still possible with dynamic programming. Figure 1 gives a graphical representation of our model as well as the more familiar generative (directed) variants. The edge set in the depicted graph is determined by the features that we use ( \u00a74).", "cite_spans": [], "ref_spans": [ { "start": 165, "end": 173, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Tractability, Learning, and Inference", "sec_num": "3" }, { "text": "To learn the parameters of our model, we select the \u03b8 * that minimizes the \u2113 1 -regularized conditional log-likelihood of a set of training data T :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Learning", "sec_num": "3.1" }, { "text": "L(\theta) = -\sum_{\langle s, t \rangle \in T} \log \sum_{a} p_\theta(t, a | s, n) + \beta \sum_{k} |\theta_k| .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Learning", "sec_num": "3.1" }, { "text": "Because of the \u2113 1 penalty, this objective is not everywhere differentiable, but the gradient of the log-likelihood term with respect to the parameters is as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Learning", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\frac{\partial L}{\partial \theta} = \sum_{\langle s, t \rangle \in T} \mathbb{E}_{p_\theta(a | s, t, n)}[H(\cdot)] - \mathbb{E}_{p_\theta(t, a | s, n)}[H(\cdot)]", "eq_num": "(1)" } ], "section": "Parameter Learning", "sec_num": "3.1" }, { "text": "To optimize L, we employ an online method that approximates \u2113 1 regularization and only depends on the gradient of the unregularized objective (Tsuruoka et al., 2009) . This method is quite attractive since it is only necessary to represent the active features, meaning impractically large feature spaces can be searched provided the regularization strength is sufficiently high. Additionally, not only has this technique been shown to be very effective for optimizing convex objectives, but evidence suggests that the stochasticity of online algorithms often results in better solutions than batch optimizers for nonconvex objectives (Liang and Klein, 2009) . On account of the latent alignment variable in our model, L is non-convex (as is the likelihood objective of the generative variant).", "cite_spans": [ { "start": 141, "end": 164, "text": "(Tsuruoka et al., 2009)", "ref_id": "BIBREF38" }, { "start": 633, "end": 656, "text": "(Liang and Klein, 2009)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Parameter Learning", "sec_num": "3.1" }, { "text": "To choose the regularization strength \u03b2 and the initial learning rate \u03b7 0 , 3 we trained several models on a 10,000-sentence-pair subset of the French-English Hansards, and chose values that minimized the alignment error rate, as evaluated on a 447-sentence set of manually created alignments (Mihalcea and Pedersen, 2003) . 
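As a concrete (if simplified) picture of one online update, the sketch below applies Equation 1 to a single sentence pair and then shrinks weights toward zero; expected_feats is a placeholder for the forward-backward computations of \u00a73.2, and the clipping shown only approximates the cumulative-penalty method of Tsuruoka et al. (2009).

def sgd_l1_step(theta, s, t, n, expected_feats, eta, beta):
    # One stochastic step for a single training pair <s, t>.
    # expected_feats(theta, s, t, n, clamped) is assumed to return a dict of
    # expected feature values: clamped=True  -> expectation under p(a | s, t, n),
    #                          clamped=False -> expectation under p(t, a | s, n).
    observed = expected_feats(theta, s, t, n, clamped=True)
    predicted = expected_feats(theta, s, t, n, clamped=False)
    for name in set(observed) | set(predicted):
        # Gradient of the log-likelihood term (Equation 1), then an ascent step.
        g = observed.get(name, 0.0) - predicted.get(name, 0.0)
        theta[name] = theta.get(name, 0.0) + eta * g
        # Crude l1 treatment: clip the updated weight toward zero by eta * beta.
        w = theta[name]
        if w > 0.0:
            theta[name] = max(0.0, w - eta * beta)
        else:
            theta[name] = min(0.0, w + eta * beta)
        if theta[name] == 0.0:
            del theta[name]  # only active (non-zero) features need to be stored
    return theta
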
For the remainder of the experiments, we use the values we obtained, \u03b2 = 0.4 and \u03b7 0 = 0.3.", "cite_spans": [ { "start": 293, "end": 322, "text": "(Mihalcea and Pedersen, 2003)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Parameter Learning", "sec_num": "3.1" }, { "text": "We now describe how to use weighted finite-state automata (WFSAs) to compute the quantities necessary for training. We begin by describing the ideal WFSA representing the full translation search space, which we call the discriminative neighborhood, and then discuss strategies for reducing its size in the next section, since the full model is prohibitively large, even with small data sets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference with WFSAs", "sec_num": "3.2" }, { "text": "For each training instance s, t , the contribution to the gradient (Equation 1) is the difference in two vectors of expectations. The first term is the expected value of H(\u2022) when observing s, n, t and letting a range over all possible alignments. The second is the expectation of the same function, but observing only s, n and letting t and a take on any possible values (i.e., all possible translations of length n and all their possible alignments to s). To compute these expectations, we can construct a WFSA representing the discriminative neighborhood, the set \u2126 n \u00d7[0, m] n , such that every path from the start state to goal yields a pair t , a with weight", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference with WFSAs", "sec_num": "3.2" }, { "text": "a 1 a 2 a 3 a n t 1 t 2 t 3 t n s n", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference with WFSAs", "sec_num": "3.2" }, { "text": "Fully directed model (Brown et al., 1993; Vogel et al., 1996; Berg-Kirkpatrick et al., 2010) Our model H(t , a, s, n). With our feature set ( \u00a74), number of states in this WFSA is O(m \u00d7 n) since at each target index j, there is a different state for each possible index of the source word translated at position j \u2212 1. 4 Once the WFSA representing the discriminative neighborhood is built, we use the forward-backward algorithm to compute the second expectation term. We then intersect the WFSA with an unweighted FSA representing the target sentence t (because of the restricted structure of our WFSA, this amounts to removing edges), and finally run the forwardbackward algorithm on the resulting WFSA to compute the first expectation.", "cite_spans": [ { "start": 21, "end": 41, "text": "(Brown et al., 1993;", "ref_id": "BIBREF6" }, { "start": 42, "end": 61, "text": "Vogel et al., 1996;", "ref_id": "BIBREF40" }, { "start": 62, "end": 92, "text": "Berg-Kirkpatrick et al., 2010)", "ref_id": "BIBREF1" }, { "start": 319, "end": 320, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Inference with WFSAs", "sec_num": "3.2" }, { "text": "... a 1 a 2 a 3 a n t 1 t 2 t 3 t n s n ... ... ...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference with WFSAs", "sec_num": "3.2" }, { "text": "The WFSA we constructed requires m \u00d7 |\u2126| transitions between all adjacent states, which is impractically large. We can reduce the number of edges by restricting the set of words that each source word can translate into. 
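One simple way to realize this restriction, sketched here under the assumption that an IBM Model 1 translation table is available (the co-occurrence and empirical-Bayes variants discussed next are analogous), is to keep only each source word's most probable translations, e.g., those within a fixed factor of the best one:

def candidate_sets(src_vocab, model1, factor=1e-4):
    # model1 is assumed to map each source word to a dict {target word: p(t | s)}.
    # For each source word, keep the translations whose probability is within
    # `factor` of its most probable translation.
    omega = {}
    for s_word in src_vocab:
        table = model1.get(s_word, {})
        if not table:
            omega[s_word] = set()
            continue
        best = max(table.values())
        omega[s_word] = {t_word for t_word, p in table.items() if p >= factor * best}
    return omega

# The WFSA for a sentence s then only needs transitions labeled with words in
# the union of omega[s_i] over the source words s_i (plus the NULL token's candidates).
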
Thus, the model will not discriminate among all candidate target strings in \u2126 n , but rather in \u2126 n s , where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shrinking the Discriminative Neighborhood", "sec_num": "3.3" }, { "text": "\Omega_{\mathbf{s}} = \bigcup_{i=1}^{m} \Omega_{s_i} , and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shrinking the Discriminative Neighborhood", "sec_num": "3.3" }, { "text": "where \u2126 s i is the set of target words that the source word s i may translate into. 5 We consider four different definitions of \u2126 s : (1) the baseline of the full target vocabulary, (2) the set of all target words that co-occur in sentence pairs containing s, (3) the most probable words under IBM Model 1 that are above a threshold, and (4) the same Model 1, except we add a sparse symmetric Dirichlet prior (\u03b1 = 0.01) on the translation distributions and use the empirical Bayes (EB) method to infer a point estimate, using variational inference. Table 1 compares the average per-sentence time required to run the inference algorithm described above under these four different definitions of \u2126 s on a 10,000 sentence subset of the Hansards French-English corpus that includes manual word alignments. While our constructions guarantee that all references are reachable even in the reduced neighborhoods, not all alignments between source and target are possible. The last column is the oracle AER. Although the EB variant of the Model 1 neighborhood is slightly more expensive for inference than the regular Model 1 neighborhood, we use it because it has a lower oracle AER. 6 During alignment prediction (rather than during training) for a sentence pair s, t , it is possible to further restrict \u2126 s to be just the set of words occurring in t, making extremely fast inference possible (comparable to that of the generative HMM alignment model).", "cite_spans": [ { "start": 64, "end": 65, "text": "5", "ref_id": null }, { "start": 1138, "end": 1139, "text": "6", "ref_id": null } ], "ref_spans": [ { "start": 529, "end": 536, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Shrinking the Discriminative Neighborhood", "sec_num": "3.3" }, { "text": "\u2126 s | time (s) \u2193 | \u03a3 s |\u2126 s | \u2193 | AER \u2193
= \u2126 | 22.4 | 86.0M | 0.0
co-occ. | 8.9 | 0.68M | 0.0
Model 1 | 0.2 | 0.38M | 6.2
EB-Model 1 | 1.0 | 0.15M | 2.9", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shrinking the Discriminative Neighborhood", "sec_num": "3.3" }, { "text": "Feature engineering lets us encode knowledge about what aspects of a translation derivation are useful in predicting whether it is good or not. In this section we discuss the features we used in our model. Many of these were taken from the discriminative alignment modeling literature, but we also note that our features can be much more fine-grained than those used in supervised alignment modeling, since we learn our models from a large amount of parallel data, rather than a small number of manual alignments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4" }, { "text": "Word association features. Word association features are at the heart of all lexical translation models, whether generative or discriminative. In addition to fine-grained boolean indicator features s a j , t j for pair types, we have several orthographic features: identity, prefix identity, and an orthographic similarity measure designed to be informative for predicting the translation of named entities in languages that use similar alphabets. 
7 It has the property that source-target pairs of long words that are similar are given a higher score than word pairs that are short and similar (dissimilar pairs have a score near zero, regardless of length). We also include \"global\" association scores that are precomputed by looking at the full training data: Dice's coefficient (discretized), which we use to measure association strength between pairs of source and target word types across sentence pairs (Dice, 1945) , IBM Model 1 forward and reverse probabilities, and the geometric mean of the Model 1 forward and reverse probabilities. Finally, we also cluster the source and target vocabularies (Och, 1999) and include class pair indicator features, which can learn generalizations that, e.g., \"nouns tend to translate into nouns but not modal verbs.\"", "cite_spans": [ { "start": 909, "end": 921, "text": "(Dice, 1945)", "ref_id": "BIBREF11" }, { "start": 1104, "end": 1115, "text": "(Och, 1999)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4" }, { "text": "Positional features. Following Blunsom and Cohn (2006) , we include features indicating closeness to the alignment matrix diagonal, h(a j , j, m, n) = a j m \u2212 j n . We also conjoin this feature with the source word class type indicator to enable the model to learn that certain word types are more or less likely to favor a location on the diagonal (e.g. Urdu's sentence-final verbs).", "cite_spans": [ { "start": 31, "end": 54, "text": "Blunsom and Cohn (2006)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4" }, { "text": "Source features. Some words are functional elements that fulfill purely grammatical roles and should not be the \"source\" of a translation. For example, Romance languages require a preposition in the formation of what could be a noun-noun compound in English, thus, it may be useful to learn not to translate certain words (i.e. they should not participate in alignment links), or to have a bias to translate others. To capture this intuition we include an indicator feature that fires each time a source vocabulary item (and source word class) participates in an alignment link.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4" }, { "text": "Source path features. One class of particularly useful features assesses the goodness of the alignment 'path' through the source sentence (Vogel et al., 1996) . Although assessing the predicted path requires using nonlocal features, since each a j \u2208 [0, m] and m is relatively small, features can be sensitive to a wider context than is often practical.", "cite_spans": [ { "start": 138, "end": 158, "text": "(Vogel et al., 1996)", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4" }, { "text": "We use many overlapping source path features, some of which are sensitive to the distance and direction of the jump between a j\u22121 and a j , and others which are sensitive to the word pair these two points define, and others that combine all three elements. The features we use include a discretized jump distance, the discretized jump conjoined with an indicator feature for the target length n, the discretized jump feature conjoined with the class of s a j , and the discretized jump feature conjoined with the class of s a j and s a j\u22121 . 
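A minimal sketch of how such jump indicators might be emitted (the feature names are our own, and the bucketing follows the log-scale discretization described in the next sentence):

import math

def discretize_jump(width, base=1.3):
    # Bucket a jump width by rounding log_base(|width|) to the nearest integer,
    # keeping the sign so forward and backward jumps get different buckets.
    if width == 0:
        return 0
    bucket = 1 + int(round(math.log(abs(width), base)))
    return bucket if width > 0 else -bucket

def jump_features(a_prev, a_j, n, src_class, prev_src_class):
    # Overlapping indicator features for the jump from a_{j-1} to a_j;
    # src_class / prev_src_class are the word classes of s_{a_j} and s_{a_{j-1}}.
    d = discretize_jump(a_j - a_prev)
    return {
        "jump:%d" % d: 1.0,
        "jump:%d|len:%d" % (d, n): 1.0,
        "jump:%d|class:%s" % (d, src_class): 1.0,
        "jump:%d|classes:%s,%s" % (d, src_class, prev_src_class): 1.0,
    }
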
To discretize the features, we take a log transform (base 1.3) of the jump width and let an indicator feature fire for the closest integer. In addition to these distance-dependent features, we also include indicator features that fire on bigrams s a j\u22121 , s a j and their word classes. Thus, these features can capture our intuition that, e.g., adjectives are more likely to come before or after a noun in different languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4" }, { "text": "Target string features. Features sensitive to multiple values in the predicted target string or latent alignment variable must be handled carefully for the sake of computational tractability. While features that look at multiple source words can be computed linearly in the number of source words considered (since the source string is always observable), features that look at multiple target words require exponential time and space! 8 However, by grouping the t j 's into coarse equivalence classes and looking at small numbers of variables, it is possible to incorporate such features. We include a feature that fires when a word translates as itself (for example, a name or a date, which occurs in languages that share the same alphabet) in position j, but then is translated again (as something else) in position j \u2212 1 or j + 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4" }, { "text": "We now turn to an empirical assessment of our model. Using various datasets, we evaluate the models' intrinsic quality and their alignments' contribution to a standard machine translation system. We make use of parallel corpora from languages with very different typologies: a small (0.8M words) Chinese-English corpus from the tourism and travel domain (Takezawa et al., 2002) , a corpus of Czech-English news commentary (3.1M words), 9 and an Urdu-English corpus (2M words) provided by NIST for the 2009 Open MT Evaluation. These pairs were selected since each poses different alignment challenges (word order in Chinese and Urdu, morphological complexity in Czech, and a non-alphabetic writing system in Chinese), and confining ourselves to these relatively small corpora reduced the engineering overhead of getting an implementation up and running. Future work will explore the scalability characteristics and limits of the model.", "cite_spans": [ { "start": 378, "end": 401, "text": "(Takezawa et al., 2002)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "For each language pair, we train two log-linear translation models as described above ( \u00a73), once with English as the source and once with English as the target language. For a baseline, we use the Giza++ toolkit (Och and Ney, 2003) to learn Model 4, again in both directions. We symmetrize the alignments from both model types using the grow-diag-final-and heuristic (Koehn et al., 2003) , producing, in total, six alignment sets. 
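For readers unfamiliar with the symmetrization step, the following sketch reconstructs the commonly published grow-diag-final-and procedure of Koehn et al. (2003); it is our paraphrase of the heuristic, and implementations differ in small details, so it should not be read as the exact code used in these experiments.

def grow_diag_final_and(e2f, f2e):
    # e2f and f2e are sets of (i, j) links from the two directional models,
    # with i indexing source words and j indexing target words.
    alignment = set(e2f & f2e)          # start from the intersection
    union = e2f | f2e
    neighbors = [(-1, 0), (0, -1), (1, 0), (0, 1),
                 (-1, -1), (-1, 1), (1, -1), (1, 1)]

    def src_aligned(i):
        return any(x == i for (x, _) in alignment)

    def trg_aligned(j):
        return any(y == j for (_, y) in alignment)

    # grow-diag: repeatedly add union points adjacent to existing points if
    # they attach a source or target word that is still unaligned.
    added = True
    while added:
        added = False
        for (i, j) in sorted(alignment):
            for (di, dj) in neighbors:
                p = (i + di, j + dj)
                if p in union and p not in alignment:
                    if not src_aligned(p[0]) or not trg_aligned(p[1]):
                        alignment.add(p)
                        added = True
    # final-and: add remaining union points whose source AND target words are
    # both still unaligned.
    for (i, j) in sorted(union - alignment):
        if not src_aligned(i) and not trg_aligned(j):
            alignment.add((i, j))
    return alignment
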
We evaluate them both intrinsically and in terms of their performance in a translation system.", "cite_spans": [ { "start": 213, "end": 232, "text": "(Och and Ney, 2003)", "ref_id": "BIBREF29" }, { "start": 368, "end": 388, "text": "(Koehn et al., 2003)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "5.1" }, { "text": "Since we only have gold alignments for Czech-English (Bojar and Prokopov\u00e1, 2006) , we can report alignment error rate (AER; Och and Ney, 2003) only for this pair. However, we offer two further measures that we believe are suggestive and that do not require gold alignments. One is the average alignment \"fertility\" of source words that occur only a single time in the training data (so-called hapax legomena). This assesses the impact of a typical alignment problem observed in generative models trained to maximize likelihood: infrequent source words act as \"garbage collectors\", with many target words aligned to them (the word dislike in the Model 4 alignment in Figure 2 is an example). Thus, we expect lower values of this measure to correlate with better alignments. The second measure is the number of rule types learned in the grammar induction process used for translation that match the translation test sets. 10 While neither a decrease in the average singleton fertility nor an increase in the number of rules induced guarantees better alignment quality, we believe it is reasonable to assume that they are positively correlated.", "cite_spans": [ { "start": 53, "end": 80, "text": "(Bojar and Prokopov\u00e1, 2006)", "ref_id": "BIBREF5" }, { "start": 124, "end": 142, "text": "Och and Ney, 2003)", "ref_id": "BIBREF29" } ], "ref_spans": [ { "start": 666, "end": 674, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Methodology", "sec_num": "5.1" }, { "text": "For the translation experiments in each language pair, we make use of the cdec decoder (Dyer et al., 2010) , inducing a hierarchical phrase based translation grammar from two sets of symmetrized alignments using the method described by Chiang (2007) . Additionally, recent work that has demonstrated that extracting rules from n-best alignments has value (Liu et al., 2009; Venugopal et al., 2008) . We therefore define a third condition where rules are extracted from the corpus under both the Model 4 and discriminative alignments and merged to form a single grammar. We incorporate a 3-gram language model learned from the target side of the training data as well as 50M supplemental words of monolingual training data consisting of sentences randomly sampled from the English Gigaword, version 4. In the small Chinese-English travel domain experiment, we just use the LM estimated from the bitext. The parameters of the translation model were tuned using \"hypergraph\" minimum error rate training (MERT) to maximize BLEU on a held-out development set (Kumar et al., 2009) . Results are reported using case-insensitive BLEU (Papineni et al., 2002) , METEOR 11 (Lavie and Denkowski, 2009) , and TER (Snover et al., 2006) , with the number of references varying by task. Since MERT is a nondeterministic optimization algorithm and results can vary considerably between runs, we follow Clark et al. 
(2011) and report the average score and standard deviation of 5 independent runs, 30 in the case of Chinese-English, since observed variance was higher.", "cite_spans": [ { "start": 87, "end": 106, "text": "(Dyer et al., 2010)", "ref_id": "BIBREF13" }, { "start": 236, "end": 249, "text": "Chiang (2007)", "ref_id": "BIBREF7" }, { "start": 355, "end": 373, "text": "(Liu et al., 2009;", "ref_id": "BIBREF24" }, { "start": 374, "end": 397, "text": "Venugopal et al., 2008)", "ref_id": "BIBREF39" }, { "start": 1054, "end": 1074, "text": "(Kumar et al., 2009)", "ref_id": "BIBREF20" }, { "start": 1126, "end": 1149, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF31" }, { "start": 1162, "end": 1189, "text": "(Lavie and Denkowski, 2009)", "ref_id": "BIBREF22" }, { "start": 1200, "end": 1221, "text": "(Snover et al., 2006)", "ref_id": "BIBREF35" }, { "start": 1385, "end": 1404, "text": "Clark et al. (2011)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "5.1" }, { "text": "Czech-English. Czech-English poses problems for word alignment models since, unlike English, Czech words have a complex inflectional morphology, and the syntax permits relatively free word order. For this language pair, we evaluate alignment error rate using the manual alignment corpus described by Bojar and Prokopov\u00e1 (2006) . Table 2 summarizes the results.", "cite_spans": [ { "start": 300, "end": 326, "text": "Bojar and Prokopov\u00e1 (2006)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 329, "end": 336, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Experimental Results", "sec_num": "5.2" }, { "text": "Chinese-English. Chinese-English poses a different set of problems for alignment. While Chinese words have rather simple morphology, the Chinese writing system renders our orthographic features useless. Despite these challenges, the Chinese re- sults in Table 3 show the same pattern of results as seen in Czech-English. Urdu-English. Urdu-English is a more challenging language pair for word alignment than the previous two we have considered. The parallel data is drawn from numerous genres, and much of it was acquired automatically, making it quite noisy. So our models must not only predict good translations, they must cope with bad ones as well. Second, there has been no previous work on discriminative modeling of Urdu, since, to our knowledge, no manual alignments have been created. Finally, unlike English, Urdu is a head-final language: not only does it have SOV word order, but rather than prepositions, it has post-positions, which follow the nouns they modify, meaning its large scale word order is substantially different from that of English. Table 4 demonstrates the same pattern of improving results with our alignment model. ", "cite_spans": [], "ref_spans": [ { "start": 254, "end": 261, "text": "Table 3", "ref_id": "TABREF2" }, { "start": 1061, "end": 1068, "text": "Table 4", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Experimental Results", "sec_num": "5.2" }, { "text": "The quantitative results presented in this section strongly suggest that our modeling approach produces better alignments. In this section, we try to characterize how the model is doing what it does and what it has learned. Because of the 1 regularization, the number of active (non-zero) features in the inferred models is small, relative to the number of features considered during training. 
The number of active features ranged from about 300k for the small Chinese-English corpus to 800k for Urdu-English, which is less than one tenth of the available features in both cases. In all models, the coarse features (Model 1 probabilities, Dice coefficient, coarse positional features, etc.) typically received weights with large magnitudes, but finer features also played an important role. Language pair differences manifested themselves in many ways in the models that were learned. For example, orthographic features were (unsurprisingly) more valuable in Czech-English, with their largely overlapping alphabets, than in Chinese or Urdu. Examining the more fine-grained features is also illuminating. Table 5 shows the most highly weighted source path bigram features on the three models where English was the source language, and in each, we may observe some interesting characteristics of the target language. Left-most is English-Czech. At first it may be surprising that words like since and that have a highly weighted feature for transitioning to themselves. However, Czech punctuation rules require that relative clauses and subordinating conjunctions be preceded by a comma (which is only optional or outright forbidden in English), therefore our model translates these words twice, once to produce the comma, and a second time to produce the lexical item. The middle column is the English-Chinese model. In the training data, many of the sentences are questions directed to a second person, you. However, Chinese questions do not invert and the subject remains in the canonical first position, thus the transition from the start of sentence to you is highly weighted. Finally, Figure 2 illustrates how Model 4 (left) and our discriminative model (right) align an English-Urdu sentence pair (the English side is being conditioned on in both models). A reflex of Urdu's head-final word order is seen in the list of most highly weighted bigrams, where a path through the English source where verbs that transition to end-of-sentence periods are predictive of good translations into Urdu. ", "cite_spans": [], "ref_spans": [ { "start": 1104, "end": 1111, "text": "Table 5", "ref_id": "TABREF4" }, { "start": 2089, "end": 2095, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "Analysis", "sec_num": "5.3" }, { "text": "The literature contains numerous descriptions of discriminative approaches to word alignment motivated by the desire to be able to incorporate multiple, overlapping knowledge sources (Ayan et al., 2005; Moore, 2005; Taskar et al., 2005; Blunsom and Cohn, 2006; Haghighi et al., 2009; Liu et al., 2010; . This body of work has been an invaluable source of useful features. Several authors have dealt with the problem training log-linear models in an unsu- pervised setting. The contrastive estimation technique proposed by Smith and Eisner (2005) is globally normalized (and thus capable of dealing with arbitrary features), and closely related to the model we developed; however, they do not discuss the problem of word alignment. Berg-Kirkpatrick et al. (2010) learn locally normalized log-linear models in a generative setting. Globally normalized discriminative models with latent variables (Quattoni et al., 2004) have been used for a number of language processing problems, including MT (Dyer and Resnik, 2010; Blunsom et al., 2008a) . 
However, this previous work relied on translation grammars constructed using standard generative word alignment processes.", "cite_spans": [ { "start": 183, "end": 202, "text": "(Ayan et al., 2005;", "ref_id": "BIBREF0" }, { "start": 203, "end": 215, "text": "Moore, 2005;", "ref_id": "BIBREF28" }, { "start": 216, "end": 236, "text": "Taskar et al., 2005;", "ref_id": "BIBREF37" }, { "start": 237, "end": 260, "text": "Blunsom and Cohn, 2006;", "ref_id": "BIBREF2" }, { "start": 261, "end": 283, "text": "Haghighi et al., 2009;", "ref_id": "BIBREF17" }, { "start": 284, "end": 301, "text": "Liu et al., 2010;", "ref_id": "BIBREF25" }, { "start": 522, "end": 545, "text": "Smith and Eisner (2005)", "ref_id": "BIBREF34" }, { "start": 731, "end": 761, "text": "Berg-Kirkpatrick et al. (2010)", "ref_id": "BIBREF1" }, { "start": 894, "end": 917, "text": "(Quattoni et al., 2004)", "ref_id": "BIBREF32" }, { "start": 992, "end": 1015, "text": "(Dyer and Resnik, 2010;", "ref_id": "BIBREF13" }, { "start": 1016, "end": 1038, "text": "Blunsom et al., 2008a)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "While we have demonstrated that this model can be substantially useful, it is limited in some important ways which are being addressed in ongoing work. First, training is expensive, and we are exploring alternatives to the conditional likelihood objective that is currently used, such as contrastive neighborhoods advocated by (Smith and Eisner, 2005) . Additionally, there is much evidence that non-local features like the source word fertility are (cf. IBM Model 3) useful for translation and alignment modeling. To be truly general, it must be possible to utilize such features. Unfortunately, features like this that depend on global properties of the alignment vector, a, make the inference problem NP-hard, and approximations are necessary. Fortunately, there is much recent work on approximate inference techniques for incorporating nonlocal features (Blunsom et al., 2008b; Gimpel and Smith, 2009; Cromi\u00e8res and Kurohashi, 2009; Weiss and Taskar, 2010) , suggesting that this problem too can be solved using established techniques.", "cite_spans": [ { "start": 327, "end": 351, "text": "(Smith and Eisner, 2005)", "ref_id": "BIBREF34" }, { "start": 858, "end": 881, "text": "(Blunsom et al., 2008b;", "ref_id": "BIBREF4" }, { "start": 882, "end": 905, "text": "Gimpel and Smith, 2009;", "ref_id": "BIBREF15" }, { "start": 906, "end": 936, "text": "Cromi\u00e8res and Kurohashi, 2009;", "ref_id": "BIBREF9" }, { "start": 937, "end": 960, "text": "Weiss and Taskar, 2010)", "ref_id": "BIBREF41" } ], "ref_spans": [], "eq_spans": [], "section": "Future Work", "sec_num": "7" }, { "text": "We have introduced a globally normalized, loglinear lexical translation model that can be trained discriminatively using only parallel sentences, which we apply to the problem of word alignment. Our approach addresses two important shortcomings of previous work: (1) that local normalization of generative models constrains the features that can be used, and (2) that previous discriminatively trained word alignment models required supervised alignments. According to a variety of measures in a variety of translation tasks, this model produces superior alignments to generative approaches. 
Furthermore, the features learned by our model reveal interesting characteristics of the language pairs being modeled.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "ber W911NF-10-1-0533; and the National Science Foundation through grants IIS-0844507, IIS-0915187, IIS-0713402, and IIS-0915327 and through TeraGrid resources provided by the Pittsburgh Supercomputing Center under grant number TG-DBS110003. We thank Ond\u0159ej Bojar for providing the Czech-English alignment data, and three anonymous reviewers for their detailed suggestions and comments on an earlier draft of this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "Moore (2005) likewise uses this example to motivate the need for models that support arbitrary, overlapping features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "One way to understand expressiveness is in terms of independence assumptions, of course. Research in graphical models has done much to relate independence assumptions to the complexity of inference algorithms(Koller and Friedman, 2009).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For the other free parameters of the algorithm, we use the default values recommended byTsuruoka et al. (2009).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "States contain a bit more information than the index of the previous source word, for example, there is some additional information about the previous translation decision that is passed forward. However, the concept of splitting states to guarantee distinct paths for different values of non-local features is well understood by NLP and machine translation researchers, and the necessary state structure should be obvious from the feature description.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Future work will explore alternative formulations of the discriminative neighborhood with the goal of further improving inference efficiency.Smith and Eisner (2005) show that good performance on unsupervised syntax learning is possible even when learning from very small discriminative neighborhoods, and we posit that the same holds here.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We included all translations whose probability was within a factor of 10 \u22124 of the highest probability translation.7 In experiments with Urdu, which uses an Arabic-derived script, the orthographic feature was computed after first applying a heuristic Romanization, which made the orthographic forms somewhat comparable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This is of course what makes history-based language model integration an inference challenge in translation.9 http://statmt.org/wmt10", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This measure does not assess whether the rule types are good or bad, but it does suggest that the system's coverage is greater.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Meteor 1.0 with exact, stem, synonymy, and paraphrase modules and HTER parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": 
null } ], "back_matter": [ { "text": "This work was supported in part by the DARPA GALE program; the U. S. Army Research Laboratory and the U. S. Army Research Office under contract/grant num-", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "NeurAlign: combining word alignments using neural networks", "authors": [ { "first": "N", "middle": [ "F" ], "last": "Ayan", "suffix": "" }, { "first": "B", "middle": [ "J" ], "last": "Dorr", "suffix": "" }, { "first": "C", "middle": [], "last": "Monz", "suffix": "" } ], "year": 2005, "venue": "Proc. of HLT-EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "N. F. Ayan, B. J. Dorr, and C. Monz. 2005. NeurAlign: combining word alignments using neural networks. In Proc. of HLT-EMNLP.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Painless unsupervised learning with features", "authors": [ { "first": "T", "middle": [], "last": "Berg-Kirkpatrick", "suffix": "" }, { "first": "A", "middle": [], "last": "Bouchard-C\u00f4t\u00e9", "suffix": "" }, { "first": "J", "middle": [], "last": "Denero", "suffix": "" }, { "first": "D", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2010, "venue": "Proc. of NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Berg-Kirkpatrick, A. Bouchard-C\u00f4t\u00e9, J. DeNero, and D. Klein. 2010. Painless unsupervised learning with features. In Proc. of NAACL.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Discriminative word alignment with conditional random fields", "authors": [ { "first": "P", "middle": [], "last": "Blunsom", "suffix": "" }, { "first": "T", "middle": [], "last": "Cohn", "suffix": "" } ], "year": 2006, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Blunsom and T. Cohn. 2006. Discriminative word alignment with conditional random fields. In Proc. of ACL.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A discriminative latent variable model for statistical machine translation", "authors": [ { "first": "P", "middle": [], "last": "Blunsom", "suffix": "" }, { "first": "T", "middle": [], "last": "Cohn", "suffix": "" }, { "first": "M", "middle": [], "last": "Osborne", "suffix": "" } ], "year": 2008, "venue": "Proc. of ACL-HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Blunsom, T. Cohn, and M. Osborne. 2008a. A dis- criminative latent variable model for statistical ma- chine translation. In Proc. of ACL-HLT.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Probabilistic inference for machine translation", "authors": [ { "first": "P", "middle": [], "last": "Blunsom", "suffix": "" }, { "first": "T", "middle": [], "last": "Cohn", "suffix": "" }, { "first": "M", "middle": [], "last": "Osborne", "suffix": "" } ], "year": 2008, "venue": "Proc. of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Blunsom, T. Cohn, and M. Osborne. 2008b. Proba- bilistic inference for machine translation. In Proc. of EMNLP 2008.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Czech-English word alignment", "authors": [ { "first": "O", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "M", "middle": [], "last": "Prokopov\u00e1", "suffix": "" } ], "year": 2006, "venue": "Proc. 
of LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "O. Bojar and M. Prokopov\u00e1. 2006. Czech-English word alignment. In Proc. of LREC.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "The mathematics of statistical machine translation: parameter estimation", "authors": [ { "first": "P", "middle": [ "F" ], "last": "Brown", "suffix": "" }, { "first": "V", "middle": [ "J" ], "last": "Della Pietra", "suffix": "" }, { "first": "S", "middle": [ "A" ], "last": "Della Pietra", "suffix": "" }, { "first": "R", "middle": [ "L" ], "last": "Mercer", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "263--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. F. Brown, V. J. Della Pietra, S. A. Della Pietra, and R. L. Mercer. 1993. The mathematics of statistical machine translation: parameter estimation. Computa- tional Linguistics, 19(2):263-311.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Hierarchical phrase-based translation", "authors": [ { "first": "D", "middle": [], "last": "Chiang", "suffix": "" } ], "year": 2007, "venue": "Computational Linguistics", "volume": "33", "issue": "2", "pages": "201--228", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201-228.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Better hypothesis testing for statistical machine translation: Controlling for optimizer instability", "authors": [ { "first": "J", "middle": [], "last": "Clark", "suffix": "" }, { "first": "C", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "A", "middle": [], "last": "Lavie", "suffix": "" }, { "first": "N", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2011, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Clark, C. Dyer, A. Lavie, and N. A. Smith. 2011. Bet- ter hypothesis testing for statistical machine transla- tion: Controlling for optimizer instability. In Proc. of ACL.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "An alignment algorithm using belief propagation and a structure-based distortion model", "authors": [ { "first": "F", "middle": [], "last": "Cromi\u00e8res", "suffix": "" }, { "first": "S", "middle": [], "last": "Kurohashi", "suffix": "" } ], "year": 2009, "venue": "Proc. of EACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Cromi\u00e8res and S. Kurohashi. 2009. An alignment al- gorithm using belief propagation and a structure-based distortion model. In Proc. of EACL.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Discriminative modeling of extraction sets for machine translation", "authors": [ { "first": "J", "middle": [], "last": "Denero", "suffix": "" }, { "first": "D", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2010, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. DeNero and D. Klein. 2010. Discriminative modeling of extraction sets for machine translation. In Proc. 
of ACL.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Measures of the amount of ecologic association between species", "authors": [ { "first": "L", "middle": [ "R" ], "last": "Dice", "suffix": "" } ], "year": 1945, "venue": "Journal of Ecology", "volume": "26", "issue": "", "pages": "297--302", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. R. Dice. 1945. Measures of the amount of eco- logic association between species. Journal of Ecology, 26:297-302.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Context-free reordering, finite-state translation", "authors": [ { "first": "C", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "P", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 2010, "venue": "Proc. of NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Dyer and P. Resnik. 2010. Context-free reordering, finite-state translation. In Proc. of NAACL.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "cdec: A decoder, alignment, and learning framework for finite-state and context-free translation models", "authors": [ { "first": "A", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "J", "middle": [], "last": "Lopez", "suffix": "" }, { "first": "J", "middle": [], "last": "Ganitkevitch", "suffix": "" }, { "first": "F", "middle": [], "last": "Weese", "suffix": "" }, { "first": "P", "middle": [], "last": "Ture", "suffix": "" }, { "first": "H", "middle": [], "last": "Blunsom", "suffix": "" }, { "first": "V", "middle": [], "last": "Setiawan", "suffix": "" }, { "first": "P", "middle": [], "last": "Eidelman", "suffix": "" }, { "first": "", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 2010, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dyer, A. Lopez, J. Ganitkevitch, J. Weese, F. Ture, P. Blunsom, H. Setiawan, V. Eidelman, and P. Resnik. 2010. cdec: A decoder, alignment, and learning framework for finite-state and context-free translation models. In Proc. of ACL (demonstration session).", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Improved Word Alignments for Statistical Machine Translation", "authors": [ { "first": "A", "middle": [], "last": "Fraser", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Fraser. 2007. Improved Word Alignments for Statis- tical Machine Translation. Ph.D. thesis, University of Southern California.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Cube summing, approximate inference with non-local features, and dynamic programming without semirings", "authors": [ { "first": "K", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "N", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2009, "venue": "Proc. of EACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Gimpel and N. A. Smith. 2009. Cube summing, ap- proximate inference with non-local features, and dy- namic programming without semirings. In Proc. of EACL.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Arabic preprocessing schemes for statistical machine translation", "authors": [ { "first": "N", "middle": [], "last": "Habash", "suffix": "" }, { "first": "F", "middle": [], "last": "Sadat", "suffix": "" } ], "year": 2006, "venue": "Proc. 
of NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "N. Habash and F. Sadat. 2006. Arabic preprocessing schemes for statistical machine translation. In Proc. of NAACL, New York.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Better word alignments with supervised ITG models", "authors": [ { "first": "A", "middle": [], "last": "Haghighi", "suffix": "" }, { "first": "J", "middle": [], "last": "Blitzer", "suffix": "" }, { "first": "J", "middle": [], "last": "Denero", "suffix": "" }, { "first": "D", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2009, "venue": "Proc. of ACL-IJCNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Haghighi, J. Blitzer, J. DeNero, and D. Klein. 2009. Better word alignments with supervised ITG models. In Proc. of ACL-IJCNLP.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Statistical phrase-based translation", "authors": [ { "first": "P", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "F", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "D", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2003, "venue": "Proc. of NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Koehn, F. J. Och, and D. Marcu. 2003. Statistical phrase-based translation. In Proc. of NAACL.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Probabilistic Graphical Models: Principles and Techniques", "authors": [ { "first": "D", "middle": [], "last": "Koller", "suffix": "" }, { "first": "N", "middle": [], "last": "Friedman", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Koller and N. Friedman. 2009. Probabilistic Graphi- cal Models: Principles and Techniques. MIT Press.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Efficient minimum error rate training and minimum bayesrisk decoding for translation hypergraphs and lattices", "authors": [ { "first": "S", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "W", "middle": [], "last": "Macherey", "suffix": "" }, { "first": "C", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "F", "middle": [], "last": "Och", "suffix": "" } ], "year": 2009, "venue": "Proc. of ACL-IJCNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Kumar, W. Macherey, C. Dyer, and F. Och. 2009. Effi- cient minimum error rate training and minimum bayes- risk decoding for translation hypergraphs and lattices. In Proc. of ACL-IJCNLP.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", "authors": [ { "first": "J", "middle": [], "last": "Lafferty", "suffix": "" }, { "first": "A", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "F", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2001, "venue": "Proc. of ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Lafferty, A. McCallum, and F. Pereira. 2001. Con- ditional random fields: Probabilistic models for seg- menting and labeling sequence data. In Proc. 
of ICML.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "The METEOR metric for automatic evaluation of machine translation", "authors": [ { "first": "A", "middle": [], "last": "Lavie", "suffix": "" }, { "first": "M", "middle": [], "last": "Denkowski", "suffix": "" } ], "year": 2009, "venue": "Machine Translation Journal", "volume": "23", "issue": "2-3", "pages": "105--115", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Lavie and M. Denkowski. 2009. The METEOR metric for automatic evaluation of machine translation. Ma- chine Translation Journal, 23(2-3):105-115.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Online EM for unsupervised models", "authors": [ { "first": "P", "middle": [], "last": "Liang", "suffix": "" }, { "first": "D", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2009, "venue": "Proc. of NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Liang and D. Klein. 2009. Online EM for unsuper- vised models. In Proc. of NAACL.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Weighted alignment matrices for statistical machine translation", "authors": [ { "first": "Y", "middle": [], "last": "Liu", "suffix": "" }, { "first": "T", "middle": [], "last": "Xia", "suffix": "" }, { "first": "X", "middle": [], "last": "Xiao", "suffix": "" }, { "first": "Q", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2009, "venue": "Proc. of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Liu, T. Xia, X. Xiao, and Q. Liu. 2009. Weighted alignment matrices for statistical machine translation. In Proc. of EMNLP.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Discriminative word alignment by linear modeling", "authors": [ { "first": "Q", "middle": [], "last": "Liu", "suffix": "" }, { "first": "S", "middle": [], "last": "Liu", "suffix": "" }, { "first": "", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2010, "venue": "Computational Linguistics", "volume": "36", "issue": "3", "pages": "303--339", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liu, Q. Liu, and S. Lin. 2010. Discriminative word alignment by linear modeling. Computational Lin- guistics, 36(3):303-339.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Tera-scale translation models via pattern matching", "authors": [ { "first": "A", "middle": [], "last": "Lopez", "suffix": "" } ], "year": 2008, "venue": "Proc. of COLING", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Lopez. 2008. Tera-scale translation models via pat- tern matching. In Proc. of COLING.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "An evaluation exercise for word alignment", "authors": [ { "first": "R", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "T", "middle": [], "last": "Pedersen", "suffix": "" } ], "year": 2003, "venue": "Proc. of the Workshop on Building and Using Parallel Texts", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Mihalcea and T. Pedersen. 2003. An evaluation exer- cise for word alignment. In Proc. of the Workshop on Building and Using Parallel Texts.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "A discriminative framework for bilingual word alignment", "authors": [ { "first": "R", "middle": [ "C" ], "last": "Moore", "suffix": "" } ], "year": 2005, "venue": "Proc. 
of HLT-EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. C. Moore. 2005. A discriminative framework for bilingual word alignment. In Proc. of HLT-EMNLP.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "A systematic comparison of various statistical alignment models", "authors": [ { "first": "F", "middle": [], "last": "Och", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics", "volume": "29", "issue": "1", "pages": "19--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Och and H. Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19-51.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "An efficient method for determining bilingual word classes", "authors": [ { "first": "F", "middle": [], "last": "Och", "suffix": "" } ], "year": 1999, "venue": "Proc. of EACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Och. 1999. An efficient method for determining bilin- gual word classes. In Proc. of EACL.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "BLEU: a method for automatic evaluation of machine translation", "authors": [ { "first": "K", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "S", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "T", "middle": [], "last": "Ward", "suffix": "" }, { "first": "W.-J", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proc. of ACL.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Conditional random fields for object recognition", "authors": [ { "first": "A", "middle": [], "last": "Quattoni", "suffix": "" }, { "first": "M", "middle": [], "last": "Collins", "suffix": "" }, { "first": "T", "middle": [], "last": "Darrell", "suffix": "" } ], "year": 2004, "venue": "NIPS 17", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Quattoni, M. Collins, and T. Darrell. 2004. Condi- tional random fields for object recognition. In NIPS 17.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Discriminative word alignment with a function word reordering model", "authors": [ { "first": "H", "middle": [], "last": "Setiawan", "suffix": "" }, { "first": "C", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "P", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 2010, "venue": "Proc. of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Setiawan, C. Dyer, and P. Resnik. 2010. Discrimina- tive word alignment with a function word reordering model. In Proc. of EMNLP.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Contrastive estimation: training log-linear models on unlabeled data", "authors": [ { "first": "N", "middle": [ "A" ], "last": "Smith", "suffix": "" }, { "first": "J", "middle": [], "last": "Eisner", "suffix": "" } ], "year": 2005, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "N. A. Smith and J. Eisner. 2005. Contrastive estimation: training log-linear models on unlabeled data. In Proc. 
of ACL.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "A study of translation edit rate with targeted human annotation", "authors": [ { "first": "M", "middle": [], "last": "Snover", "suffix": "" }, { "first": "B", "middle": [ "J" ], "last": "Dorr", "suffix": "" }, { "first": "R", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "L", "middle": [], "last": "Micciulla", "suffix": "" }, { "first": "J", "middle": [], "last": "Makhoul", "suffix": "" } ], "year": 2006, "venue": "Proc. of AMTA", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Snover, B. J. Dorr, R. Schwartz, L. Micciulla, and J. Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proc. of AMTA.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Toward a broad-coverage bilingual corpus for speech translation of travel conversations in the real world", "authors": [ { "first": "T", "middle": [], "last": "Takezawa", "suffix": "" }, { "first": "E", "middle": [], "last": "Sumita", "suffix": "" }, { "first": "F", "middle": [], "last": "Sugaya", "suffix": "" }, { "first": "H", "middle": [], "last": "Yamamoto", "suffix": "" }, { "first": "S", "middle": [], "last": "Yamamoto", "suffix": "" } ], "year": 2002, "venue": "Proc. of LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Takezawa, E. Sumita, F. Sugaya, H. Yamamoto, and S. Yamamoto. 2002. Toward a broad-coverage bilin- gual corpus for speech translation of travel conversa- tions in the real world. In Proc. of LREC.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "A discriminative matching approach to word alignment", "authors": [ { "first": "B", "middle": [], "last": "Taskar", "suffix": "" }, { "first": "S", "middle": [], "last": "Lacoste-Julien", "suffix": "" }, { "first": "D", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2005, "venue": "Proc. of HLT-EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Taskar, S. Lacoste-Julien, and D. Klein. 2005. A dis- criminative matching approach to word alignment. In Proc. of HLT-EMNLP.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Stochastic gradient descent training for l 1 -regularized loglinear models with cumulative penalty", "authors": [ { "first": "Y", "middle": [], "last": "Tsuruoka", "suffix": "" }, { "first": "J", "middle": [], "last": "Tsujii", "suffix": "" }, { "first": "S", "middle": [], "last": "Ananiadou", "suffix": "" } ], "year": 2009, "venue": "Proc. of ACL-IJCNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Tsuruoka, J. Tsujii, and S. Ananiadou. 2009. Stochas- tic gradient descent training for l 1 -regularized log- linear models with cumulative penalty. In Proc. of ACL-IJCNLP.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Wider pipelines: n-best alignments and parses in MT training", "authors": [ { "first": "A", "middle": [], "last": "Venugopal", "suffix": "" }, { "first": "A", "middle": [], "last": "Zollmann", "suffix": "" }, { "first": "N", "middle": [ "A" ], "last": "Smith", "suffix": "" }, { "first": "S", "middle": [], "last": "Vogel", "suffix": "" } ], "year": 2008, "venue": "Proc. of AMTA", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Venugopal, A. Zollmann, N. A. Smith, and S. Vogel. 2008. 
Wider pipelines: n-best alignments and parses in MT training. In Proc. of AMTA.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "HMM-based word alignment in statistical translation", "authors": [ { "first": "H", "middle": [], "last": "Vogel", "suffix": "" }, { "first": "C", "middle": [], "last": "Ney", "suffix": "" }, { "first": "", "middle": [], "last": "Tillmann", "suffix": "" } ], "year": 1996, "venue": "Proc. of COLING", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vogel, H. Ney, and C. Tillmann. 1996. HMM-based word alignment in statistical translation. In Proc. of COLING.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Structured prediction cascades", "authors": [ { "first": "D", "middle": [], "last": "Weiss", "suffix": "" }, { "first": "B", "middle": [], "last": "Taskar", "suffix": "" } ], "year": 2010, "venue": "Proc. of AISTATS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Weiss and B. Taskar. 2010. Structured prediction cas- cades. In Proc. of AISTATS.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "num": null, "text": "A graphical representation of a conventional generative lexical translation model (left) and our model with an undirected translation model. For clarity, the observed node s (representing the full source sentence) is drawn in multiple locations. The dashed lines indicate a dependency on a deterministic mapping of t j (not its complete value)." }, "FIGREF2": { "type_str": "figure", "uris": null, "num": null, "text": "Example English-Urdu alignment under IBM Model 4 (left) and our discriminative model (right). Model 4 displays two characteristic errors: garbage collection and an overly-strong monotonicity bias. Whereas our model does not exhibit these problems, and in fact, makes no mistakes in the alignment." }, "TABREF0": { "content": "", "html": null, "text": "Comparison of alternative definitions \u2126 s (arrows indicate whether higher or lower is better).", "num": null, "type_str": "table" }, "TABREF1": { "content": "
Alignment | AER \u2193 | \u03c6 sing. \u2193 | # rules \u2191
Model 4 e|f | 24.8 | 4.1 | -
Model 4 f|e | 33.6 | 6.6 | -
Model 4 sym. | 23.4 | 2.7 | 993,953
Our model e|f | 21.9 | 2.3 | -
Our model f|e | 29.3 | 3.8 | -
Our model sym. | 20.5 | 1.6 | 1,146,677
Alignment | BLEU \u2191 | METEOR \u2191 | TER \u2193
Model 4 | 16.3\u00b10.2 | 46.1\u00b10.1 | 67.4\u00b10.3
Our model | 16.5\u00b10.1 | 46.8\u00b10.1 | 67.0\u00b10.2
Both | 17.4\u00b10.1 | 47.7\u00b10.1 | 66.3\u00b10.5
", "html": null, "text": "Czech-English experimental results.\u03c6 sing. is the average fertility of singleton source words. \u2193\u03c6 sing. \u2193 # rules \u2191", "num": null, "type_str": "table" }, "TABREF2": { "content": "
Alignment | \u03c6 sing. \u2193 | # rules \u2191
Model 4 e|f | 4.4 | -
Model 4 f|e | 3.9 | -
Model 4 sym. | 3.6 | 52,323
Our model e|f | 3.5 | -
Our model f|e | 2.6 | -
Our model sym. | 3.1 | 54,077
Alignment | BLEU \u2191 | METEOR \u2191 | TER \u2193
Model 4 | 56.5\u00b10.3 | 73.0\u00b10.4 | 29.1\u00b10.3
Our model | 57.2\u00b10.8 | 73.8\u00b10.4 | 29.3\u00b11.1
Both | 59.1\u00b10.6 | 74.8\u00b10.7 | 27.6\u00b10.5
", "html": null, "text": "Chinese-English experimental results.", "num": null, "type_str": "table" }, "TABREF3": { "content": "
Alignment | \u03c6 sing. \u2193 | # rules \u2191
Model 4 e|f | 6.5 | -
Model 4 f|e | 8.0 | -
Model 4 sym. | 3.2 | 244,570
Our model e|f | 4.8 | -
Our model f|e | 8.3 | -
Our model sym. | 2.3 | 260,953
Alignment | BLEU \u2191 | METEOR \u2191 | TER \u2193
Model 4 | 23.3\u00b10.2 | 49.3\u00b10.2 | 68.8\u00b10.8
Our model | 23.4\u00b10.2 | 49.7\u00b10.1 | 67.7\u00b10.2
Both | 24.1\u00b10.2 | 50.6\u00b10.1 | 66.8\u00b10.5
", "html": null, "text": "Urdu-English experimental results.", "num": null, "type_str": "table" }, "TABREF4": { "content": "
Bigram (English-Czech) | \u03b8 k | Bigram (English-Chinese) | \u03b8 k | Bigram (English-Urdu) | \u03b8 k
. </s> | 3.08 | . </s> | 2.67 | . </s> | 1.87
like like | 1.19 | ? ? | 2.25 | <s> this | 1.24
one of | 1.06 | <s> please | 2.01 | will . | 1.17
\" . | 0.95 | much ? | 1.61 | are . | 1.16
that that | 0.92 | <s> if | 1.58 | is . | 1.09
is but | 0.92 | thank you | 1.47 | is that | 1.00
since since | 0.84 | <s> sorry | 1.46 | have . | 0.97
<s> when | 0.83 | <s> you | 1.45 | has . | 0.96
, how | 0.83 | please like | 1.24 | was . | 0.91
, not | 0.83 | <s> this | 1.19 | will </s> | 0.88
", "html": null, "text": "The most highly weighted source path bigram features in the English-Czech, -Chinese, and -Urdu models.", "num": null, "type_str": "table" } } } }